
Re: [Xen-devel] [PATCH] docs/design: introduce HVMMEM_ioreq_serverX types



Thanks, Paul.

On 2/26/2016 5:21 PM, Paul Durrant wrote:
-----Original Message-----
From: Yu, Zhang [mailto:yu.c.zhang@xxxxxxxxxxxxxxx]
Sent: 26 February 2016 06:59
To: Paul Durrant; xen-devel@xxxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] [PATCH] docs/design: introduce
HVMMEM_ioreq_serverX types

Hi Paul,

    Thanks a lot for your help on this! And below are my questions.

On 2/25/2016 11:49 PM, Paul Durrant wrote:
This patch adds a new 'designs' subdirectory under docs as a repository
for this and future design proposals.

Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
---

For convenience this document can also be viewed in PDF at:

http://xenbits.xen.org/people/pauldu/hvmmem_ioreq_server.pdf
---
   docs/designs/hvmmem_ioreq_server.md | 63 +++++++++++++++++++++++++++++++++++++
   1 file changed, 63 insertions(+)
   create mode 100755 docs/designs/hvmmem_ioreq_server.md

diff --git a/docs/designs/hvmmem_ioreq_server.md b/docs/designs/hvmmem_ioreq_server.md
new file mode 100755
index 0000000..47fa715
--- /dev/null
+++ b/docs/designs/hvmmem_ioreq_server.md
@@ -0,0 +1,63 @@
+HVMMEM\_ioreq\_serverX
+----------------------
+
+Background
+==========
+
+The concept of the IOREQ server was introduced to allow multiple distinct
+device emulators to be attached to a single VM. The XenGT project uses an
+IOREQ server to provide mediated pass-through of Intel GPUs to guests and,
+as part of the mediation, needs to intercept accesses to GPU page-tables
+(or GTTs) that reside in guest RAM.
+
+The current implementation of this sets the type of GTT pages to type
+HVMMEM\_mmio\_write\_dm, which causes Xen to emulate writes to such pages,
+and then maps the guest physical addresses of those pages to the XenGT
+IOREQ server using the HVMOP\_map\_io\_range\_to\_ioreq\_server hypercall.
+However, because the number of GTTs is potentially large, using this
+approach does not scale well.
+
+Proposal
+========
+
+Because the number of spare types available in the P2M type-space is
+currently very limited it is proposed that HVMMEM\_mmio\_write\_dm be
+replaced by a single new type HVMMEM\_ioreq\_server. In future, if the
+P2M type-space is increased, this can be renamed to HVMMEM\_ioreq\_server0
+and new HVMMEM\_ioreq\_server1, HVMMEM\_ioreq\_server2, etc. types
+can be added.
+
+Accesses to a page of type HVMMEM\_ioreq\_serverX should be the same as
+HVMMEM\_ram\_rw until the type is _claimed_ by an IOREQ server. Furthermore

Sorry, do you mean even when a gfn is set to the HVMMEM_ioreq_serverX type,
its access rights in the P2M still remain unchanged? So the new hypercall
pair, HVMOP_[un]map_mem_type_to_ioreq_server, is also responsible for the
PTE updates on the access bits?

Yes, the access bits would not change *on existing* pages of this type until 
the type is claimed.
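
For reference, here is a minimal sketch of how the argument structure for the
proposed HVMOP_map_mem_type_to_ioreq_server 'claim' hypercall might look; the
structure layout, field names and flags below are assumptions for discussion,
not something defined by this thread:

    /* Hypothetical public interface for the proposed 'claim' hypercall.
     * Layout and field names are illustrative only. */
    struct xen_hvm_map_mem_type_to_ioreq_server {
        domid_t    domid;  /* domain to be serviced */
        ioservid_t id;     /* IOREQ server claiming the type */
        uint16_t   type;   /* P2M type being claimed, e.g. HVMMEM_ioreq_server */
        uint32_t   flags;  /* which accesses (read, write or both) to forward */
    };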


If it is true, I'm afraid this would be time consuming, because the
map/unmap will have to traverse all the P2M structures to detect the PTEs
with the HVMMEM_ioreq_serverX flag set. Yet in XenGT, setting this flag is
triggered dynamically with the construction/destruction of shadow PPGTTs.
And I'm not sure how big the performance penalty would be, with the
frequent EPT table walks and EPT TLB flushes.

I don't see the concern. I am assuming XenGT would claim the type at start of
day and then just do HVMOP_set_mem_type operations on PPGTT pages as necessary,
which would not require a p2m traversal (since that hypercall takes the gfn as
an arg), and the EPT flush requirements are no different from setting the
mmio_write_dm type in the current implementation.
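
To make that flow concrete, here is a hedged sketch of the device-model side.
It assumes the existing libxenctrl call xc_hvm_set_mem_type() simply accepts
the proposed HVMMEM_ioreq_server type, and it uses a hypothetical
xc_hvm_map_mem_type_to_ioreq_server() wrapper for the new claim hypercall;
neither the type nor the wrapper exists yet:

    #include <xenctrl.h>

    /* Hypothetical wrapper for the proposed 'claim' hypercall; not in libxenctrl. */
    int xc_hvm_map_mem_type_to_ioreq_server(xc_interface *xch, domid_t dom,
                                            ioservid_t id, uint16_t type);

    /* Proposed new P2M type; the numeric value here is only a placeholder. */
    #define HVMMEM_ioreq_server 6

    /* One-off step at start of day: claim the whole type for this IOREQ
     * server.  Until the claim is made, pages of the type behave like
     * HVMMEM_ram_rw. */
    static int xengt_claim_type(xc_interface *xch, domid_t dom, ioservid_t id)
    {
        return xc_hvm_map_mem_type_to_ioreq_server(xch, dom, id,
                                                   HVMMEM_ioreq_server);
    }

    /* Per-page step when a shadow PPGTT page is constructed: just a type
     * change on that gfn, so no p2m traversal and no per-page
     * HVMOP_map_io_range_to_ioreq_server call is needed. */
    static int xengt_track_gtt_page(xc_interface *xch, domid_t dom, uint64_t gfn)
    {
        return xc_hvm_set_mem_type(xch, dom, HVMMEM_ioreq_server, gfn, 1);
    }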

So this looks like my second interpretation below, with steps 1> and 2>
changed?



If it is not, I guess we can (e.g. when trying to write-protect a gfn):
1> use HVMOP_set_mem_type to set the HVMMEM_ioreq_serverX flag, which
for the write-protected case works the same as HVMMEM_mmio_write_dm;
if successful, accesses to a page of type HVMMEM_ioreq_serverX should
trigger the ioreq server selection path, but will be discarded.
2> after HVMOP_map_mem_type_to_ioreq_server is called, all accesses to
pages of type HVMMEM_ioreq_serverX would be forwarded to the specified
ioreq server.

As to the XenGT backend device model, we only need to use the map hypercall
once, when trying to construct the first shadow PPGTT, and use the unmap
hypercall when a VM is going to be torn down.


Yes, that's correct. A single claim at start of day and a single 'unclaim' at
end of day. In between, the current HVMOP_set_mem_type is used pretty much as
before (just with the new type name) and there is no longer a need for the
hypercall to map the range to the ioreq server.
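
The matching end-of-day step would then be, again as a hedged sketch with a
hypothetical wrapper for the proposed 'unclaim' hypercall and the same
placeholder HVMMEM_ioreq_server value as above:

    #include <xenctrl.h>

    /* Hypothetical wrapper for the proposed 'unclaim' hypercall; not in libxenctrl. */
    int xc_hvm_unmap_mem_type_from_ioreq_server(xc_interface *xch, domid_t dom,
                                                ioservid_t id, uint16_t type);

    #define HVMMEM_ioreq_server 6   /* placeholder value, as above */

    static void xengt_end_of_day(xc_interface *xch, domid_t dom, ioservid_t id,
                                 const uint64_t *gtt_gfns, unsigned int nr)
    {
        unsigned int i;

        /* Optionally hand the tracked GTT pages back to plain RAM; once the
         * type is unclaimed they should behave like HVMMEM_ram_rw anyway. */
        for (i = 0; i < nr; i++)
            xc_hvm_set_mem_type(xch, dom, HVMMEM_ram_rw, gtt_gfns[i], 1);

        /* Single 'unclaim' when the VM is torn down, matching the single
         * claim made at start of day. */
        xc_hvm_unmap_mem_type_from_ioreq_server(xch, dom, id,
                                                HVMMEM_ioreq_server);
    }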

   Paul


B.R.
Yu

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

