
Re: [Xen-devel] [PATCH v3 1/2] Differentiate IO/mem resources tracked by ioreq server





On 8/11/2015 4:25 PM, Paul Durrant wrote:
-----Original Message-----
From: Yu, Zhang [mailto:yu.c.zhang@xxxxxxxxxxxxxxx]
Sent: 11 August 2015 08:57
To: Paul Durrant; Andrew Cooper; Wei Liu
Cc: xen-devel@xxxxxxxxxxxxx; Ian Jackson; Stefano Stabellini; Ian Campbell;
Keir (Xen.org); jbeulich@xxxxxxxx; Kevin Tian; zhiyuan.lv@xxxxxxxxx
Subject: Re: [PATCH v3 1/2] Differentiate IO/mem resources tracked by ioreq server



On 8/10/2015 6:57 PM, Paul Durrant wrote:
-----Original Message-----
From: Andrew Cooper [mailto:andrew.cooper3@xxxxxxxxxx]
Sent: 10 August 2015 11:56
To: Paul Durrant; Wei Liu; Yu Zhang
Cc: xen-devel@xxxxxxxxxxxxx; Ian Jackson; Stefano Stabellini; Ian Campbell;
Keir (Xen.org); jbeulich@xxxxxxxx; Kevin Tian; zhiyuan.lv@xxxxxxxxx
Subject: Re: [PATCH v3 1/2] Differentiate IO/mem resources tracked by ioreq server

On 10/08/15 09:33, Paul Durrant wrote:
-----Original Message-----
From: Wei Liu [mailto:wei.liu2@xxxxxxxxxx]
Sent: 10 August 2015 09:26
To: Yu Zhang
Cc: xen-devel@xxxxxxxxxxxxx; Paul Durrant; Ian Jackson; Stefano Stabellini;
Ian Campbell; Wei Liu; Keir (Xen.org); jbeulich@xxxxxxxx; Andrew Cooper;
Kevin Tian; zhiyuan.lv@xxxxxxxxx
Subject: Re: [PATCH v3 1/2] Differentiate IO/mem resources tracked by ioreq server

On Mon, Aug 10, 2015 at 11:33:40AM +0800, Yu Zhang wrote:
Currently in the ioreq server, guest write-protected RAM pages are
tracked in the same rangeset as device MMIO resources. Yet unlike
device MMIO, which can come in large chunks, the guest write-
protected pages may be discrete ranges of 4K bytes each.

This patch uses a separate rangeset for the guest RAM pages,
and defines a new ioreq type, IOREQ_TYPE_MEM.

Note: previously, a new hypercall or subop was suggested to map
write-protected pages into the ioreq server. However, it turned out
the handler of this new hypercall would be almost the same as the
existing pair - HVMOP_[un]map_io_range_to_ioreq_server - and there
is already a type parameter in this hypercall. So no new hypercall
is defined; only a new type is introduced.

Signed-off-by: Yu Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>
---
   tools/libxc/include/xenctrl.h    | 39 +++++++++++++++++++++++---
   tools/libxc/xc_domain.c          | 59 ++++++++++++++++++++++++++++++++++++++--

FWIW the hypercall wrappers look correct to me.
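(For reference, a minimal sketch of the shape such a wrapper takes,
modelled on the existing xc_hvm_map_io_range_to_ioreq_server() in the
libxc of this era; the _mem name and exact layout here are assumptions,
not the patch itself:)

/* Hypothetical sketch: register a range of write-protected guest RAM
 * with an ioreq server by reusing the existing map hypercall with the
 * new range type this patch introduces. */
int xc_hvm_map_mem_range_to_ioreq_server(xc_interface *xch,
                                         domid_t domid,
                                         ioservid_t id,
                                         uint64_t start,
                                         uint64_t end)
{
    DECLARE_HYPERCALL;
    DECLARE_HYPERCALL_BUFFER(xen_hvm_io_range_t, arg);
    int rc;

    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
    if ( arg == NULL )
        return -1;

    hypercall.op     = __HYPERVISOR_hvm_op;
    hypercall.arg[0] = HVMOP_map_io_range_to_ioreq_server;
    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);

    arg->domid = domid;
    arg->id    = id;
    arg->type  = HVMOP_IO_RANGE_MEMORY; /* the new type 3 in this patch */
    arg->start = start;
    arg->end   = end;

    rc = do_xen_hypercall(xch, &hypercall);

    xc_hypercall_buffer_free(xch, arg);

    return rc;
}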

diff --git a/xen/include/public/hvm/hvm_op.h
b/xen/include/public/hvm/hvm_op.h
index 014546a..9106cb9 100644
--- a/xen/include/public/hvm/hvm_op.h
+++ b/xen/include/public/hvm/hvm_op.h
@@ -329,8 +329,9 @@ struct xen_hvm_io_range {
     ioservid_t id;               /* IN - server id */
     uint32_t type;               /* IN - type of range */
 # define HVMOP_IO_RANGE_PORT   0 /* I/O port range */
-# define HVMOP_IO_RANGE_MEMORY 1 /* MMIO range */
+# define HVMOP_IO_RANGE_MMIO   1 /* MMIO range */
 # define HVMOP_IO_RANGE_PCI    2 /* PCI segment/bus/dev/func range */
+# define HVMOP_IO_RANGE_MEMORY 3 /* MEMORY range */
This looks problematic. Maybe you can get away with this because this
is a toolstack-only interface?

Indeed, the old name is a bit problematic. Presumably re-use like this
would require an interface version change and some if-defery.

I assume it is an interface used by QEMU, so this patch in its current
state will break things.

If QEMU were re-built against the updated header, yes.
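(For illustration, one hypothetical shape of the interface-version
guard mentioned above, in the __XEN_INTERFACE_VERSION__ style Xen's
public headers already use; the version number 0x00040600 is an
assumption:)

# define HVMOP_IO_RANGE_PORT   0 /* I/O port range */
# define HVMOP_IO_RANGE_MMIO   1 /* MMIO range */
# define HVMOP_IO_RANGE_PCI    2 /* PCI segment/bus/dev/func range */
#if __XEN_INTERFACE_VERSION__ >= 0x00040600
# define HVMOP_IO_RANGE_MEMORY 3 /* guest RAM range */
#else
/* Callers built against the previous header expect
 * HVMOP_IO_RANGE_MEMORY to mean MMIO. */
# define HVMOP_IO_RANGE_MEMORY HVMOP_IO_RANGE_MMIO
#endif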

Thank you, Andrew & Paul. :)
Are you referring to the xen_map/unmap_memory_section routines in
QEMU?
I noticed they are called by xen_region_add/del in QEMU. And I wonder:
are these two routines used to track a memory region or an MMIO
region? If the region to be added is MMIO, I guess the new interface
should be fine, but if it is a memory region being added to the ioreq
server, maybe a patch in QEMU is necessary (e.g. some if-defery for
the new interface version you suggested)?


I was forgetting that QEMU uses libxenctrl, so your change to
xc_hvm_map_io_range_to_ioreq_server() means everything will continue to
work as before. There is still the (admittedly academic) problem of some
unknown emulator out there that rolls its own hypercalls and, after
blindly updating to the new version of hvm_op.h, suddenly starts
registering memory ranges rather than MMIO ranges. I would leave the
existing definitions as-is and come up with a new name.
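(Concretely: QEMU's call sites go through the libxenctrl wrapper rather
than issuing the hypercall themselves, so a rebuilt library absorbs any
renumbering. A sketch using the wrapper signature of the time; the
variable names are assumptions:)

/* QEMU registers an MMIO range via libxenctrl.  A rebuilt library
 * translates is_mmio == 1 to whatever constant the header now uses
 * for MMIO, so this call site needs no change. */
xc_hvm_map_io_range_to_ioreq_server(xen_xc, xen_domid, ioservid,
                                    1 /* is_mmio */,
                                    start_addr, end_addr);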

So, how about we keep the HVMOP_IO_RANGE_MEMORY name for MMIO, and use
a new one, say HVMOP_IO_RANGE_WP_MEM, for write-protected RAM only? :)
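(For concreteness, the define layout that naming would give, sketched
against the hunk quoted above:)

# define HVMOP_IO_RANGE_PORT   0 /* I/O port range */
# define HVMOP_IO_RANGE_MEMORY 1 /* MMIO range (name unchanged) */
# define HVMOP_IO_RANGE_PCI    2 /* PCI segment/bus/dev/func range */
# define HVMOP_IO_RANGE_WP_MEM 3 /* write-protected guest RAM range */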

Thanks
Yu


   Paul

Thanks
Yu

    Paul


~Andrew


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel





 

