
Re: [Xen-devel] [PATCH 0 of 3] Update paging/sharing/access interfaces v2



At 01:08 -0500 on 09 Feb (1328749705), Andres Lagar-Cavilla wrote:
> (Was: switch from domctl to memops)
> Changes from v1 posted Feb 2nd 2012
> 
> - Patches 1 & 2 Acked-by Tim Deegan on the hypervisor side
> - Added patch 3 to clean up the enable domctl interface, based on
>   discussion with Ian Campbell
> 
> Description from original post follows:
> 
> Per-page operations in the paging, sharing, and access-tracking
> subsystems are all implemented as domctls (e.g. a domctl to evict one
> page, or to share one page).
> 
> Under heavy load, the domctl path does not scale: the global domctl
> lock serializes dom0's vcpus in the hypervisor, so when performing
> thousands of per-page operations on dozens of domains, these vcpus
> spin in the hypervisor waiting for the lock. Beyond the coarse
> locking itself, vcpus blocked on the domctl lock also prevent dom0
> from rescheduling its other, work-starved processes.
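> 
> As a toy illustration of the bottleneck (a user-space model, not Xen
> code: pthreads stand in for dom0's vcpus and a plain mutex for the
> domctl lock), no matter how many threads issue per-page ops, they
> make progress one at a time:
> 
>     /* Toy model of domctl serialization; illustrative only. */
>     #include <pthread.h>
>     #include <stdio.h>
> 
>     #define VCPUS        8
>     #define OPS_PER_VCPU 100000
> 
>     static pthread_mutex_t domctl_lock = PTHREAD_MUTEX_INITIALIZER;
>     static unsigned long   pages_evicted;
> 
>     static void *vcpu_worker(void *arg)
>     {
>         (void)arg;
>         for (int i = 0; i < OPS_PER_VCPU; i++) {
>             /* Each per-page op is one globally serialized "domctl". */
>             pthread_mutex_lock(&domctl_lock);
>             pages_evicted++;        /* stand-in for evicting one page */
>             pthread_mutex_unlock(&domctl_lock);
>         }
>         return NULL;
>     }
> 
>     int main(void)
>     {
>         pthread_t vcpus[VCPUS];
> 
>         for (int i = 0; i < VCPUS; i++)
>             pthread_create(&vcpus[i], NULL, vcpu_worker, NULL);
>         for (int i = 0; i < VCPUS; i++)
>             pthread_join(vcpus[i], NULL);
> 
>         printf("%lu per-page ops, all behind one lock\n", pages_evicted);
>         return 0;
>     }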
> 
> We retain the domctl interface for setting up and tearing down
> paging/sharing/mem access for a domain, but we migrate all the
> per-page operations to the memory_op hypercalls (e.g. XENMEM_*).
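> 
> A rough sketch of the resulting call shape (the struct and names
> below are illustrative stand-ins, not the series' verbatim headers;
> the authoritative layout is what xen/include/public/memory.h defines
> once patches 1-2 are applied):
> 
>     #include <stdint.h>
> 
>     typedef uint16_t domid_t;
> 
>     /* Hypothetical per-page memop argument (one request per page). */
>     struct xen_mem_paging_op_sketch {
>         uint8_t  op;       /* sub-op: nominate / evict / prep a page */
>         domid_t  domain;   /* target domain */
>         uint64_t gfn;      /* guest frame number the op applies to */
>     };
> 
>     /* Old path: one globally serialized domctl per page.
>      * New path: one memory_op per page, off the domctl lock.
>      * 'memory_op' stands in for the real hypercall wrapper and
>      * 'paging_cmd' for the new XENMEM_* command number. */
>     static long evict_one_page(long (*memory_op)(unsigned int, void *),
>                                unsigned int paging_cmd,
>                                domid_t domid, uint64_t gfn)
>     {
>         struct xen_mem_paging_op_sketch mop = {
>             .op     = 1,   /* illustrative "evict" sub-op */
>             .domain = domid,
>             .gfn    = gfn,
>         };
>         return memory_op(paging_cmd, &mop);
>     }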
> 
> This is a backwards-incompatible ABI change. It's been floating on
> the list for a couple of weeks now, with no nacks thus far.
> 
> Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla>
> Signed-off-by: Adin Scannell <adin@xxxxxxxxxxx>

Applied 1 and 2; thanks.

I'll leave patch 3 for others to comment on: I know there are
out-of-tree users of the mem-access interface, and changing the
hypercalls is less disruptive than changing the libxc interface.

Tim.
