
Re: [Xen-devel] [PATCH 2 of 5] Improve ring management for memory events. Do not lose guest events



On Tue, Nov 29, Andres Lagar-Cavilla wrote:

>  xen/arch/x86/hvm/hvm.c          |   20 ++-
>  xen/arch/x86/mm/mem_event.c     |  205 ++++++++++++++++++++++++++++++---------
>  xen/arch/x86/mm/mem_sharing.c   |   27 +++-
>  xen/arch/x86/mm/p2m.c           |  104 ++++++++++---------
>  xen/common/memory.c             |    7 +-
>  xen/include/asm-x86/mem_event.h |   16 ++-
>  xen/include/asm-x86/p2m.h       |    6 +-
>  xen/include/xen/mm.h            |    2 +
>  xen/include/xen/sched.h         |    5 +-
>  9 files changed, 268 insertions(+), 124 deletions(-)
> 
> 
> The memevent code currently has a mechanism for reserving space in the ring
> before putting an event, but each caller must individually ensure that the
> vCPUs are correctly paused if no space is available.
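
For context, the caller-side pattern the quoted text describes looks roughly
like this. This is a sketch only, with illustrative names:
mem_event_check_ring() and vcpu_pause_nosync() stand in for whatever each
call site actually does today.

    #include <xen/sched.h>
    #include <asm/mem_event.h>

    /*
     * Sketch of the current caller-side pattern: check for ring space
     * first, and if there is none, the caller itself is responsible
     * for pausing the vCPU.  Names are illustrative.
     */
    static void example_send_event(struct domain *d, struct vcpu *v,
                                   mem_event_request_t *req)
    {
        if ( mem_event_check_ring(d) )   /* no space available */
        {
            /* Each call site has to remember this step on its own. */
            vcpu_pause_nosync(v);
            return;
        }

        mem_event_put_request(d, req);
    }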

I have an improved patch that uses wait queues in
mem_event_put_request(), together with the new wake_up_nr(). Pausing
vCPUs here while get_gfn uses wait queues does not mix well, AFAICS.
My wait queue patch for get_gfn is not yet finished.

I propose to use wait queues for both mem_event and get_gfn.
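
Roughly, the producer side I have in mind looks like the sketch below. This
is a sketch only, not the actual patch: mem_event_ring_free() and the
wq/xen_port fields of struct mem_event_domain are illustrative names,
wait_event() is the generic wait queue primitive, and wake_up_nr() is the
new helper mentioned above.

    #include <xen/sched.h>
    #include <xen/wait.h>
    #include <xen/event.h>
    #include <asm/mem_event.h>

    /* Hypothetical helper: number of free request slots in the ring. */
    static unsigned int mem_event_ring_free(struct mem_event_domain *med);

    /*
     * Sketch: the producer blocks on a wait queue until the consumer
     * has made room, so no guest event is lost and no caller has to
     * pause the vCPU by hand.
     */
    void example_put_request(struct domain *d, mem_event_request_t *req)
    {
        struct mem_event_domain *med = &d->mem_event;

        /* Sleep until a request slot is free (wq is a waitqueue_head
         * that would be added to struct mem_event_domain). */
        wait_event(med->wq, mem_event_ring_free(med) > 0);

        /* ... take the ring lock, copy *req in, bump the producer index ... */

        notify_via_xen_event_channel(d, med->xen_port);
    }

    /*
     * Consumer side: after responses have been taken off the ring,
     * wake one blocked producer per slot that became free.
     */
    void example_wake_producers(struct mem_event_domain *med)
    {
        wake_up_nr(&med->wq, mem_event_ring_free(med));
    }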

Olaf



 

