
Re: [Xen-devel] [PATCH] x86/mm: Improve ring management for memory events. Do not lose guest events



On Wed, Jan 11, Andres Lagar-Cavilla wrote:

>  xen/arch/x86/hvm/hvm.c          |   18 +-
>  xen/arch/x86/mm/mem_event.c     |  298 
> +++++++++++++++++++++++++++++++++------
>  xen/arch/x86/mm/mem_sharing.c   |   30 +--
>  xen/arch/x86/mm/p2m.c           |   81 +++++-----
>  xen/common/memory.c             |    7 +-
>  xen/include/asm-x86/mem_event.h |   22 +-
>  xen/include/asm-x86/p2m.h       |   12 +-
>  xen/include/xen/mm.h            |    2 +
>  xen/include/xen/sched.h         |   22 ++-
>  9 files changed, 359 insertions(+), 133 deletions(-)
> 
> 
> This patch is an amalgamation of the work done by Olaf Hering <olaf@xxxxxxxxx>
> and our work.
> 
> It combines logic changes that simplify the memory event API with the
> use of wait queues to deal with extreme conditions in which a guest
> vcpu generates too many events.

I'm ok with the approach, and it does not appear to conflict with my
attempt to use waitqueues in get_gfn_type_access(). If the ring is
full, the vcpu is put on a wait queue.
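For reference, the pattern under discussion is roughly the following.
This is a minimal sketch, not the code from the patch: the
mem_event_ring structure and the helpers ring_init(), try_claim_slot(),
post_event() and ring_drained() are invented for illustration; only the
Xen wait queue primitives (init_waitqueue_head(), wait_event(),
wake_up_all()) and the spinlock calls are existing interfaces.

/*
 * Sketch of "park the vcpu when the ring is full" -- hypothetical
 * structure and helpers, real Xen wait queue / spinlock primitives.
 */
#include <xen/spinlock.h>
#include <xen/wait.h>

struct mem_event_ring {
    spinlock_t lock;
    unsigned int free_slots;       /* slots left in the shared ring */
    struct waitqueue_head wq;      /* vcpus blocked on a full ring */
};

static void ring_init(struct mem_event_ring *r, unsigned int slots)
{
    spin_lock_init(&r->lock);
    r->free_slots = slots;
    init_waitqueue_head(&r->wq);
}

/* Atomically claim one slot; returns non-zero on success. */
static int try_claim_slot(struct mem_event_ring *r)
{
    int claimed = 0;

    spin_lock(&r->lock);
    if ( r->free_slots > 0 )
    {
        r->free_slots--;
        claimed = 1;
    }
    spin_unlock(&r->lock);

    return claimed;
}

/* Producer side: a guest vcpu wants to post an event. */
static void post_event(struct mem_event_ring *r)
{
    /*
     * If no slot is free, the vcpu sleeps on the wait queue; the
     * condition is re-evaluated on every wakeup, so the event is
     * never dropped.
     */
    wait_event(r->wq, try_claim_slot(r));

    /* ... copy the request into the claimed slot and notify ... */
}

/* Consumer side: the handler has drained 'n' responses from the ring. */
static void ring_drained(struct mem_event_ring *r, unsigned int n)
{
    spin_lock(&r->lock);
    r->free_slots += n;
    spin_unlock(&r->lock);

    /* Wake any vcpus parked waiting for space. */
    wake_up_all(&r->wq);
}

Re-checking the claim inside the wait_event() condition avoids the race
where another vcpu takes the slot between wakeup and use.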

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

