Re: [Xen-devel] [PATCH 2 of 5] Improve ring management for memory events. Do not lose guest events
> At 16:55 -0500 on 29 Nov (1322585711), Andres Lagar-Cavilla wrote:
>> The memevent code currently has a mechanism for reserving space in the
>> ring before putting an event, but each caller must individually ensure
>> that the vCPUs are correctly paused if no space is available.
>>
>> This fixes that issue by reversing the semantics: we ensure that enough
>> space is always left for one event per vCPU in the ring. If, after
>> putting the current request, this constraint would be violated by the
>> current vCPU when putting another request in the ring, we pause the
>> vCPU.
>
> What about operations that touch more than one page of guest memory?
> (E.g., pagetable walks, emulated faults and task switches). Can't they
> still fill up the ring?

Those only generate events on paging, which would go to sleep on the
first fault with a wait queue.

There is one case in which the guest vCPU can generate unbounded events,
and that is balloon down -> decrease_reservation -> paging_drop events.
I handle that with preemption of the decrease_reservation hypercall.

> IIRC there are still cases where we need wait-queues anyway (when we
> hit a paged-out page after a non-idempotent action has already been
> taken). Is the purpose of this change just to reduce the number of
> wait-queue uses, or do you think you can do without them entirely?

Certainly, there's no way around wait queues for, say, hvm_copy with a
paged-out page.

Andres

> Cheers,
>
> Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel