Re: [Xen-devel] [PATCH 1 of 4] Improve ring management for memory events
> On Thu, Nov 24, Andres Lagar-Cavilla wrote:
>
>> > On Wed, Nov 23, Andres Lagar-Cavilla wrote:
>> >
>> >> Well, we can tone down printks to be debug level. I don't think
>> >> they're unnecessary if they're made an optional debug tool.
>> >
>> > There is nothing to debug here, since the callers have to retry anyway.
>> >
>> >> Question: I have one vcpu, how do I fill up the ring quickly?
>> >> (outside of foreign mappings)
>> >
>> > Have a balloon driver in the guest and balloon down more than
>> > 64*PAGE_SIZE. This is the default, at least in my setup, where the
>> > kernel driver releases some memory right away (I haven't checked
>> > where this is actually configured).
>>
>> I see: a guest can call decrease_reservation with an extent_order large
>> enough that it will overflow the ring, no matter the size of the ring.
>> Isn't preemption of this hypercall a better tactic than putting the vcpu
>> on a wait-queue? This won't preclude the need for wait queues, but it
>> feels like a much cleaner solution.
>
> Yes, yesterday I was thinking about this as well.
> p2m_mem_paging_drop_page() should return -EBUSY. But currently not all
> callers of guest_remove_page() look at the exit code. Perhaps that can
> be fixed.
>
>> With retrying of foreign mappings in xc_map_foreign_bulk (and grants), I
>> wonder if we should put events in the ring due to foreign mappings *at
>> all* in the case of congestion. Eventually a retry will get to kick the
>> pager.
>
> What do you mean by that?

Thinking out loud here. In our ring management patch, we put a guest vcpu
to sleep if space in the ring < d->max_vcpus.

In the case of a foreign-mapping vcpu, we still allow the vcpu to put an
event in the ring as long as there is any space. This can eventually fill
up the ring and prevent guest vcpus from placing events.

However, we could prevent this latter behaviour by refusing the foreign
event whenever placing it would leave

    space_in_the_ring < (d->max_vcpus - ring->blocked)

This ensures no event caused by a guest vcpu will ever be lost (we still
need the preemption of decrease_reservation, though).

Correctness is preserved for the foreign-mapping vcpu: it will retry its
mapping and eventually there will be space in the ring.

With this, we won't need wait-queues for ring management.

Makes sense?
Andres

>
> Olaf
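
For the decrease_reservation preemption discussed above, a minimal sketch
of how the -EBUSY plumbing could look. The helpers
mem_event_ring_free_slots() and mem_event_put_drop_request() are
hypothetical, and the extent-loop fragment only loosely follows
xen/common/memory.c; this is a sketch of the proposal, not the actual
implementation:

    /*
     * Hypothetical sketch only.  p2m_mem_paging_drop_page() reports a
     * full ring with -EBUSY instead of blocking the vcpu, and the
     * ballooning loop turns that into a preempted hypercall so the guest
     * retries decrease_reservation() from the same extent later.
     */
    int p2m_mem_paging_drop_page(struct domain *d, unsigned long gfn)
    {
        if ( mem_event_ring_free_slots(d) == 0 )   /* hypothetical helper */
            return -EBUSY;                         /* caller must retry   */

        mem_event_put_drop_request(d, gfn);        /* hypothetical helper */
        return 0;
    }

    /* In the decrease_reservation() extent loop, roughly: */
    rc = guest_remove_page(d, gmfn + j);
    if ( rc == -EBUSY )
    {
        args->preempted = 1;   /* re-issue the hypercall from this extent */
        goto out;
    }

The point being that the guest vcpu never sleeps in the hypervisor waiting
for ring space; it makes forward progress one ring-sized batch at a time.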
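
And, to make the proposed admission rule concrete, a small self-contained
model of the two space checks. Again, the names and struct layout are
hypothetical rather than the actual mem_event ring code, and it assumes
ring->blocked counts guest vcpus currently asleep on the ring, so
blocked <= max_vcpus:

    #include <stdbool.h>

    struct mem_event_ring {
        unsigned int size;      /* total request slots                    */
        unsigned int used;      /* slots currently holding requests       */
        unsigned int blocked;   /* guest vcpus already asleep on the ring */
    };

    static unsigned int ring_space(const struct mem_event_ring *r)
    {
        return r->size - r->used;
    }

    /* Guest vcpu: go to sleep (or preempt) if space < d->max_vcpus. */
    static bool guest_vcpu_must_wait(const struct mem_event_ring *r,
                                     unsigned int max_vcpus)
    {
        return ring_space(r) < max_vcpus;
    }

    /*
     * Foreign-mapping vcpu: only take a slot if doing so still leaves a
     * free slot for every guest vcpu not already blocked, i.e.
     * space_in_the_ring - 1 >= (max_vcpus - ring->blocked).  Otherwise
     * back off and rely on xc_map_foreign_bulk()'s retry to kick the
     * pager.
     */
    static bool foreign_event_allowed(const struct mem_event_ring *r,
                                      unsigned int max_vcpus)
    {
        unsigned int reserved = max_vcpus - r->blocked;

        return ring_space(r) > reserved;
    }

With the reservation expressed this way, every non-blocked guest vcpu
always finds a slot when it needs one, so its event is never dropped,
while the foreign mapper's worst case is an extra retry.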