Re: [Xen-devel] PoD code killing domain before it really gets started
On Tue, Aug 7, 2012 at 1:17 PM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>> On 06.08.12 at 18:03, George Dunlap <George.Dunlap@xxxxxxxxxxxxx> wrote:
>> 2. Allocate the PoD cache before populating the p2m table
>
> So this doesn't work, the call simply has no effect (and never
> reaches p2m_pod_set_cache_target()). Apparently because of
>
>     /* P == B: Nothing to do. */
>     if ( p2md->pod.entry_count == 0 )
>         goto out;
>
> in p2m_pod_set_mem_target(). Now I'm not sure about the
> proper adjustment here: entirely dropping the conditional is
> certainly wrong. Would
>
>     if ( p2md->pod.entry_count == 0 && d->tot_pages > 0 )
>         goto out;
>
> be okay?
>
> But then later in that function we also have
>
>     /* B < T': Set the cache size equal to # of outstanding entries,
>      * let the balloon driver fill in the rest. */
>     if ( pod_target > p2md->pod.entry_count )
>         pod_target = p2md->pod.entry_count;
>
> which in the case at hand would set pod_target to 0, and the
> whole operation would again not have any effect afaict. So maybe
> this was the reason to do this operation _after_ the normal
> address space population?

Snap -- forgot about that.

The main thing is for set_mem_target() to be simple for the toolstack: the toolstack just says how much memory it wants the guest to use, and Xen is supposed to figure out how much memory the PoD cache needs. The interface is that the toolstack simply calls set_mem_target() each time it changes the balloon target. The idea was to be robust against the user setting arbitrary new targets before the balloon driver had reached the old target.

So the problem with allowing (pod_target > entry_count) is that that is exactly the condition that occurs when you are ballooning up.

Maybe the best thing to do is to introduce a specific call to initialize the PoD cache that would ignore entry_count?

 -George
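Since the thread is easier to follow with the two quoted checks side by side, here is a small standalone C model of the sizing logic under discussion. This is illustration only, not Xen source: struct pod_model, set_mem_target_model() and the example scenarios are made up for this sketch; only the two conditions and the field names entry_count / tot_pages come from the snippets quoted above.

    /* Hypothetical standalone model of the p2m_pod_set_mem_target() checks
     * quoted in the thread -- not Xen code, just the two conditions in
     * isolation so their interaction is easy to see. */
    #include <stdio.h>

    struct pod_model {
        long entry_count;   /* outstanding PoD entries ("B" in the comments) */
        long tot_pages;     /* pages currently allocated to the domain */
    };

    /* Returns the cache size the call would end up setting, or -1 when the
     * early exit fires and the call has no effect at all. */
    static long set_mem_target_model(const struct pod_model *p, long pod_target,
                                     int proposed_check)
    {
        /* Early exit.  The original form fires whenever entry_count == 0;
         * the adjustment Jan asks about only fires once the domain already
         * owns some pages. */
        if (proposed_check ? (p->entry_count == 0 && p->tot_pages > 0)
                           : (p->entry_count == 0))
            return -1;

        /* B < T': clamp the cache to the number of outstanding entries and
         * let the balloon driver fill in the rest. */
        if (pod_target > p->entry_count)
            pod_target = p->entry_count;

        return pod_target;
    }

    int main(void)
    {
        /* Case from the thread: cache sized before the p2m is populated. */
        struct pod_model pre = { .entry_count = 0, .tot_pages = 0 };
        printf("original check, before populate: %ld\n",
               set_mem_target_model(&pre, 1024, 0));   /* -1: no effect */
        printf("proposed check, before populate: %ld\n",
               set_mem_target_model(&pre, 1024, 1));   /* clamped to 0 */

        /* Ballooning up: target exceeds outstanding entries. */
        struct pod_model up = { .entry_count = 256, .tot_pages = 4096 };
        printf("ballooning up:                   %ld\n",
               set_mem_target_model(&up, 1024, 0));    /* clamped to 256 */
        return 0;
    }

With the original check, the pre-population call exits early; with the proposed check it gets past the exit, but the "B < T'" clamp still reduces the target to the (zero) entry count, which is the "whole operation would again not have any effect" case Jan describes, and the balloon-up case shows why the clamp cannot simply be dropped.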