
Re: [Xen-devel] PoD code killing domain before it really gets started



>>> On 07.08.12 at 17:08, George Dunlap <george.dunlap@xxxxxxxxxxxxx> wrote:
> On 07/08/12 14:29, Jan Beulich wrote:
>>>>> On 07.08.12 at 15:13, George Dunlap <George.Dunlap@xxxxxxxxxxxxx> wrote:
>>> On Tue, Aug 7, 2012 at 1:17 PM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>>>>> On 06.08.12 at 18:03, George Dunlap <George.Dunlap@xxxxxxxxxxxxx> wrote:
>>>>> 2. Allocate the PoD cache before populating the p2m table
>>>> So this doesn't work, the call simply has no effect (and never
>>>> reaches p2m_pod_set_cache_target()). Apparently because
>>>> of
>>>>
>>>>      /* P == B: Nothing to do. */
>>>>      if ( p2md->pod.entry_count == 0 )
>>>>          goto out;
>>>>
>>>> in p2m_pod_set_mem_target(). Now I'm not sure about the
>>>> proper adjustment here: Entirely dropping the conditional is
>>>> certainly wrong. Would
>>>>
>>>>      if ( p2md->pod.entry_count == 0 && d->tot_pages > 0 )
>>>>          goto out;
>>>>
>>>> be okay?
>>>>
>>>> But then later in that function we also have
>>>>
>>>>      /* B < T': Set the cache size equal to # of outstanding entries,
>>>>       * let the balloon driver fill in the rest. */
>>>>      if ( pod_target > p2md->pod.entry_count )
>>>>          pod_target = p2md->pod.entry_count;
>>>>
>>>> which in the case at hand would set pod_target to 0, and the
>>>> whole operation would again not have any effect afaict. So
>>>> maybe this was the reason to do this operation _after_ the
>>>> normal address space population?
>>> Snap -- forgot about that.
>>>
>>> The main thing is for set_mem_target() to be simple for the toolstack
>>> -- it's just supposed to say how much memory it wants the guest to
>>> use, and Xen is supposed to figure out how much memory the PoD cache
>>> needs.  The interface is that the toolstack is just supposed to call
>>> set_mem_target() after each time it changes the balloon target.  The
>>> idea was to be robust against the user setting arbitrary new targets
>>> before the balloon driver had reached the old target.  So the problem
>>> with allowing (pod_target > entry_count) is that that's the condition
>>> that happens when you are ballooning up.
>>>
>>> Maybe the best thing to do is to introduce a specific call to
>>> initialize the PoD cache that would ignore entry_count?
>> Hmm, that would look more like a hack to me.
>>
>> How about doing the initial check as suggested earlier
>>
>>      if ( p2md->pod.entry_count == 0 && d->tot_pages > 0 )
>>          goto out;
>>
>> and the latter check in a similar way
>>
>>      if ( pod_target > p2md->pod.entry_count && d->tot_pages > 0 )
>>          pod_target = p2md->pod.entry_count;
>>
>> (which would still take care of any ballooning activity)? Or are
>> there any other traps to fall into?
> The "d->tot_pages > 0" seems more like a hack to me. :-) What's hackish 
> about having an interface like this?
> * allocate_pod_mem()
> * for() { populate_pod_mem() }
> * [Boot VM]
> * set_pod_target()

Mostly the fact that it's an almost identical interface to what
we already have. After all, the two !alloc checks would go in
exactly the places where I had put the d->tot_pages > 0 checks.
Plus the new interface would likely also need to verify that
d->tot_pages is zero. In the end you're proposing an entirely
new interface for something that can be done with two small
adjustments.
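
To make the effect concrete, here is a small standalone model of
the two adjusted checks (struct pod_state and model_set_mem_target()
are made up purely for illustration -- only the two conditionals
mirror the fragments quoted above; the real logic of course stays
in p2m_pod_set_mem_target()):

#include <stdbool.h>
#include <stdio.h>

/*
 * Standalone model of the discussed checks; everything here is
 * simplified -- field names mirror the quoted fragments, the rest
 * is not the actual Xen code.
 */
struct pod_state {
    unsigned long entry_count;  /* outstanding PoD entries */
    unsigned long tot_pages;    /* pages currently allocated to the domain */
};

/* Returns true if the cache target should be (re)set, writing the
 * possibly clipped value back into *pod_target. */
static bool model_set_mem_target(const struct pod_state *p,
                                 unsigned long *pod_target)
{
    /* P == B: nothing to do -- but only once the domain actually
     * has memory; with tot_pages == 0 we are still doing the
     * initial allocation and must not bail out early. */
    if ( p->entry_count == 0 && p->tot_pages > 0 )
        return false;

    /* B < T': clip the cache to the number of outstanding entries
     * and let the balloon driver fill in the rest -- again only
     * once the domain has memory, so the very first call is not
     * clipped to zero. */
    if ( *pod_target > p->entry_count && p->tot_pages > 0 )
        *pod_target = p->entry_count;

    return true;
}

int main(void)
{
    unsigned long target = 1024;

    /* Freshly created domain: no entries, no pages -> the call goes
     * through unclipped, which is the point of the adjustment. */
    struct pod_state fresh = { .entry_count = 0, .tot_pages = 0 };
    if ( model_set_mem_target(&fresh, &target) )
        printf("initial allocation: cache target %lu\n", target);

    /* Ballooning up on a running domain: a target above entry_count
     * still gets clipped, exactly as before. */
    struct pod_state running = { .entry_count = 256, .tot_pages = 4096 };
    target = 1024;
    if ( model_set_mem_target(&running, &target) )
        printf("running domain: cache target %lu\n", target);

    return 0;
}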

> Right now set_mem_target() is used both for initial allocation and for 
> adjustments.  But it seems like there's good reason to make a distinction.

Yes. But that distinction can well be implicit.
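
For what it's worth, that implicit distinction would look roughly
like this from the toolstack side (a sketch only; set_mem_target()
below is a dummy stand-in for whatever wrapper ends up invoking
the target-setting hypercall, not a real libxc function):

#include <stdint.h>
#include <stdio.h>

/* Dummy stand-in so the sequence below compiles and runs; it only
 * logs what a real wrapper would ask Xen to do. */
static int set_mem_target(uint32_t domid, uint64_t target_pages)
{
    printf("dom%u: target set to %llu pages\n", (unsigned)domid,
           (unsigned long long)target_pages);
    return 0;
}

static int build_and_run(uint32_t domid, uint64_t target_pages)
{
    int rc;

    /* Domain freshly created, tot_pages == 0: with the adjusted
     * checks this very call does the initial cache allocation
     * instead of being a no-op. */
    rc = set_mem_target(domid, target_pages);
    if ( rc )
        return rc;

    /* ... populate the p2m with PoD entries, boot the VM ... */

    /* Whenever the user changes the balloon target, the toolstack
     * simply repeats the call; once tot_pages > 0 the existing
     * ballooning behaviour is unchanged. */
    return set_mem_target(domid, target_pages);
}

int main(void)
{
    return build_and_run(1, 1ULL << 18);  /* 1GiB worth of 4k pages */
}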

In any case, as far as testing this out goes, my approach would
be far easier (even if it looks like a hack to you). Is there
anything wrong with it (i.e. is my assumption about where the
!alloc checks would have to go wrong)? If not, I'd prefer to have
the simpler thing tested first, and then we can still settle on
the more involved change you would like to see.

Jan

