
Re: [Xen-devel] [EXTERNAL][PATCH v5 6/6] domain: use PGC_extra domheap page for shared_info

On 12.03.2020 10:09, Paul Durrant wrote:
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@xxxxxxxx>
>> Sent: 12 March 2020 08:34
>> To: paul@xxxxxxx
>> Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx; Durrant, Paul <pdurrant@xxxxxxxxxxxx>; 'Stefano Stabellini' <sstabellini@xxxxxxxxxx>; 'Julien Grall' <julien@xxxxxxx>; 'Volodymyr Babchuk' <Volodymyr_Babchuk@xxxxxxxx>; 'Andrew Cooper' <andrew.cooper3@xxxxxxxxxx>; 'George Dunlap' <george.dunlap@xxxxxxxxxx>; 'Ian Jackson' <ian.jackson@xxxxxxxxxxxxx>; 'Konrad Rzeszutek Wilk' <konrad.wilk@xxxxxxxxxx>; 'Wei Liu' <wl@xxxxxxx>
>> Subject: RE: [EXTERNAL][PATCH v5 6/6] domain: use PGC_extra domheap page for shared_info
>> On 11.03.2020 16:28, Paul Durrant wrote:
>>>> -----Original Message-----
>>>> From: Jan Beulich <jbeulich@xxxxxxxx>
>>>> Sent: 11 March 2020 09:17
>>>> To: paul@xxxxxxx
>>>> Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx; 'Paul Durrant' <pdurrant@xxxxxxxxxx>; 'Stefano Stabellini' <sstabellini@xxxxxxxxxx>; 'Julien Grall' <julien@xxxxxxx>; 'Volodymyr Babchuk' <Volodymyr_Babchuk@xxxxxxxx>; 'Andrew Cooper' <andrew.cooper3@xxxxxxxxxx>; 'George Dunlap' <george.dunlap@xxxxxxxxxx>; 'Ian Jackson' <ian.jackson@xxxxxxxxxxxxx>; 'Konrad Rzeszutek Wilk' <konrad.wilk@xxxxxxxxxx>; 'Wei Liu' <wl@xxxxxxx>
>>>> Subject: Re: [PATCH v5 6/6] domain: use PGC_extra domheap page for shared_info
>>>> On 10.03.2020 18:33, Paul Durrant wrote:
>>>>>> -----Original Message-----
>>>>>> From: Jan Beulich <jbeulich@xxxxxxxx>
>>>>>> Sent: 09 March 2020 15:56
>>>>>> On 09.03.2020 11:23, paul@xxxxxxx wrote:
>>>>>>> --- a/xen/common/time.c
>>>>>>> +++ b/xen/common/time.c
>>>>>>> @@ -99,6 +99,18 @@ void update_domain_wallclock_time(struct domain *d)
>>>>>>>      uint32_t *wc_version;
>>>>>>>      uint64_t sec;
>>>>>>> +    if ( d != current->domain )
>>>>>>> +    {
>>>>>>> +        /*
>>>>>>> +         * We need to check is_dying here as, if it is set, the
>>>>>>> +         * shared_info may have been freed. To do this safely we need
>>>>>>> +         * to hold the domain lock.
>>>>>>> +         */
>>>>>>> +        domain_lock(d);
>>>>>>> +        if ( d->is_dying )
>>>>>>> +            goto unlock;
>>>>>>> +    }
>>>>>> This shouldn't happen very often, but it's a pretty heavy lock.
>>>>>> It's a fundamental aspect of xenheap pages that their disposal
>>>>>> can be delayed until almost the last moment of guest cleanup. I
>>>>>> continue to think it's not really a good idea to have special
>>>>>> purpose allocation (and mapping) accompanied by these pages
>>>>>> getting taken care of by the generic relinquish-resources logic
>>>>>> here (from a more general pov such is of course often nice to
>>>>>> have). Instead of freeing these pages there, couldn't they just
>>>>>> be taken off d->page_list, with the unmapping and freeing left
>>>>>> as it was?
>>>>> I don't think this can be achieved without being able to de-assign
>>>>> pages and I don't really want to have to invent new logic to do
>>>>> that (basically re-implementing what happens to xenheap pages).
>>>> Where's the connection to being able to de-assign pages here?
>>>> There'll be one when the same conversion is to be done for
>>>> gnttab code, but I don't see it here - the shared info page is
>>>> never to be de-assigned. As to gnttab code, I think it was
>>>> noted before that we may be better off not "unpopulating"
>>>> status pages when switching back from v2 to v1. At which point
>>>> the de-assignment need would go away there, too.
>>> Ok, maybe I'm misunderstanding something then. We need to call
>>> free_domheap_pages() on all pages assigned to a domain so that
>>> the domain references get dropped. The xenpage ref is dropped
>>> when d->xenheap_pages == 0. The domheap ref is dropped when
>>> domain_adjust_tot_pages() returns zero. (This is what I meant
>>> by de-assigning... but that was probably a poor choice of words).
>>> So, because domain_adjust_tot_pages() returns d->tot_pages
>>> (which includes the extra_pages count) it won't fall to zero
>>> until the last put_page() on any PGC_extra page. So how is it
>>> possible to free shared_info in domain destroy? We'll never get
>>> that far, because the domheap ref will never get dropped.
>> Well, now that these pages sit on a separate list, it would
>> look even less problematic than before to me to also give them
>> special treatment here: You wouldn't even have to take them
>> off the list anymore, but just call domain_adjust_tot_pages()
>> with -d->extra_pages (and suitably deal with the return value;
>> perhaps for consistency then followed by also zeroing
>> d->extra_pages, so overall accounting still looks correct).
>> Actually taking these pages off the list could (for dumping
>> purposes) then be done alongside their actual freeing. Such a
>> transition would apparently imply clearing PGC_extra alongside
>> the new domain_adjust_tot_pages() call.
>> I realize though that the end result won't be much different
>> from the current PGC_xen_heap handling (at least as far as
>> domain cleanup goes, but after all that's what I'm in fact
>> trying to convince you of), so the question would be whether
>> the whole transition then is worth it.
> Yes... with such adjustments the code gets increasingly pointless from a 
> simplification PoV... which is why I coded this patch as it is. I am happy to 
> make the concession of using the extra page list for dumping purposes, and to 
> avoid the need to special-case PGC_extra pages in a few places, but otherwise 
> I want them to be treated as 'normal' domheap pages.
> So, if you're not willing to ack this patch

Well - I'm not the only one in a position to ack this. The
problem is that right now this is just a discussion between
the two of us.

> then I guess I'll have to re-send the series including the
> grant table changes and the removal of the xenpage list.

I'm not sure I see how this would help. My reservations
against the earlier freeing of the extra pages wouldn't go
away.

But let me first get around to looking at v6.


Xen-devel mailing list


