
Re: [PATCH 1/2] x86: Perform mem_sharing teardown before paging teardown


  • To: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Thu, 16 Feb 2023 16:24:05 +0100
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 16 Feb 2023 15:24:23 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 16.02.2023 16:10, Tamas K Lengyel wrote:
> On Thu, Feb 16, 2023 at 9:27 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>>
>> On 16.02.2023 13:22, Tamas K Lengyel wrote:
>>> On Thu, Feb 16, 2023 at 3:31 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>>>>
>>>> On 15.02.2023 17:29, Tamas K Lengyel wrote:
>>>>> On Wed, Feb 15, 2023 at 5:27 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>>>>>> Did you consider the alternative of adjusting the ASSERT() in
>>>>>> question instead? It could reasonably become
>>>>>>
>>>>>> #ifdef CONFIG_MEM_SHARING
>>>>>>     ASSERT(!p2m_is_hostp2m(p2m) || !remove_root ||
>>>>>>            !atomic_read(&d->shr_pages));
>>>>>> #endif
>>>>>>
>>>>>> now, I think. That would be less intrusive a change (helpful for
>>>>>> backporting), but there may be other (so far unnamed) benefits of
>>>>>> pulling ahead the shared memory teardown.
>>>>>
>>>>> I have a hard time understanding this proposed ASSERT.
>>>>
>>>> It accounts for the various ways p2m_teardown() can (now) be called,
>>>> limiting the actual check for no remaining shared pages to the last
>>>> of all these invocations (on the host p2m with remove_root=true).
>>>>
>>>> Maybe
>>>>
>>>>     /* Limit the check to the final invocation. */
>>>>     if ( p2m_is_hostp2m(p2m) && remove_root )
>>>>         ASSERT(!atomic_read(&d->shr_pages));
>>>>
>>>> would make this easier to follow? Another option might be to move
>>>> the assertion to paging_final_teardown(), ahead of that specific call
>>>> to p2m_teardown().
>>>
>>> AFAICT d->shr_pages would still be more than 0 when this is called
>>> before sharing is torn down, so the rearrangement is necessary even if
>>> we do this assert only on the final invocation. I did a printk in place
>>> of this assert without the rearrangement, and AFAICT it was always != 0.
>>
>> Was your printk() in an if() as above? paging_final_teardown() runs really
>> late during cleanup (when we're about to free struct domain), so I would
>> be somewhat concerned if by that time the count was still non-zero.
> 
> I just replaced the assert with a printk. Without calling relinquish on
> shared pages, I don't find it odd that the count is non-zero; it only gets
> decremented when relinquish is actually called. Once the order is
> corrected, it is zero.

I may be getting you wrong, but it feels like you're still talking about
early invocations of p2m_teardown() (from underneath domain_kill()) when
I'm talking about the final one. No matter what the ordering inside
domain_relinquish_resources() (called from domain_kill()), the freeing
will have happened by the time that process completes. That is before the
(typically last) domain ref is put (near the end of domain_kill()), and
hence before the domain freeing process might be invoked (which is where
paging_final_teardown() gets involved; see {,arch_}domain_destroy()).

Jan



 

