
Re: [PATCH v2] x86: Perform mem_sharing teardown before paging teardown


  • To: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Wed, 22 Feb 2023 11:47:54 +0100
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Wed, 22 Feb 2023 10:48:06 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 21.02.2023 16:59, Tamas K Lengyel wrote:
> On Tue, Feb 21, 2023 at 8:54 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>>
>> On 15.02.2023 18:07, Tamas K Lengyel wrote:
>>> An assert failure has been observed in p2m_teardown when performing vm
>>> forking and then destroying the forked VM (p2m-basic.c:173). The assert
>>> checks whether the domain's shared pages counter is 0. According to the
>>> patch that originally added the assert (7bedbbb5c31) the p2m_teardown
>>> should only happen after mem_sharing already relinquished all shared
>>> pages.
>>>
>>> In this patch we flip the order in which relinquish ops are called to
>>> avoid tripping the assert.
>>>
>>> Signed-off-by: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
>>> Fixes: e7aa55c0aab3 ("x86/p2m: free the paging memory pool
>>> preemptively")
>>
>> Especially considering the backporting requirement, I'm disappointed to
>> find that you haven't extended the description to explain why this
>> heavier code churn is to be preferred over an adjustment to the offending
>> assertion. As said in reply to v1 - I'm willing to accept arguments
>> towards this being better, but I need to know those arguments.
> 
> The assert change as you proposed is hard to understand and will thus be
> harder to maintain. Conceptually, it makes sense for sharing to be torn
> down before paging is torn down.

This is (perhaps) the crucial thing to say. Whereas ...

> Considering that's how it has been
> before the unfortunate regression I'm fixing here, I don't think more
> justification is needed.

... I disagree here - that's _not_ how it has been before: paging_teardown()
has always been called first. The regression concerned how much of the
teardown is performed from there vs at the final stage of domain cleanup.
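
To make the two phases concrete, a rough sketch (simplified, hypothetical
names; not the actual Xen call chain) of the split described above:

    /* Rough model of the two cleanup phases; NOT Xen code. */
    #include <stdio.h>

    /* Phase 1: relinquish-time teardown.  Paging teardown has always
     * started here, i.e. "first". */
    static void relinquish_resources(void)
    {
        printf("paging teardown (begun here)\n");
        printf("mem_sharing relinquish\n");
    }

    /* Phase 2: the final stage of domain cleanup.  Before the offending
     * commit, the paging/p2m pool was freed here, long after all shared
     * pages were gone; the commit moved that freeing into phase 1, ahead
     * of the sharing teardown, which is what exposed the assert. */
    static void final_cleanup(void)
    {
        printf("free remaining paging/p2m pool\n");
    }

    int main(void)
    {
        relinquish_resources();
        final_cleanup();
        return 0;
    }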

I'd be fine adding the "Conceptually ..." sentence to the description when
committing, of course provided you agree.

> The code-churn is minimal anyway.

Perspectives here may vary. Plus, as said, since I'll eventually be the one
to backport this, the larger the code change (even if it's pure movement),
the larger the chance of there being a conflict somewhere.

Jan



 

