
Re: [PATCH RFC 03/10] domain: GADDR based shared guest area registration alternative - teardown


  • To: Julien Grall <julien@xxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Wed, 14 Dec 2022 10:12:31 +0100
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Wed, 14 Dec 2022 09:12:39 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 13.12.2022 22:44, Julien Grall wrote:
> On 19/10/2022 08:40, Jan Beulich wrote:
>> In preparation of the introduction of new vCPU operations allowing to
>> register the respective areas (one of the two is x86-specific) by
>> guest-physical address, add the necessary domain cleanup hooks.
>>
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>> ---
>> RFC: Zapping the areas in pv_shim_shutdown() may not be strictly
>>       necessary: Aiui unmap_vcpu_info() is called only because the vCPU
>>       info area cannot be re-registered. Beyond that I guess the
>>       assumption is that the areas would only be re-registered as they
>>       were before. If that's not the case I wonder whether the guest
>>       handles for both areas shouldn't also be zapped.
> 
> I don't know the code enough to be able to answer it.

Right; I hope the original shim authors will be able to shed some light
on this.

> The code itself looks good to me. With one remark below:
> 
> Reviewed-by: Julien Grall <jgrall@xxxxxxxxxx>

Thanks.

>> @@ -1555,6 +1559,15 @@ void unmap_vcpu_info(struct vcpu *v)
>>       put_page_and_type(mfn_to_page(mfn));
>>   }
>>   
>> +/*
>> + * This is only intended to be used for domain cleanup (or more generally only
>> + * with at least the respective vCPU, if it's not the current one, reliably
>> + * paused).
>> + */
>> +void unmap_guest_area(struct vcpu *v, struct guest_area *area)
>> +{
> 
> IIUC, you will add the ASSERT() we discussed in patch #7 in this patch. 
> I would be fine if you keep my reviewed-by.

And thanks again. Indeed this is what I have pending for v2:

/*
 * This is only intended to be used for domain cleanup (or more generally only
 * with at least the respective vCPU, if it's not the current one, reliably
 * paused).
 */
void unmap_guest_area(struct vcpu *v, struct guest_area *area)
{
    struct domain *d = v->domain;

    if ( v != current )
        ASSERT(atomic_read(&v->pause_count) | atomic_read(&d->pause_count));
}
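For reference, the invariant that ASSERT encodes can be illustrated in
isolation (stand-in types and a hypothetical helper name, not Xen's real
definitions; a sketch only): when unmap_guest_area() is called for a vCPU
other than the current one, at least one of the vCPU's or the domain's
pause counts must be non-zero.

```c
#include <assert.h>

/* Illustrative stand-ins for Xen's structures; only the pause counts
 * matter for the check being sketched here. */
struct domain { int pause_count; };
struct vcpu  { struct domain *domain; int pause_count; };

/* Mirrors the pending v2 ASSERT: for a foreign vCPU, either that vCPU
 * or its whole domain must be reliably paused. */
static int unmap_precondition_held(const struct vcpu *v,
                                   const struct vcpu *curr)
{
    return v == curr ||
           (v->pause_count | v->domain->pause_count) != 0;
}
```

Either counter being non-zero satisfies the check, which is why a bitwise
OR of the two atomic reads is sufficient in the real ASSERT.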

Jan



 

