
Re: [PATCH v3 4/8] x86/mem-sharing: copy GADDR based shared guest areas


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Wed, 27 Sep 2023 14:06:47 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
  • Delivery-date: Wed, 27 Sep 2023 12:07:01 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 27.09.2023 13:08, Roger Pau Monné wrote:
> On Wed, May 03, 2023 at 05:56:46PM +0200, Jan Beulich wrote:
>> In preparation of the introduction of new vCPU operations allowing to
>> register the respective areas (one of the two is x86-specific) by
>> guest-physical address, add the necessary fork handling (with the
>> backing function yet to be filled in).
>>
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> 
> Given the very limited and specific usage of the current Xen forking
> code, do we really need to bother about copying such areas?
> 
> IOW: I doubt that not updating the runstate/time areas will make any
> difference to the forking code ATM.

I expect Tamas's reply has sufficiently addressed this question.

>> --- a/xen/arch/x86/mm/mem_sharing.c
>> +++ b/xen/arch/x86/mm/mem_sharing.c
>> @@ -1641,6 +1641,68 @@ static void copy_vcpu_nonreg_state(struc
>>      hvm_set_nonreg_state(cd_vcpu, &nrs);
>>  }
>>  
>> +static int copy_guest_area(struct guest_area *cd_area,
>> +                           const struct guest_area *d_area,
>> +                           struct vcpu *cd_vcpu,
>> +                           const struct domain *d)
>> +{
>> +    mfn_t d_mfn, cd_mfn;
>> +
>> +    if ( !d_area->pg )
>> +        return 0;
>> +
>> +    d_mfn = page_to_mfn(d_area->pg);
>> +
>> +    /* Allocate & map a page for the area if it hasn't been already. */
>> +    if ( !cd_area->pg )
>> +    {
>> +        gfn_t gfn = mfn_to_gfn(d, d_mfn);
>> +        struct p2m_domain *p2m = p2m_get_hostp2m(cd_vcpu->domain);
>> +        p2m_type_t p2mt;
>> +        p2m_access_t p2ma;
>> +        unsigned int offset;
>> +        int ret;
>> +
>> +        cd_mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL, NULL);
>> +        if ( mfn_eq(cd_mfn, INVALID_MFN) )
>> +        {
>> +            struct page_info *pg = alloc_domheap_page(cd_vcpu->domain, 0);
>> +
>> +            if ( !pg )
>> +                return -ENOMEM;
>> +
>> +            cd_mfn = page_to_mfn(pg);
>> +            set_gpfn_from_mfn(mfn_x(cd_mfn), gfn_x(gfn));
>> +
>> +            ret = p2m->set_entry(p2m, gfn, cd_mfn, PAGE_ORDER_4K, p2m_ram_rw,
>> +                                 p2m->default_access, -1);
>> +            if ( ret )
>> +                return ret;
>> +        }
>> +        else if ( p2mt != p2m_ram_rw )
>> +            return -EBUSY;
> 
> Shouldn't the populate of the underlying gfn in the fork case
> be done by map_guest_area itself?

I based my code on the existing logic for the info area. As noted in
the patch switching the info area logic to use the generalized
infrastructure, I've taken the liberty to address an issue in the
original logic. But it was never a goal to re-write things from
scratch. If, as Tamas appears to agree, there is a better way of
layering things here, then please propose it as a follow-on patch.
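Purely to illustrate the layering you're suggesting (and explicitly
not something I mean to do in this series), the "populate the gfn if
it isn't there yet" step could be split out of copy_guest_area() into
a helper which map_guest_area() might then invoke itself. A rough
sketch only, re-using the calls from the hunk above; the helper name
is made up:

    static int ensure_gfn_populated(struct domain *d, gfn_t gfn)
    {
        struct p2m_domain *p2m = p2m_get_hostp2m(d);
        p2m_type_t p2mt;
        p2m_access_t p2ma;
        mfn_t mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL, NULL);

        if ( mfn_eq(mfn, INVALID_MFN) )
        {
            struct page_info *pg = alloc_domheap_page(d, 0);

            if ( !pg )
                return -ENOMEM;

            mfn = page_to_mfn(pg);
            set_gpfn_from_mfn(mfn_x(mfn), gfn_x(gfn));

            /* Insert the fresh page as ordinary RAM, as done above. */
            return p2m->set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2m_ram_rw,
                                  p2m->default_access, -1);
        }

        /* Anything other than plain RAM would need special casing. */
        return p2mt == p2m_ram_rw ? 0 : -EBUSY;
    }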

> What if a forked guest attempts to register a new runstate/time info
> against an address not yet populated?

What if the same happened for the info area? Again, I'm not trying to
invent anything new here. But hopefully Tamas's reply has helped here
as well.

>> --- a/xen/common/domain.c
>> +++ b/xen/common/domain.c
>> @@ -1572,6 +1572,13 @@ void unmap_vcpu_info(struct vcpu *v)
>>      put_page_and_type(mfn_to_page(mfn));
>>  }
>>  
>> +int map_guest_area(struct vcpu *v, paddr_t gaddr, unsigned int size,
>> +                   struct guest_area *area,
>> +                   void (*populate)(void *dst, struct vcpu *v))
> 
> Oh, the prototype for this is added in patch 1, almost missed it.
> 
> Why not also add this dummy implementation in patch 1 then?

The prototype isn't dead code, whereas the function body would be
until it gains its users here. If anything, I'd move the introduction
of the prototype here; it merely seemed desirable to me to touch
xen/include/xen/domain.h no more frequently than necessary.
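For reference, the prototype in question, matching the definition
quoted above, is merely

    int map_guest_area(struct vcpu *v, paddr_t gaddr, unsigned int size,
                       struct guest_area *area,
                       void (*populate)(void *dst, struct vcpu *v));

i.e. a declaration on its own can't be dead code, while a body without
callers at that point in the series could be.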

Also, to be quite frank, I find this kind of nitpicking odd after the
series has been pending for almost a year.

Jan
