
RE: [PATCH 1/2] x86/mem_sharing: copy cpuid during vm forking


  • To: "Cooper, Andrew" <andrew.cooper3@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: "Lengyel, Tamas" <tamas.lengyel@xxxxxxxxx>
  • Date: Tue, 5 Jan 2021 15:50:39 +0000
  • Cc: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Tue, 05 Jan 2021 15:50:56 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH 1/2] x86/mem_sharing: copy cpuid during vm forking


> -----Original Message-----
> From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> Sent: Tuesday, January 5, 2021 6:05 AM
> To: Lengyel, Tamas <tamas.lengyel@xxxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>; George Dunlap <george.dunlap@xxxxxxxxxx>; Roger Pau Monné <roger.pau@xxxxxxxxxx>; Wei Liu <wl@xxxxxxx>
> Subject: Re: [PATCH 1/2] x86/mem_sharing: copy cpuid during vm forking
> 
> On 04/01/2021 17:41, Tamas K Lengyel wrote:
> > Signed-off-by: Tamas K Lengyel <tamas.lengyel@xxxxxxxxx>
> > ---
> >  xen/arch/x86/mm/mem_sharing.c | 1 +
> >  1 file changed, 1 insertion(+)
> >
> > diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
> > index c428fd16ce..4a02c7d6f2 100644
> > --- a/xen/arch/x86/mm/mem_sharing.c
> > +++ b/xen/arch/x86/mm/mem_sharing.c
> > @@ -1764,6 +1764,7 @@ static int fork(struct domain *cd, struct domain *d)
> >
> >          domain_pause(d);
> >          cd->max_pages = d->max_pages;
> > +        memcpy(cd->arch.cpuid, d->arch.cpuid, sizeof(*d->arch.cpuid));
> >          cd->parent = d;
> >      }
> >
> 
> You need to extend this to d->arch.msr and v->arch.msrs or someone is
> going to have a very subtle bug to debug in the future.

I need more information on why v->arch.msrs would need to be copied manually. If 
it's saved/reloaded by hvm_save/hvm_load, then we are already covered. If not, 
why would we need to do that for forks but not for domain save/restore?

Tamas
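
For reference, a minimal sketch of the extension under discussion, in the style
of the hunk above. It assumes, as in the Xen tree of this period, that
d->arch.cpuid and d->arch.msr point at plain policy structs that can be copied
wholesale, and that per-vCPU MSR state lives behind v->arch.msrs; cd_vcpu and
d_vcpu are hypothetical names for the matching vCPUs of the fork and the
parent, not identifiers taken from the patch:

    /* Inside fork(), with the parent paused: copy the domain-wide
     * CPUID and MSR policies from the parent d into the fork cd. */
    memcpy(cd->arch.cpuid, d->arch.cpuid, sizeof(*d->arch.cpuid));
    memcpy(cd->arch.msr, d->arch.msr, sizeof(*d->arch.msr));

    /* Per-vCPU MSR state: copied for each vCPU pair while walking
     * the fork's vCPUs (cd_vcpu) alongside the parent's (d_vcpu). */
    *cd_vcpu->arch.msrs = *d_vcpu->arch.msrs;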

 

