
RE: [PATCH v2 13/40] xen/mpu: introduce unified function setup_early_uart to map early UART


  • To: Julien Grall <julien@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Penny Zheng <Penny.Zheng@xxxxxxx>
  • Date: Thu, 2 Feb 2023 08:05:06 +0000
  • Accept-language: en-US
  • Cc: Wei Chen <Wei.Chen@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Bertrand Marquis <Bertrand.Marquis@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Delivery-date: Thu, 02 Feb 2023 08:05:42 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v2 13/40] xen/mpu: introduce unified function setup_early_uart to map early UART

> -----Original Message-----
> From: Julien Grall <julien@xxxxxxx>
> Sent: Thursday, February 2, 2023 3:27 AM
> To: Penny Zheng <Penny.Zheng@xxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: Wei Chen <Wei.Chen@xxxxxxx>; Stefano Stabellini
> <sstabellini@xxxxxxxxxx>; Bertrand Marquis <Bertrand.Marquis@xxxxxxx>;
> Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
> Subject: Re: [PATCH v2 13/40] xen/mpu: introduce unified function
> setup_early_uart to map early UART
> 
> 
> 
> On 01/02/2023 05:36, Penny Zheng wrote:
> > Hi Julien
> 
> Hi Penny,
>

Hi Julien,
 
> >
[...]
> >>> xen_mpumap[3] : Xen read-write data
> >>> xen_mpumap[4] : Xen BSS
> >>> xen_mpumap[5] : Xen static heap
> >>> xen_mpumap[6] : Guest memory section
> >>
> >> Why do you need to map the "Guest memory section" for the idle vCPU?
> >>
> >
> > Hmmm, "Guest memory section" here refers to *ALL* guest RAM address
> > range with only EL2 read/write access.
> 
> For what purpose? Earlier, you said you had a setup with a limited number
> of MPU entries. So it may not be possible to map all the guests RAM.
> 

The "Guest memory section" I referred to here is the memory section defined
in my newly introduced device tree property, "mpu,guest-memory-section =
<...>". It will include *ALL* guest memory.
Let me give an example to illustrate why I introduced it and how it shall
work:
On an MPU system, all guest RAM *MUST* be statically configured through
"xen,static-mem" under the domain node.
We found that, with more and more guests coming in, the scattered
"xen,static-mem" ranges may exhaust MPU regions very quickly. TBH, at that
time, I hadn't figured out that I could map/unmap on demand.
On an MMU system we never encounter this issue: setup_directmap_mappings maps
the whole system RAM into the directmap area for Xen to access in EL2.
So instead, "mpu,guest-memory-section" is introduced to limit the scattering
and map in advance; we require users to ensure that all guest RAM (described
through "xen,static-mem") is included within "mpu,guest-memory-section".
e.g.
mpu,guest-memory-section = <0x0 0x20000000 0x0 0x40000000>;
DomU1:
xen,static-mem = <0x0 0x40000000 0x0 0x20000000>;
DomU2:
xen,static-mem = <0x0 0x20000000 0x0 0x20000000>;

> Xen should only need to access the guest memory in hypercalls and
> scrubbing. In both cases you could map/unmap on demand.
> 

Thanks for the explanation.
In my understanding, during boot-up there are two spots where Xen may touch
guest memory: one is the synchronous scrubbing done in
unprepare_staticmem_pages; the other is copying the kernel image into guest
memory.
In both cases, we could map/unmap on demand.
So if you think map/unmap on demand is better than
"mpu,guest-memory-section", I'll fix it in the next series.
 
> >
> > For guest vcpu, this section will be replaced by the guest's own RAM
> > with both EL1/EL2 access.
> >
> >
> >>> xen_mpumap[7] : Device memory section
> >>
> >> I might be missing some context here. But why this section is not
> >> also mapped in the context of the guest vCPU?
> >>
> >> For instance, how would you write to the serial console when the
> >> context is the guest vCPU?
> >>
> >
> > I think, as Xen itself, it shall have access to all system device memory
> > on EL2.
> > I know it is not accurate in the current MMU implementation; only devices
> > with a supported driver will get ioremap.
> 
> So in the MMU case, we are not mapping all the devices in Xen because we
> don't exactly know which memory attributes will be used by the guest.
> 
> If we are using different attributes, then we are risking to break coherency.
> Could the same issue happen with the MPU?
> 
> If so, then you should not map those regions in Xen.
> 
> >
> > But like we discussed before, if following the same strategy as the MMU
> > does, with limited MPU regions, we could not afford mapping a MPU
> > region for each device.
> > For example, on the FVPv8R model, we have four UARTs and a GICv3. At
> > most, we may provide four MPU regions for the UARTs, and two MPU
> > regions for the Distributor and one Redistributor region.
> > So, I thought up this new device tree property,
> > "mpu,device-memory-section = <0x0 0x80000000 0x0 0x7ffff000>;", to
> > roughly map all system device memory for Xen itself.
> 
> Why do you say "roughly"? Is it possible that you have non-device region in
> the range?
> 
> >
> > For guest, it shall only see vgic, vpl011, and its own passthrough
> > device. And here, to maintain safety and isolation, we will be mapping
> > a MPU region for each device for guest vcpu.
> > For example, vgic and vpl011 are emulated and direct-mapped in MPU.
> > Relevant device
> 
> I am confused. If the vGIC/vPL011 is emulated then why do you need to
> map it in the MPU? IOW, wouldn't you receive a fault in the hypervisor if
> the guest is trying to access a region not present in the MPU?
> 
> > mapping (GFN == MFN with only EL2 access) will be added to its *P2M
> > mapping table*, in vgic_v3_domain_init [1].
> >
> > Later, on vcpu context switching, when switching from the idle vcpu, the
> > device memory section gets disabled and switched out in ctxt_switch_from
> > [2]; later, when switching into a guest vcpu, the vgic and vpl011 device
> > mappings will be switched in along with the whole P2M mapping table [3].
> >
> > Words might be ambiguous, but all the related code implementation is in
> > MPU patch series part II - guest initialization; you may have to check
> > the gitlab links:
> > [1] https://gitlab.com/xen-project/people/weic/xen/-/commit/a51d5b25eb17a50a36b27987a2f48e14793ac585
> > [2] https://gitlab.com/xen-project/people/weic/xen/-/commit/c6a069d777d9407aeda42b7e5b08a086a1c15976
> > [3] https://gitlab.com/xen-project/people/weic/xen/-/commit/d8c6408b6eef1190d75c9bd4e58557d34fc8b4df
> 
> I have looked at the code and this doesn't entirely answer my question.
> So let me provide an example.
> 
> Xen can print to the serial console at any time. So Xen should be able to
> access the physical UART even when it has context switched to the guest
> vCPU.
> 

I understand your concern about the "device memory section", given your
example here. True, the current implementation is buggy.

Yes, if vpl011 is not enabled in the guest and we instead pass a UART through
to the guest, in the current design Xen is not able to access the physical
UART while in guest mode.

All guests on MPU are direct-mapped, so, as you said, the mapping for vpl011
in guest mode is the same as that of the physical UART. This hides the
problem by letting Xen access the physical UART anyway.

I'll drop the "device memory section" design and let each device driver map
its region on demand at boot time.
  
> But above you said that the physical device would not be accessible and
> instead you map the virtual UART. So how is Xen supposed to access the
> physical UART?
> 
> Or by vpl011 did you actually mean the physical UART? If so, then if you
> map the device one by one in the MPU context, then it would likely mean
> to have space to map them one by one in the idle context.
> 
> > xen_mpumap[0] : Xen text
> > xen_mpumap[1] : Xen read-only data
> > xen_mpumap[2] : Xen read-only after init data
> > xen_mpumap[3] : Xen read-write data
> > xen_mpumap[4] : Xen BSS
> > ( Fixed MPU regions defined in assembly )
> > --------------------------------------------------------------------------
> > xen_mpumap[5]: Xen init data
> > xen_mpumap[6]: Xen init text
> > xen_mpumap[7]: Early FDT
> > xen_mpumap[8]: Guest memory section
> > xen_mpumap[9]: Device memory section
> > xen_mpumap[10]: Static shared memory section
> > ( Boot-only and switching regions defined in C )
> > --------------------------------------------------------------------------
> > ...
> > xen_mpumap[max_xen_mpumap - 1] : Xen static heap
> > ( Fixed MPU region defined in C )
> > --------------------------------------------------------------------------
> >
> > After re-org:
> > xen_mpumap[0] : Xen text
> > xen_mpumap[1] : Xen read-only data
> > xen_mpumap[2] : Xen read-only after init data
> > xen_mpumap[3] : Xen read-write data
> > xen_mpumap[4] : Xen BSS
> > ( Fixed MPU regions defined in assembly )
> > --------------------------------------------------------------------------
> > xen_mpumap[8]: Guest memory section
> > xen_mpumap[9]: Device memory section
> > xen_mpumap[10]: Static shared memory section ( Switching region )
> > --------------------------------------------------------------------------
> > ...
> > xen_mpumap[max_xen_mpumap - 1] : Xen static heap
> > ( Fixed MPU region defined in C )
> >
> > If you're fine with it, then in the next series I'll use this layout, to
> > keep both the assembly and the re-org process simple.
> 
> I am ok in principle with the layout you propose. My main requirement is
> that the region used in assembly are fixed.
> 
> Cheers,
> 
> --
> Julien Grall

Cheers,

--
Penny Zheng

 

