
Re: [PATCH v2 13/40] xen/mpu: introduce unified function setup_early_uart to map early UART



Hi Penny,

On 02/02/2023 08:05, Penny Zheng wrote:
-----Original Message-----
From: Julien Grall <julien@xxxxxxx>
Sent: Thursday, February 2, 2023 3:27 AM
To: Penny Zheng <Penny.Zheng@xxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx
Cc: Wei Chen <Wei.Chen@xxxxxxx>; Stefano Stabellini
<sstabellini@xxxxxxxxxx>; Bertrand Marquis <Bertrand.Marquis@xxxxxxx>;
Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
Subject: Re: [PATCH v2 13/40] xen/mpu: introduce unified function
setup_early_uart to map early UART



On 01/02/2023 05:36, Penny Zheng wrote:
Hi Julien

Hi Penny,


Hi Julien,

[...]
xen_mpumap[3] : Xen read-write data
xen_mpumap[4] : Xen BSS
xen_mpumap[5] : Xen static heap
xen_mpumap[6] : Guest memory section

Why do you need to map the "Guest memory section" for the idle vCPU?


Hmmm, "Guest memory section" here refers to *ALL* guest RAM address
range with only EL2 read/write access.

For what purpose? Earlier, you said you had a setup with a limited number
of MPU entries. So it may not be possible to map all the guests' RAM.


The "Guest memory section" I referred to here is the memory section defined
by my newly introduced device tree property, "mpu,guest-memory-section = <...>".
It will include *ALL* guest memory.

Let me give an example to illustrate why I introduced it and how it shall work:
On an MPU system, all guest RAM *MUST* be statically configured through
"xen,static-mem" under the domain node.
We found that, with more and more guests, the scattering of "xen,static-mem"
may exhaust MPU regions very quickly. TBH, at that time, I didn't figure out
that I could map/unmap on demand.
On an MMU system we never encounter this issue, because
setup_directmap_mappings() maps the whole system RAM into the directmap area
for Xen to access in EL2.
So instead, "mpu,guest-memory-section" was introduced to limit the scattering
and to map everything in advance; we enforce that users ensure all guest RAM
(described through "xen,static-mem") is included within
"mpu,guest-memory-section".
e.g.
e.g.
mpu,guest-memory-section = <0x0 0x20000000 0x0 0x40000000>;
DomU1:
xen,static-mem = <0x0 0x40000000 0x0 0x20000000>;
DomU2:
xen,static-mem = <0x0 0x20000000 0x0 0x20000000>;
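The containment rule above could be validated at boot with a simple range
check. Below is a minimal, self-contained sketch; the type and function names
(mem_range, static_mem_within_section) are hypothetical and not taken from the
patch series, which would use Xen's own paddr_t and bootinfo structures:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical range type; the real code would use Xen's paddr_t. */
struct mem_range {
    uint64_t base;
    uint64_t size;
};

/*
 * Check that a guest's "xen,static-mem" bank lies entirely within the
 * "mpu,guest-memory-section" range, as the proposed binding requires.
 */
static bool static_mem_within_section(const struct mem_range *sec,
                                      const struct mem_range *bank)
{
    return bank->base >= sec->base &&
           bank->base + bank->size <= sec->base + sec->size;
}
```

With the example values above, both DomU1 ([0x40000000, 0x60000000)) and
DomU2 ([0x20000000, 0x40000000)) fall inside the section
([0x20000000, 0x60000000)), so the check passes for both.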

Xen should only need to access the guest memory in hypercalls and
scrubbing. In both cases you could map/unmap on demand.


Thanks for the explanation.
In my understanding, during boot-up there are two spots where Xen may touch
guest memory:
one is synchronous scrubbing in unprepare_staticmem_pages();
the other is copying the kernel image into guest memory.
In both cases, we could map/unmap on demand.
And if you think map/unmap on demand is better than "mpu,guest-memory-section",
I'll try to fix it in the next series.

I think it would be better for a few reasons:
 1) You are making the assumption that all the RAM for the guests is contiguous. This may not be true for various reasons (e.g. split banks...).
 2) It reduces the amount of work for the integrator.
 3) It increases the defense in the hypervisor, because it is more difficult to access guest memory if there is a breakage.

I don't expect major rework because you could plug the update to the MPU in map_domain_page().
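To make the map/unmap-on-demand idea concrete, here is a minimal model of
claiming and releasing an EL2 MPU region per mapping, in the spirit of what
map_domain_page()/unmap_domain_page() could do. Everything here is a sketch:
NR_MPU_REGIONS, mpu_map_range() and mpu_unmap_range() are invented names, and
a real implementation would program PRBAR/PRLAR and synchronize with isb()
rather than update a plain array:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define NR_MPU_REGIONS 8 /* assumed number of available EL2 MPU regions */

struct mpu_region {
    bool     in_use;
    uint64_t base;
    uint64_t limit; /* exclusive */
};

static struct mpu_region xen_mpumap[NR_MPU_REGIONS];

/*
 * Claim a free MPU region covering [base, base + size).
 * Returns the region index, or -1 if all regions are exhausted.
 */
static int mpu_map_range(uint64_t base, uint64_t size)
{
    for (int i = 0; i < NR_MPU_REGIONS; i++) {
        if (!xen_mpumap[i].in_use) {
            xen_mpumap[i] = (struct mpu_region){ true, base, base + size };
            return i;
        }
    }
    return -1;
}

/* Release a region so the slot can be reused by the next mapping. */
static void mpu_unmap_range(int idx)
{
    if (idx >= 0 && idx < NR_MPU_REGIONS)
        xen_mpumap[idx].in_use = false;
}
```

The map_domain_page() hook would then be: map the page's range on entry,
access it, and unmap on exit, so only transiently-needed guest memory ever
occupies an MPU slot.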

I have looked at the code and this doesn't entirely answer my question.
So let me provide an example.

Xen can print to the serial console at any time. So Xen should be able to
access the physical UART even when it has context switched to the guest
vCPU.


I understand your concern about the "device memory section" with your
example here. True, the current implementation is buggy.

Yes, if vpl011 is not enabled in the guest and we instead pass a UART
through to the guest, in the current design Xen is not able to access the
physical UART while in guest mode.

So all the guests are using the same UART?

Cheers,

--
Julien Grall



 

