Re: [PATCH] Xen: Design doc for 1:1 direct-map and static allocation
On 18.05.2021 07:07, Penny Zheng wrote:
> +## Background
> +
> +Cases where static allocation is needed:
> +
> + * Static allocation is needed whenever a system has pre-defined,
> +non-changing behaviour. This is usually the case in the safety world, where
> +the system must behave the same upon reboot, so memory resources for both
> +Xen and domains should be static and pre-defined.
> +
> + * Static allocation is needed whenever a guest wants to allocate memory
> +from specific memory ranges. For example, a system has one high-speed RAM
> +region, and would like to assign it to one specific domain.
> +
> + * Static allocation is needed whenever a system needs a guest restricted
> +to some known memory area due to hardware limitations. For example, some
> +devices can only do DMA to a specific part of the memory.
This isn't a reason for fully static partitioning. Such devices also exist
in the x86 world, without there having been a need to statically partition
systems. All you want to guarantee is that for I/O purposes a domain has
_some_ memory in the accessible range.
> +Limitations:
> + * There is no consideration for PV devices at the moment.
How would PV devices be affected? Drivers would be better off using grant
transfers, but that's about it afaics.
> +## Design on Static Allocation
> +
> +Static allocation refers to a system or sub-systems (domains) for which
> +memory areas are pre-defined by configuration, using physical address ranges.
> +
> +This pre-defined memory -- static memory -- is reserved as part of RAM at
> +boot, and shall never go to the heap allocator or boot allocator for any use.
> +
> +### Static Allocation for Domains
> +
> +### New Device Tree Node: `xen,static-mem`
> +
> +Here a new `xen,static-mem` property is introduced to define static memory
> +regions for one specific domain.
> +
> +For domains on static allocation, users need to pre-define guest RAM
> +regions in configuration, through the `xen,static-mem` property under the
> +appropriate `domUx` node.
> +
> +Here is one example:
> +
> +
> + domU1 {
> + compatible = "xen,domain";
> + #address-cells = <0x2>;
> + #size-cells = <0x2>;
> + cpus = <2>;
> + xen,static-mem = <0x0 0xa0000000 0x0 0x20000000>;
> + ...
> + };
> +
> +512MB of RAM starting at 0xa0000000 is reserved as static memory for domU1
> +as its RAM.
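
For concreteness, a boot-time parser for such a property might look roughly
like this sketch. It uses plain libfdt accessors; parse_static_mem() and
struct static_bank are made-up names, and the cell layout is assumed to
match the example above (#address-cells = #size-cells = 2):

    /*
     * Sketch only: read a "xen,static-mem" property consisting of one
     * <address size> pair, each of two 32-bit cells, from a domU node.
     */
    #include <libfdt.h>
    #include <stdint.h>

    /* Hypothetical record of one static-memory bank. */
    struct static_bank {
        uint64_t start;
        uint64_t size;
    };

    static int parse_static_mem(const void *fdt, int domu_node,
                                struct static_bank *bank)
    {
        int len;
        const fdt32_t *cells = fdt_getprop(fdt, domu_node,
                                           "xen,static-mem", &len);

        /* Expect exactly 4 cells: 2 address cells plus 2 size cells. */
        if ( !cells || len != 4 * (int)sizeof(fdt32_t) )
            return -1;

        bank->start = ((uint64_t)fdt32_to_cpu(cells[0]) << 32) |
                      fdt32_to_cpu(cells[1]);
        bank->size  = ((uint64_t)fdt32_to_cpu(cells[2]) << 32) |
                      fdt32_to_cpu(cells[3]);

        return 0;
    }

With the example node above, this would yield start = 0xa0000000 and
size = 0x20000000 (i.e. 512MB).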
> +
> +### New Page Flag: `PGC_reserved`
> +
> +In order to differentiate pages reserved as static memory from those which
> +are allocated from the heap allocator for normal domains, and to manage
> +them, we shall introduce a new page flag, `PGC_reserved`.
This contradicts you saying higher up "shall never go to heap allocator
or boot allocator for any use" - no such flag ought to be needed if the
allocators never get to see these pages. And even if such a flag was
needed, I can't see how it would be sufficient to express the page ->
domain relationship.
> +Grant pages `PGC_reserved` when initializing static memory.
I'm afraid I don't understand this sentence at all.
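
For context, PGC_* flags live in the count_info field of struct page_info.
If such a flag were introduced, it would presumably follow the existing
pattern; the sketch below is illustrative only - the bit index is an
assumption and must not collide with flags the architecture already
defines, and page_is_reserved() is a hypothetical helper:

    /* Sketch, mirroring the style of existing PGC_* definitions in
     * asm/mm.h. The bit index (9) is purely illustrative. */
    #define _PGC_reserved     PG_shift(9)
    #define PGC_reserved      PG_mask(1, 9)

    /* A page carrying the flag could then be recognised like this: */
    static inline bool page_is_reserved(const struct page_info *pg)
    {
        return pg->count_info & PGC_reserved;
    }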
> +### New linked page list: `reserved_page_list` in `struct domain`
> +
> +Right now, for normal domains, when assigning pages to a domain, pages
> +allocated from the heap allocator as guest RAM are inserted into one linked
> +page list, `page_list`, for later management.
> +
> +So, in order to tell them apart, pages allocated from static memory shall
> +be inserted into a different linked page list, `reserved_page_list`.
> +
> +Later, when a domain gets destroyed and its memory is relinquished, only
> +pages in `page_list` go back to the heap; pages in `reserved_page_list`
> +shall not.
If such a domain can be destroyed (and re-created), how would the
association between memory and intended owner be retained / propagated?
Where else would the pages from reserved_page_list go (they need to go
somewhere, as the struct domain instance will go away)?
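
What the text proposes amounts to roughly the following sketch (only the
relevant fields of struct domain are shown; assign_static_page() is a
hypothetical helper name):

    struct domain {
        /* ... existing fields ... */
        struct page_list_head page_list;          /* pages from the heap */
        struct page_list_head reserved_page_list; /* static-memory pages */
        /* ... existing fields ... */
    };

    /* On assignment, a static page would go onto the new list instead of
     * the normal one: */
    static void assign_static_page(struct domain *d, struct page_info *pg)
    {
        if ( pg->count_info & PGC_reserved )
            page_list_add_tail(pg, &d->reserved_page_list);
        else
            page_list_add_tail(pg, &d->page_list);
    }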
> +### Memory Allocation for Domains on Static Allocation
> +
> +RAM regions pre-defined as static memory for one specific domain shall be
> +parsed and reserved from the beginning. They shall never go to any memory
> +allocator for any use.
> +
> +Later, when allocating static memory for this specific domain, after
> +acquiring those reserved regions, users need to do a set of verifications
> +before assigning.
> +For each page there, this at least includes the following steps:
> +1. Check whether it is in a free state and has a zero reference count.
> +2. Check whether the page is reserved (`PGC_reserved`).
If this memory is reserved for a specific domain, why is such verification
necessary?
> +Then, these pages are assigned to this specific domain, and all of them go
> +onto one new linked page list, `reserved_page_list`.
> +
> +At last, set up the guest P2M mapping. By default, it shall be mapped to
> +the fixed guest RAM addresses `GUEST_RAM0_BASE` and `GUEST_RAM1_BASE`, just
> +like normal domains. But later, in the 1:1 direct-map design, if
> +`direct-map` is set, the guest physical address will equal the physical
> +address.
I think you're missing "host" ahead of the 2nd "physical address"?
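
Taken together, the steps described would amount to something like the loop
below. This is a sketch only: acquire_static_pages() is a made-up name,
error handling is minimal, and guest_physmap_add_page() is used simply as
the existing way of establishing a P2M entry one page at a time.

    /* Sketch: verify a pre-reserved MFN range and assign it to a domain,
     * mapping it starting at the given guest frame. */
    static int acquire_static_pages(struct domain *d, mfn_t smfn,
                                    unsigned long nr_mfns, gfn_t sgfn)
    {
        unsigned long i;
        int rc;

        for ( i = 0; i < nr_mfns; i++ )
        {
            struct page_info *pg = mfn_to_page(mfn_add(smfn, i));

            /* Step 1: the page must be free, with a zero refcount. */
            if ( (pg->count_info & PGC_state) != PGC_state_free ||
                 (pg->count_info & PGC_count_mask) != 0 )
                return -EBUSY;

            /* Step 2: the page must carry the PGC_reserved flag. */
            if ( !(pg->count_info & PGC_reserved) )
                return -EINVAL;

            /* Record the page on the domain's reserved list. */
            page_list_add_tail(pg, &d->reserved_page_list);

            /* Establish the P2M mapping for this single page. */
            rc = guest_physmap_add_page(d, gfn_add(sgfn, i),
                                        mfn_add(smfn, i), 0 /* order */);
            if ( rc )
                return rc;
        }

        return 0;
    }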
> +### Static Allocation for Xen itself
> +
> +### New Device Tree Node: `xen,reserved_heap`
> +
> +Static memory for the Xen heap refers to parts of RAM reserved at boot for
> +the Xen heap only. This memory is pre-defined through Xen configuration,
> +using physical address ranges.
> +
> +The reserved memory for the Xen heap is an optional feature and can be
> +enabled by adding a device tree property in the `chosen` node. Currently,
> +this feature is only supported on AArch64.
The earlier "Cases where static allocation is needed" doesn't really seem to
cover any case where this would be needed for Xen itself. Without a need,
I don't see the point of having the feature.
> +## Background
> +
> +Cases where a domU needs a 1:1 direct-mapped memory map:
> +
> + * IOMMU not present in the system.
> + * IOMMU disabled if it doesn't cover a specific device and all the guests
> +are trusted. Consider a mixed scenario, with a few devices behind the IOMMU
> +and a few without: guest DMA security still could not be totally
> +guaranteed. So users may want to disable the IOMMU, to at least gain some
> +performance improvement.
> + * IOMMU disabled as a workaround when it doesn't have enough bandwidth.
> +To be specific, in a few extreme situations, when multiple devices do DMA
> +concurrently, these requests may exceed the IOMMU's transmission capacity.
> + * IOMMU disabled when it adds too much latency on DMA. For example, a TLB
> +may be missing in some IOMMU hardware, which may add latency to DMA, so
> +users may want to disable it in some real-time scenarios.
> +
> +*WARNING:
> +Users should be aware that it is not always secure to assign a device without
> +IOMMU/SMMU protection.
> +When the device is not protected by the IOMMU/SMMU, the administrator should
> +make sure that:
> + 1. The device is assigned to a trusted guest.
> + 2. Users have additional security mechanisms on the platform.
> +
> +Limitations:
> + * There is no consideration for PV devices at the moment.
Again I'm struggling to see how PV devices might be impacted.
Jan