
Re: [Xen-devel] question about memory allocation for driver domain



Hi Oleksandr,

On 05/02/2015 21:49, Oleksandr Tyshchenko wrote:
On Thu, Feb 5, 2015 at 3:12 PM, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
On Wed, 2015-02-04 at 18:47 +0200, Oleksandr Tyshchenko wrote:
Hi, all.

We have begun to use a driver domain on the OMAP5 platform.
To make the driver domain run on OMAP5 we need its memory
mapped 1:1, because the platform lacks SMMU support.
To satisfy this requirement we have a temporary solution which
works but is not complete and does not look good from the Xen
perspective. The main question is about the memory allocator in Xen.

We did the following steps:
1. toolstack:
- allow allocating 128/256/512 MB memory chunks
- add the ability to set rambase_pfn via the cfg file

2. hypervisor:
- allocate driver domain memory 1:1
   - mark the domain with id 1 as privileged
   - update the memory allocation logic for such a domain
   - allocate memory in the same way as for domain 0
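The cfg-file side of step 1 might look like the sketch below. Note that `rambase_pfn` is our addition, not an upstream xl option, and the base address is an example value:

```
# domd.cfg -- driver domain with 1:1 memory (rambase_pfn is our custom option)
name = "domd"
kernel = "/boot/zImage-domd"
memory = 256                  # must be one of the 128/256/512 MB chunk sizes
rambase_pfn = "0x98000"       # host PFN where the 1:1 chunk starts (example)
```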

But we have encountered one issue related to the memory allocation algorithm.
Let me describe what I mean.
Our requirement is to allocate one specific chunk if it is present in the domheap.
We have a "spec_mfn", which is the "rambase_pfn" for the driver domain we
want to map 1:1. We can get it from the extent list. So we need to
allocate the chunk which corresponds to "spec_mfn".
In other words, we need to allocate a known chunk of memory. But, if I
understand correctly, the existing allocator doesn't allow us to
do that directly.

There are some thoughts on how to do that, but first we need to
choose the right direction:
1. Add a separate memory allocator which allows us to allocate a specified
chunk if it is present, and use it only when we need to allocate
memory 1:1 for a driver domain. We can pass a boolean variable via the cfg
file to indicate what we want (1:1 or not).
2. Don't add a separate allocator. Modify the existing allocator to add
the ability to allocate a specified chunk.
3. Don't add or modify anything. Let the allocator work as usual,
return the mfn of the allocated chunk, and correct the default rambase_pfn
in the toolstack.
This is what we actually do at the moment, but manually: we look at
the mfn we got and correct the "rambase_pfn" property in the cfg file.
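To make option #2 concrete, here is a simplified model of the lookup it would need. This is not real Xen code: the real heap in xen/common/page_alloc.c is a set of per-zone/per-node buddy free lists, flattened here to a plain array purely for illustration:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Toy model of a buddy-style free pool: each entry is a free block of
 * 2^order pages starting at start_mfn.  Real Xen keeps these in
 * per-zone/per-node linked lists, which is why this lookup is expensive. */
struct free_block {
    uint64_t start_mfn;
    unsigned int order;
    bool free;
};

/* Option #2 in essence: scan the free pool for the block containing
 * spec_mfn and claim it.  Returns the block index, or -1 if the chunk
 * is not available (caller must fall back to normal allocation). */
static int claim_block_at(struct free_block *pool, size_t n, uint64_t spec_mfn)
{
    for (size_t i = 0; i < n; i++) {
        uint64_t end = pool[i].start_mfn + (1ULL << pool[i].order);
        if (pool[i].free && spec_mfn >= pool[i].start_mfn && spec_mfn < end) {
            pool[i].free = false;   /* take it out of the free pool */
            return (int)i;
        }
    }
    return -1;
}
```

Even in this toy form the cost is visible: the pool is indexed by size and zone, not by address, so finding a specific mfn is a linear scan.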

Could someone explain to me what the right way is?

The approach we've taken with dom0 is not to require specific addresses,
but rather to tailor the address map to the contiguous region the
allocator gives us.

Since I presume you are using a 1:1 layout in the driver domain too I
expect that approach should work there too (I think this is your #3?).
yes

Your #2 might be possible, but would probably involve a reasonably
expensive scan of the various free pools in the hopes of finding the
block you want, since it isn't designed to be looked up in this way.

I suppose #1 would be something like Linux's CMA allocator -- i.e.
carving out 1 or more regions on boot and keeping them away from the
main allocator (but still in e.g. the frametable etc) and a simple way
to allocate one of the chunks.
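A boot-time carve-out along those lines could be as simple as the sketch below (hypothetical code, not an existing Xen interface; base address and size are example values). The carved-out regions would be skipped when populating the buddy allocator but would still get frametable entries:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical boot-time carve-out, in the spirit of Linux's CMA:
 * regions listed here are never handed to the main allocator. */
struct carveout {
    uint64_t base_mfn;
    uint64_t nr_pages;
    int      in_use;    /* 0 = available for a 1:1 driver domain */
};

static struct carveout carveouts[] = {
    { 0x98000, 0x10000, 0 },   /* 256 MB at an example base address */
};

/* Hand one whole carve-out to a driver domain.  There is no splitting
 * or merging, which is what keeps this "allocator" trivial. */
static struct carveout *alloc_carveout(void)
{
    for (size_t i = 0; i < sizeof(carveouts) / sizeof(carveouts[0]); i++) {
        if (!carveouts[i].in_use) {
            carveouts[i].in_use = 1;
            return &carveouts[i];
        }
    }
    return NULL;    /* all regions taken */
}
```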
This is interesting. Can I use /memreserve/ logic or something similar to
keep them away from the main allocator?
But at the stage where we "initialize" the domheap we know nothing about
the guest domain (domd) and how we need to allocate memory for it (1:1 or not).
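For reference, a /memreserve/ entry in a device tree looks like this; it asks whoever parses the DTB to keep the range out of its allocator (addresses are example values):

```
/dts-v1/;

/* Keep 256 MB at 0x98000000 away from the allocator (example values). */
/memreserve/ 0x98000000 0x10000000;

/ {
};
```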

The first approach we had for DOM0 1:1 mapping was book-keeping memory at startup, so the main allocator wasn't able to use it.

But this turned out to be very hackish and not scalable (multiple memory banks...).


So I think any of the approaches you suggest could work, I'd probably
say #3, #1, #2 in decreasing order of preference.
I got it.

If I understand solution #3 correctly, it may end up with lots of small banks after domains have been created/destroyed multiple times.


Now, if you were asking for ideas on how to make this stuff
upstreamable, well that's a harder question. I'm not really sure :-/
Ideally, I would like to make this stuff upstreamable.

As you said in your first mail, this is required to allow DMA for device passthrough when the platform doesn't have an SMMU.

I totally understand such a use case for a specific embedded product, because you trust your driver domains.

Upstream, the main use case of driver domains is to protect the platform from buggy drivers. If a driver crashes, you can restart only that domain.

With a device protected by an SMMU, any DMA request is safe. Without one, the driver can do pretty much whatever it wants, which could lead to hypervisor corruption or a crash.

If such a feature were to come to Xen upstream, it should be hidden from mainstream users (maybe via a compilation option) or require a specific option to enable it.

Regards.

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

