
Re: [Xen-devel] Accessing Dom0 Physical memory from xen, via direct mappings (PML4:262-271)



On Tue, Mar 13, 2012 at 9:08 AM, Tim Deegan <tim@xxxxxxx> wrote:
At 08:45 -0700 on 13 Mar (1331628358), Shriram Rajagopalan wrote:
> In config.h (include/asm-x86/config.h) I found this:
>
> #if __x86_64__
> ...
>  *  0xffff830000000000 - 0xffff87ffffffffff [5TB, 5*2^40 bytes, PML4:262-271]
>  *    1:1 direct mapping of all physical memory.
> ...
>
> I was wondering if it's possible for dom0 to malloc a huge chunk of memory
> and let xen know the starting address of this range.
> Inside xen, I can translate the dom0 virt address to a virt address in the above
> range and access the entire chunk via these virtual addresses.

Eh, maybe?  But it's harder than you'd think.  Memory malloc()ed in dom0
may not be contiguous in PFN-space, and dom0 PFNs may not be contiguous
in MFN-space, so you can't just translate the base of the buffer and use
it with offsets; you have to translate again for every 4k page.  Also, you
need to make sure dom0 doesn't page out or relocate your user-space
buffer while Xen is accessing the MFNs.  mlock() might do what you need,
but AIUI there's no guarantee that the kernel won't, e.g., move the buffer
around to defragment memory.


Yep. I am aware of the above issues. As far as contiguity is concerned,
I was hoping (*naively/lazily*) that if I allocate a huge chunk (1G or so)
using posix_memalign, it would start at a page boundary and also be contiguous
*most* of the time.  I need this setup only for some temporary analysis and
not for a production-quality system.
And the machine has more than enough RAM, with swap usage staying at 0 all the time.
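For illustration, a minimal dom0 userspace sketch of that plan, combined with Tim's per-page advice above: allocate page-aligned, pin with mlock(), then read /proc/self/pagemap to find the frame number behind each 4k page and check contiguity explicitly rather than assuming it. The pagemap entry layout (bits 0-54 hold the frame number) is standard Linux; the 1G size and the error handling are illustrative only, and in a PV dom0 the numbers read back are PFNs, not MFNs, so the PFN-to-MFN step Tim mentions still remains.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define PAGE_SIZE_4K 4096UL

int main(void)
{
    size_t bytes = 1UL << 30;                     /* the 1G chunk discussed above */
    uint64_t prev_pfn = 0;
    void *buf;
    int fd;

    if (posix_memalign(&buf, PAGE_SIZE_4K, bytes) != 0)
        return 1;
    memset(buf, 0, bytes);                        /* fault every page in */
    if (mlock(buf, bytes) != 0)                   /* pin the pages; best effort */
        perror("mlock");

    fd = open("/proc/self/pagemap", O_RDONLY);    /* needs root */
    if (fd < 0)
        return 1;

    for (size_t off = 0; off < bytes; off += PAGE_SIZE_4K) {
        uint64_t entry, pfn;
        uint64_t vpn = ((uintptr_t)buf + off) / PAGE_SIZE_4K;

        if (pread(fd, &entry, sizeof(entry), vpn * sizeof(entry)) != sizeof(entry))
            break;
        pfn = entry & ((1ULL << 55) - 1);         /* bits 0-54: frame number */

        if (off != 0 && pfn != prev_pfn + 1)
            printf("discontiguous at offset %#zx\n", off);
        prev_pfn = pfn;
    }

    close(fd);
    return 0;
}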
 

If you do handle all that, the correct way to get at the mappings in
this range is with map_domain_page().  But remember, this only works
like this on 64-bit Xen.  On 32-bit, only a certain amount of memory can
be mapped at one time, so if the buffer is really big, you'll need to map
and unmap parts of it on demand.
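To make that concrete, a hedged sketch of the per-page access pattern inside Xen, assuming the caller has already translated the dom0 virtual address of the page in question to an MFN; in the 4.1-era tree map_domain_page() takes a raw MFN, and the helper name and arguments here are illustrative:

#include <xen/types.h>
#include <xen/domain_page.h>
#include <xen/string.h>

/* Copy a small amount of data into one 4k page of the dom0 buffer.
 * offset + len must not cross the page boundary. */
static void bytemap_write_page(unsigned long mfn, unsigned int offset,
                               const void *src, size_t len)
{
    char *va = map_domain_page(mfn);  /* on 64-bit this is just a pointer
                                       * into the 1:1 direct map */

    memcpy(va + offset, src, len);

    unmap_domain_page(va);            /* cheap on 64-bit, but required for
                                       * the 32-bit case to work at all */
}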


64-bit. The comments I pointed out were under the #if __x86_64__ region.

But maybe back up a bit: why do you want to do this?  What's the buffer
for?  Is it something you could do more easily by having Xen allocate
the buffer and let dom0 map it?


Well, the buffer acts as a huge log-dirty "byte" map (a byte per word).
I am skipping the reason for this huge byte map, for the sake of brevity.

Can I have Xen allocate this huge buffer? (A byte per 8-byte word means about
128M for a 1G guest.) And if I were to have this byte map per vcpu, it would mean
512M worth of RAM for a 4-vcpu guest.

Is there a way I could increase the Xen heap size to be able to allocate this much memory?
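On 64-bit Xen of this vintage the xenheap is not the small fixed region it is on 32-bit, so a 128M allocation should be possible without tuning anything. A minimal sketch of the Xen-side allocation, with the size, naming and error handling purely illustrative:

#include <xen/errno.h>
#include <xen/mm.h>
#include <xen/string.h>
#include <asm/page.h>

/* One byte per 8-byte word of a 1G guest: 128M. */
#define BYTEMAP_BYTES (128UL << 20)

static void *bytemap;

static int bytemap_alloc(void)
{
    unsigned int order = get_order_from_bytes(BYTEMAP_BYTES); /* order 15 */

    bytemap = alloc_xenheap_pages(order, 0);
    if ( bytemap == NULL )
        return -ENOMEM;

    memset(bytemap, 0, BYTEMAP_BYTES);
    return 0;
}

static void bytemap_free(void)
{
    free_xenheap_pages(bytemap, get_order_from_bytes(BYTEMAP_BYTES));
    bytemap = NULL;
}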
And how do I map the Xen memory in dom0? I vaguely remember seeing similar code in
xentrace, but if you could point me in the right direction, it would be great.
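For reference, the xentrace arrangement is roughly: Xen allocates the trace buffers from the xenheap, marks the pages as sharable with privileged guests, and reports their MFNs to dom0, which then maps them through libxc using DOMID_XEN. A hedged dom0-side sketch along those lines; how the MFN reaches userspace (some custom hypercall or sysctl of your own) is assumed rather than shown:

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <xenctrl.h>

#define BYTEMAP_BYTES (128UL << 20)

int main(int argc, char *argv[])
{
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    unsigned long mfn;
    void *map;

    if ( xch == NULL || argc < 2 )
        return 1;

    /* Starting MFN of the Xen-allocated buffer, however your own
     * hypercall/sysctl chose to report it back to dom0. */
    mfn = strtoul(argv[1], NULL, 0);

    /* DOMID_XEN tells libxc these frames belong to Xen itself. */
    map = xc_map_foreign_range(xch, DOMID_XEN, BYTEMAP_BYTES,
                               PROT_READ | PROT_WRITE, mfn);
    if ( map == NULL )
        perror("xc_map_foreign_range");

    xc_interface_close(xch);
    return 0;
}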


> The catch here is that I want this virtual address range to be accessible
> across all vcpu contexts in xen (whether it's servicing a hypercall from dom0
> or a vmx fault caused by the guest).
>
> So far, I have only been able to achieve the former. In the latter case,
> where the "current" vcpu belongs to a guest (e.g. in a vmx fault handler),
> I can't access this address range inside xen. Do I have to add EPT
> mappings to the guest's p2m to do this? Or can I do something else?

If you really have got a pointer into the 1:1 mapping, it should work
from any vcpu.  But again, that's not going to work on 32-bit Xen.
There, you have to use map_domain_page_global() to get a mapping that
persists across all vcpus, and that's even more limited in how much it
can map at once.
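A tiny sketch of that variant, for contrast with the map_domain_page() pattern earlier; the wrapper names are illustrative:

#include <xen/domain_page.h>

/* A mapping that stays valid in every vcpu/context until torn down.
 * The global mapping area is small, so this is for a page or two,
 * not a whole 128M buffer. */
static void *bytemap_map_global(unsigned long mfn)
{
    return map_domain_page_global(mfn);   /* NULL if the global area is full */
}

static void bytemap_unmap_global(void *va)
{
    unmap_domain_page_global(va);
}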

Cheers,

Tim.


cheers
shriram

 

