
Re: [XenPPC] Xencomm virtual address translation question?



On Sun, 2007-01-28 at 10:40 -0500, Jimi Xenidis wrote:
> Ok, xencomm is simply a data structure (we'll call it the descriptor,
> or "desc") that describes the physical memory occupied by another data
> structure (we'll call it "data").
> 
> Sometimes this "data" is self-describing: all of it fits on a single
> page, or it is known to be physically contiguous, so it is tagged as
> "inline", no separate descriptor is necessary, and therefore no
> allocation (or free) is required. So let's only consider the cases
> where allocation is required.
> 
> There are two methods to create this desc: (1) _map_early() and (2)  
> _map().
> 
> _map_early() is used when the descriptor is allocated on the stack and
> the data is known to be a small number of pages (fewer than 4). This
> helps performance and covers the cases where the allocator is
> unavailable (too early in boot, or in interrupt context). The
> descriptor returned from this call does not require a corresponding
> free().
> BTW: the above paragraph makes me want to rename _map_early() to
> _map_auto() or _map_stack(), which would make it obvious why no free()
> is needed. Hollis?
> 
> Now to answer your question:
> 
> In the case of _map(), if the data cannot be "inlined" then a single
> kernel page is allocated to describe up to 511 (+/- 1) pages of data.
> That page is easily translated between kernel virtual and kernel
> physical addresses using the __va() and __pa() functions, so there
> should be no problem in using:
>     desc = __pa(alloc_page());
> and:
>     free_page(__va(desc));
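
For reference, the descriptor being discussed is roughly the following
shape (illustrative only; the field names are assumptions, not copied
from the patch). One 4K page of 64-bit entries is what lets a single
descriptor page cover about 511 pages of data:

    #include <linux/types.h>

    /* Illustrative layout only, not the layout from the patch. */
    struct xencomm_desc_sketch {
            uint32_t nr_addrs;      /* number of valid entries below      */
            uint64_t address[0];    /* physical address of each data page */
    };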


Well, this (using __pa()/__va() on the descriptor page) is exactly what
is at issue. Actually this was the first thing I thought of, except that
with xencomm_pa(), if the data is not physically contiguous we get back
a physical address that (I believe) cannot be correctly resolved by
__va(). That hints at part of the problem: it's fine if the data is
inline, but in the case where it's not inline there seems to be a
problem.
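
To make the concern concrete, here is a minimal illustration (plain
Linux API, nothing xencomm-specific; the function name is made up) of
why __va() only inverts __pa() for linearly mapped memory:

    #include <linux/kernel.h>
    #include <linux/gfp.h>
    #include <linux/mm.h>
    #include <linux/slab.h>
    #include <linux/vmalloc.h>
    #include <asm/io.h>
    #include <asm/page.h>

    /* Illustration only: __va() inverts __pa() just for the linear map. */
    static void va_pa_roundtrip_demo(void)
    {
            void *lin  = kmalloc(PAGE_SIZE, GFP_KERNEL);  /* linearly mapped    */
            void *scat = vmalloc(4 * PAGE_SIZE);          /* possibly scattered */

            if (!lin || !scat)
                    goto out;

            /* Round-trips cleanly: the buffer lives in the linear map. */
            WARN_ON(__va(__pa(lin)) != lin);

            /*
             * A vmalloc'd buffer has no single physical address; each page
             * has to be translated on its own, and __va() on such a
             * per-page physical address gives the linear-map alias of that
             * page, not the original vmalloc address needed for vfree().
             */
            WARN_ON(__va(page_to_phys(vmalloc_to_page(scat))) == scat);

    out:
            vfree(scat);
            kfree(lin);
    }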

> 
> BTW: xencomm_pa() should only be used when building the contents of
> "desc" in order to describe "data", _not_ when creating the value of
> "desc" itself; that should always be __pa().

OK, that explains a lot. If that's the case then forget what I just
said above; I need to change my function so it does not return
xencomm_pa() for the descriptor, and then everything should be fine.
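
Concretely, the shape being discussed is roughly this (a sketch with
placeholder names, not the actual patch; __get_free_page() is used here
because it hands back a linear-map virtual address, and the descriptor
fill is elided):

    #include <linux/gfp.h>
    #include <asm/page.h>

    struct xencomm_desc;    /* layout as in the patch; not needed here */

    static unsigned long map_sketch(void *data, unsigned long bytes)
    {
            struct xencomm_desc *desc;

            /* (inline / physically contiguous case omitted) */

            desc = (struct xencomm_desc *)__get_free_page(GFP_KERNEL);
            if (desc == NULL)
                    return 0;       /* error convention is a placeholder too */

            /* ... fill desc with xencomm_pa(data + offset), page by page ... */

            return __pa(desc);      /* __pa(), not xencomm_pa() */
    }

    static void free_sketch(unsigned long desc_pa)
    {
            /* valid precisely because desc_pa came from __pa() above */
            free_page((unsigned long)__va(desc_pa));
    }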


> 
> 
> -JX
> 
> On Jan 27, 2007, at 8:13 PM, Jerone Young wrote:
> 
> > I'm facing an interesting problem and I was wondering if anyone on the
> > list (Jimi) could answer it.
> >
> > So in the xencomm patch that has been floating around the list I have
> > the functions xencomm_map & xencomm_map_early. Currently they return
> > physical addresses regardless of the case. BUT, in the cases where the
> > data is not inline I face a BIG problem, as I have no way to truly
> > translate back to a virtual address to free the memory with
> > xencomm_free (__va() isn't going to cut it). So I currently have a
> > patch (yet to go to the list) that has xencomm_map & _early returning
> > a physical address if it's inline and a virtual address if it is not.
> >
> > Then I have a hack in xencomm_vtop that says: if the address has the
> > XENCOMM_INLINE_FLAG bit set, just return the address, since it is
> > already physical (see the sketch below the quoted message). That way,
> > when xencomm_pa() is used within the code it will return the proper
> > address.
> >
> > Is this acceptable? I'm not sure of any other way of going about
> > this, since there is no good way to translate back to a virtual
> > address to use with xencomm_free.
> >
> >
> 
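
For context, the xencomm_vtop() workaround described in the quoted
question amounts to roughly this (a sketch only; the flag value and the
non-inline path are assumptions, not taken from the patch, and per the
discussion above it becomes unnecessary once xencomm_map() returns
__pa(desc)):

    #include <asm/page.h>

    #define XENCOMM_INLINE_FLAG_SKETCH      (1UL << 63)     /* assumed value */

    static unsigned long vtop_sketch(unsigned long addr)
    {
            /* "Inline" descriptors already carry a physical address. */
            if (addr & XENCOMM_INLINE_FLAG_SKETCH)
                    return addr;

            /* otherwise it is a kernel virtual address in the linear map */
            return __pa(addr);
    }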


_______________________________________________
Xen-ppc-devel mailing list
Xen-ppc-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ppc-devel


 

