Re: [Xen-devel] [PATCH 36/38] libxc: add ARM support to xc_dom (PV domain building)
On Thu, 2012-06-07 at 12:38 +0100, Stefano Stabellini wrote:
> > @@ -294,7 +295,7 @@ static inline xen_pfn_t xc_dom_p2m_guest(struct xc_dom_image *dom,
> > {
> > if (xc_dom_feature_translated(dom))
> > return pfn;
> > - return dom->p2m_host[pfn];
> > + return dom->p2m_host[pfn - dom->rambase_pfn];
> > }
>
> I take it that rambase_pfn is the offset in the guest physical address
> space where the RAM is located. It would be nice to write it down.
I've added this comment to the struct where rambase_pfn is defined:
* A PV guest has a single contiguous block of physical RAM,
* consisting of total_pages starting at rambase_pfn.
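For concreteness (illustrative numbers only, not taken from the patch or from any real configuration):

    /* Illustrative values only -- made up for this example:
     *   rambase_pfn = 0x80000   guest RAM starts at 2GB (4k pages)
     *   total_pages = 0x10000   256MB of RAM, i.e. guest pfns 0x80000-0x8ffff
     * so xc_dom_p2m_guest(dom, 0x80042) looks up p2m_host[0x42].
     */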
[...]
> > + /* setup initial p2m */
> > + for ( pfn = 0; pfn < dom->total_pages; pfn++ )
> > + dom->p2m_host[pfn] = pfn + dom->rambase_pfn;
>
> uhm.. so maybe rambase_pfn is the offset in the machine address space where
> the guest ram has been allocated?
ARM guests are hybrid so there isn't really a distinction between MFN
and PFN space as seen by the tools. We always deal in guest PFNs in the
tools (only the hypervisor can see the MFNs).
This is a bit confusing for a hybrid guest because the generic xc_dom_*
stuff expects to have a p2m, so we set up this weird 1:1 thing (actually
1:rambase_pfn+1, just to add to the confusion).
I've added a comment next to the definition of p2m_host:
/*
 * p2m_host maps guest physical addresses, expressed as an offset
 * from rambase_pfn (see below), into gfns.
 *
 * For a pure PV guest this means it maps GPFNs to MFNs; for a
 * hybrid guest it means it maps GPFNs to GPFNs.
 *
 * Note that the index into the array is offset by rambase_pfn.
 */
I'm not sure this doesn't just add to the confusion.
Also I fully expect Tim to complain that I've got my *FN terminology all
wrong :-P.
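To spell out the "1:rambase_pfn+1" remark, this is the arithmetic (just a sketch, not code from the patch):

    /* The table is filled with
     *     p2m_host[i] = i + rambase_pfn;        for i = 0 .. total_pages-1
     * and looked up as
     *     xc_dom_p2m_guest(dom, g) == p2m_host[g - rambase_pfn]
     *                              == (g - rambase_pfn) + rambase_pfn
     *                              == g
     * so for a hybrid guest every guest pfn maps back to itself; only the
     * array index is shifted down by rambase_pfn.
     */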
> > diff --git a/tools/libxc/xc_dom_armzimageloader.c b/tools/libxc/xc_dom_armzimageloader.c
> > new file mode 100644
> > index 0000000..220176d
> > --- /dev/null
> > +++ b/tools/libxc/xc_dom_armzimageloader.c
[...]
> > +#include "xg_private.h"
> > +#include "xc_dom.h"
> > +
> > +#include <arpa/inet.h> /* XXX ntohl is not the right function... */
>
> Yes, you are right, we should write a be32_to_cpu (the ones in Xen, QEMU
> and Linux are GPLv2 rather than LGPLv2).
I wonder if we can/should just declare that we handle only native endian
zImages.
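If we do end up having to cope with non-native headers, a trivial helper along these lines would be enough. This is only a sketch, and the name zimage_be32_to_cpu is made up rather than an existing Xen/Linux/QEMU function:

    #include <stdint.h>

    /* Sketch only (hypothetical helper, not from the patch): convert a
     * 32-bit value that was loaded directly from a big-endian header
     * field into host byte order, regardless of host endianness. */
    static inline uint32_t zimage_be32_to_cpu(uint32_t be)
    {
        const uint8_t *b = (const uint8_t *)&be;
        return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
               ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
    }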
> > diff --git a/tools/libxc/xc_dom_core.c b/tools/libxc/xc_dom_core.c
> > index fea9de5..b0d48d5 100644
> > --- a/tools/libxc/xc_dom_core.c
> > +++ b/tools/libxc/xc_dom_core.c
> > @@ -307,15 +307,17 @@ void *xc_dom_pfn_to_ptr(struct xc_dom_image *dom, xen_pfn_t pfn,
> > xen_pfn_t count)
> > {
> > struct xc_dom_phys *phys;
> > + xen_pfn_t offset;
> > unsigned int page_shift = XC_DOM_PAGE_SHIFT(dom);
> > char *mode = "unset";
> >
> > - if ( pfn > dom->total_pages || /* multiple checks to avoid overflows */
> > + offset = pfn-dom->rambase_pfn;
>
> spaces around the '-' please
Done.
> > + if ( offset > dom->total_pages || /* multiple checks to avoid overflows */
> > count > dom->total_pages ||
> > - pfn > dom->total_pages - count )
> > + offset > dom->total_pages - count )
> > {
> > - DOMPRINTF("%s: pfn out of range (0x%" PRIpfn " > 0x%" PRIpfn ")",
> > - __FUNCTION__, pfn, dom->total_pages);
> > + DOMPRINTF("%s: pfn %"PRI_xen_pfn" out of range (0x%" PRIpfn " > 0x%" PRIpfn ")",
> > + __FUNCTION__, pfn, offset, dom->total_pages);
> > return NULL;
> > }
> >
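As an aside, the reason that check stays split into three comparisons rather than using a single sum (sketch of the reasoning, not part of the patch):

    /* With unsigned xen_pfn_t arithmetic a naive
     *     if ( offset + count > dom->total_pages )
     * can wrap around for a huge offset or count and wrongly accept the
     * pfn.  Because || short-circuits, dom->total_pages - count is only
     * evaluated once count <= dom->total_pages is known, so it cannot
     * underflow and the final comparison is safe.
     */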
> > @@ -599,6 +601,8 @@ struct xc_dom_image *xc_dom_allocate(xc_interface *xch,
> > dom->parms.virt_hv_start_low = UNSET_ADDR;
> > dom->parms.elf_paddr_offset = UNSET_ADDR;
> >
> > + dom->rambase_pfn = 0;
> > +
> > dom->alloc_malloc += sizeof(*dom);
> > return dom;
>
> There is no need to explicitly set rambase_pfn to 0, because the whole
> dom struct is memset to 0 a few lines above.
Removed.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel