
Re: [Xen-devel] [PATCH v4 3/5] xen/arm: clean and invalidate all guest caches by VMID after domain build.



On Fri, 2014-02-07 at 14:49 +0000, Stefano Stabellini wrote:
> > diff --git a/tools/libxc/xc_dom_core.c b/tools/libxc/xc_dom_core.c
> > index 77a4e64..b9d1015 100644
> > --- a/tools/libxc/xc_dom_core.c
> > +++ b/tools/libxc/xc_dom_core.c
> > @@ -603,6 +603,8 @@ void xc_dom_unmap_one(struct xc_dom_image *dom, xen_pfn_t pfn)
> >          prev->next = phys->next;
> >      else
> >          dom->phys_pages = phys->next;
> > +
> > +    xc_domain_cacheflush(dom->xch, dom->guest_domid, phys->first, phys->count);
> 
> I am not sure I understand the semantics of xc_dom_unmap_one: from the
> name of the function and the parameters I would think that it is
> supposed to unmap just one page. In that case we should flush just one
> page. However the implementation calls
> 
> munmap(phys->ptr, phys->count << page_shift)
> 
> so it actually can unmap more than a single page, so I think that you
> did the right thing by calling xc_domain_cacheflush with phys->first and
> phys->count.

"one" in the name means "one struct xc_dom_phys", which is a range
starting at the given pfn, which is why using phys->{first,count} is
correct.

> > +void sync_page_to_ram(unsigned long mfn)
> > +{
> > +    void *p, *v = map_domain_page(mfn);
> > +
> > +    dsb();           /* So the CPU issues all writes to the range */
> > +    for ( p = v; p < v + PAGE_SIZE ; p += cacheline_bytes )
> 
> What about the last few bytes on a page?
> Can we assume that PAGE_SIZE is a multiple of cacheline_bytes?

I sure hope so! A non-power-of-two cache line size would be pretty
crazy!

The cache line size is always a power of two (the register encodes
log2 of the cache line size), and so is PAGE_SIZE.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

