
Re: [Xen-devel] [PATCH v3 3/5] xen/x86: introduce more cache maintenance operations



On Fri, 10 Oct 2014, Jan Beulich wrote:
> >>> On 08.10.14 at 15:00, <stefano.stabellini@xxxxxxxxxxxxx> wrote:
> > --- a/xen/include/asm-x86/page.h
> > +++ b/xen/include/asm-x86/page.h
> > @@ -346,6 +346,10 @@ static inline uint32_t cacheattr_to_pte_flags(uint32_t cacheattr)
> >  
> >  /* No cache maintenance required on x86 architecture. */
> >  static inline void flush_page_to_ram(unsigned long mfn) {}
> > +static inline void clean_xen_dcache_va_range(const void *p, unsigned long size) {}
> > +static inline void invalidate_xen_dcache_va_range(const void *p, unsigned long size) {}
> > +static inline void clean_and_invalidate_xen_dcache_va_range
> > +    (const void *p, unsigned long size) {}
> 
> Mind explaining what purpose the "_xen" in these names serves?

It means a Xen virtual address (as opposed to a guest p2m address or a
guest virtual address), but I guess that is obvious enough, so should I
just drop the prefix?


> Also, is simply stubbing these out correct? I.e. there are caches on
> x86, and those can both be invalidated and cleaned, so why would
> you not do so here?

Good point. The problem is that, to be sure I am doing the right thing,
I would need to know what kind of flushes a non-coherent device would
require on x86, but I don't have any examples at hand.

Going by the AMD64 manual, I would implement invalidate with CLFLUSH
and clean-and-invalidate with WBINVD (even though the latter operates
on the entire cache, so it is very heavyweight), and I would have no
way to implement a plain "clean". Do you agree?



 

