
Re: [Xen-devel] [PATCH v2] tools: libxc: flush data cache after loading images into guest memory



On Fri, 2013-12-13 at 12:01 +0000, Stefano Stabellini wrote:
> On Fri, 13 Dec 2013, Ian Campbell wrote:
> > On Fri, 2013-12-13 at 00:49 +0000, Julien Grall wrote:
> > > 
> > > On 12/12/2013 02:23 PM, Ian Campbell wrote:
> > > > On ARM, guest OSes are started with MMU and caches disabled (as
> > > > they are on native); however caching is enabled in the domain
> > > > running the builder and therefore we must flush the cache as we
> > > > load the blobs, otherwise when the guest starts running it may
> > > > not see them. The dom0 build in the hypervisor has the same
> > > > requirements and already does the right thing.
> > > >
> > > > The mechanism for performing a cache flush from userspace is OS
> > > > specific, so implement this as a new osdep hook:
> > > >
> > > >   - On 32-bit ARM, Linux provides a system call to flush the cache.
> > > >   - On 64-bit ARM, the processor is configured to allow cache
> > > >     flushes directly from userspace.
> > > >   - Non-Linux platforms will need to provide their own
> > > >     implementation. If similar mechanisms are not available then a
> > > >     new privcmd ioctl should be a suitable alternative.
> > > >
> > > > No cache maintenance is required on x86, so provide a stub for all
> > > > non-Linux platforms which returns success on x86 only and logs an
> > > > error otherwise.
> > > >
> > > > This fixes guest building on Xgene, which has a very large L3
> > > > cache and so is particularly susceptible to this problem. It has
> > > > also been observed sporadically on Midway.
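
(As an aside, to make the two Linux mechanisms described above concrete,
something along the lines of the sketch below. The helper name is
illustrative only, not the actual libxc osdep hook.)

#if defined(__arm__)
#include <asm/unistd.h>         /* __ARM_NR_cacheflush */
#endif
#include <stddef.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/syscall.h>

/* Sketch: clean the data cache for [ptr, ptr+size) after writing a blob
 * into guest memory. Illustrative only. */
static void flush_dcache_range(void *ptr, size_t size)
{
#if defined(__arm__)
    /* 32-bit: use the ARM-private cache flush system call (which, as
     * discussed below in this thread, only cleans to the PoU). */
    syscall(__ARM_NR_cacheflush, ptr, (char *)ptr + size, 0);
#elif defined(__aarch64__)
    /* 64-bit: CTR_EL0 and "dc cvac" are available at EL0, so clean each
     * cache line by VA and then synchronise with a barrier. */
    uint64_t ctr;
    asm volatile ("mrs %0, ctr_el0" : "=r" (ctr));
    size_t line = 4UL << ((ctr >> 16) & 0xf);   /* DminLine, in words */
    uintptr_t p = (uintptr_t)ptr & ~(line - 1);
    for ( ; p < (uintptr_t)ptr + size; p += line )
        asm volatile ("dc cvac, %0" :: "r" (p) : "memory");
    asm volatile ("dsb sy" ::: "memory");
#else
    (void)ptr; (void)size;      /* no maintenance needed on x86 */
#endif
}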
> > > 
> > > This patch doesn't solve the issue on Midway.
> > 
> > That's a shame. I think we should go ahead with this patch regardless,
> > since it does fix arm64 and introduces the infrastructure for arm32. I
> > think there is no harm in adding the syscall on arm32 for now.
> 
> I agree.
> I wonder if QEMU (qdisk) is going to need similar cache flushes.

I think for the PV driver case we are entitled to require that the rings
and the memory under I/O be held in cacheable RAM.

The alternative is that both the frontend and backend have to do cache
maintenance operations, which seems like a bit of a waste of everyone's
time when we know everything is RAM-based rather than real DMA.

Obviously for a qemu-dm style emulation we would have to do something,
but we don't support that today.

> > > The cacheflush syscall on ARM32 uses DCCMVAU (Data Cache Clean by MVA
> > > to PoU), which is not enough.
> > > As I understand the ARM ARM B2.2.6 (page B2-1275):
> > >      - PoC means the data will be written back to RAM
> > >      - PoU means that, within the same inner shareable domain, the
> > >        instruction cache, data cache and translation table walks will
> > >        see the same value for a given MVA. It doesn't mean that the
> > >        data will reach RAM.
> > 
> > This is essentially my understanding as well.
> > 
> > > I did some tests and indeed DCCMVAC (Data Cache Clean by MVA to PoC)
> > > resolves the problem on Midway (and generally on ARMv7).
> > 
> > Good.
> > 
> > > Unfortunately Linux doesn't provide any syscall to do this for ARMv7,
> > > and it's not possible to execute cache maintenance instructions from
> > > userspace. What we could do is:
> > >      - Use the "flags" parameter of the cacheflush syscall and call a
> > >        function which performs DCCMVAC (for instance
> > >        __cpuc_flush_dcache_area)
> > >      - Extend privcmd to have a cache flush ioctl
> > 
> > Personally I think the first is nicer, but ultimately we need input from
> > l-a-k on this one and would be happy with either.
> 
> I agree. Can you try to come up with such a patch?

I think Julien was going to investigate, but if he says not I'll take a
stab at it.
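
In case it is useful to whoever picks this up, below is a very rough
sketch of what the first option could look like on the kernel side. The
flag name and value are invented here, it glosses over the user-address
walking and fault handling the real syscall path does, and the actual
interface is obviously for l-a-k to decide.

/* Sketch only: extend the ARM cacheflush syscall so the existing "flags"
 * argument can request a clean to PoC. CACHEFLUSH_TO_POC is hypothetical. */
#include <asm/cacheflush.h>

#define CACHEFLUSH_TO_POC   (1 << 0)    /* hypothetical new flag */

static int do_cacheflush(unsigned long start, unsigned long end, int flags)
{
    if (end < start)
        return -EINVAL;

    if (flags & CACHEFLUSH_TO_POC) {
        /* Clean by MVA to PoC, which is what loading a guest image needs
         * (following Julien's suggestion of __cpuc_flush_dcache_area). */
        __cpuc_flush_dcache_area((void *)start, end - start);
        return 0;
    }

    /* Existing behaviour: I/D coherency to PoU for self-modifying code. */
    flush_icache_range(start, end);
    return 0;
}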

Ian.

