Re: [Xen-devel] [v3,11/41] mips: reuse asm-generic/barrier.h
Paul,

On Thu, Jan 14, 2016 at 02:20:46PM -0800, Paul E. McKenney wrote:
> On Thu, Jan 14, 2016 at 01:24:34PM -0800, Leonid Yegoshin wrote:
> > It is not so simple, I mean "local ordering for address and data
> > dependencies". Local ordering is NOT enough. It happens that current
> > MIPS R6 doesn't require smp_read_barrier_depends() in your example,
> > but in discussion it comes out that it may not. Because without
> > smp_read_barrier_depends() your example can be part of Will's
> > WRC+addr+addr, and we found a design which can easily bump into
> > this test. And that design actually performs "local ordering for
> > address and data dependencies" too.
>
> As noted in another email in this thread, I do not believe that
> WRC+addr+addr needs to be prohibited. Sounds like Will and I need to
> get our story straight, though.

I think you figured this out while I was sleeping, but just to confirm:

 1. The MIPS64 ISA doc [1] talks about SYNC in a way that applies only
    to memory accesses appearing in *program order* before the SYNC.

 2. We need WRC+sync+addr to work, which means that the SYNC in P1 must
    also capture the store in P0 as being "before" the barrier. Leonid
    reckons it works, but his explanation [2] focussed on the address
    dependency in P2 as the reason why. If that is the case (i.e. the
    address dependency provides global transitivity), then WRC+addr+addr
    should also work (even though it's not required to); both tests are
    sketched after the references below.

 3. It seems that WRC+addr+addr doesn't work, so I'm still suspicious
    about WRC+sync+addr, because neither the architecture document nor
    Leonid's explanation tells me that it should be forbidden.

Will

[1] https://imgtec.com/?do-download=4302
[2] http://lkml.kernel.org/r/569565DA.2010903@xxxxxxxxxx
    (scroll to the end)
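For concreteness, here is my sketch of WRC+sync+addr written with Linux
kernel primitives. The names (x, y, r1-r3) are mine rather than anything
from Leonid's mail, and the (r2 - r2) idiom is litmus-test notation for
an address dependency, not something a real compiler would preserve:

  int x, y;       /* both initially zero */

  /* P0 */
  void p0(void)
  {
          WRITE_ONCE(x, 1);
  }

  /* P1 */
  void p1(void)
  {
          int r1 = READ_ONCE(x);
          smp_mb();               /* the SYNC in question */
          WRITE_ONCE(y, 1);
  }

  /* P2 */
  void p2(void)
  {
          int r2 = READ_ONCE(y);
          /* address dependency from r2 to the load of x */
          int r3 = READ_ONCE(*(&x + (r2 - r2)));
  }

The outcome we need forbidden is (r1 == 1 && r2 == 1 && r3 == 0): once
P1 has observed P0's store and ordered its own store after it with SYNC,
P2 must not observe y == 1 and still read the stale x == 0. WRC+addr+addr
is the same test with P1's smp_mb() replaced by an address dependency
from r1 to the store to y, e.g. WRITE_ONCE(*(&y + (r1 - r1)), 1).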