
Re: [Xen-devel] read_atomic, write_atomic, add_sized



Hi Jan,

On 13/06/17 13:48, Jan Beulich wrote:
On 13.06.17 at 11:42, <julien.grall@xxxxxxx> wrote:
On 12/06/2017 16:38, Jan Beulich wrote:
On 12.06.17 at 17:19, <andre.przywara@xxxxxxx> wrote:
On 12/06/17 11:55, Jan Beulich wrote:
On 12.06.17 at 12:24, <julien.grall@xxxxxxx> wrote:
I am trying to understand why we decided to implement the helpers
read_atomic, write_atomic, add_sized in assembly rather than directly in C.

AFAICT an implementation in C similar to the Linux helpers WRITE_ONCE/READ_ONCE
would work here. Did I miss anything?

For one at least our current ACCESS_ONCE() doesn't allow non-
scalar types to be read/written, whereas {read,write}_atomic()
solely look at sizeof().
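
For reference, the difference can be sketched in C roughly as follows (simplified, hypothetical definitions, not the in-tree ones): ACCESS_ONCE() is a volatile cast, so it needs a type the compiler can load or store directly, whereas a read_atomic()-style helper only dispatches on sizeof():

        /* Simplified, hypothetical definitions (not the in-tree ones). */
        #define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

        #define read_atomic(p) ({                                            \
            typeof(*(p)) x_;                                                 \
            switch ( sizeof(*(p)) )                                          \
            {                                                                \
            case 1: *(uint8_t  *)&x_ = *(volatile uint8_t  *)(p); break;     \
            case 2: *(uint16_t *)&x_ = *(volatile uint16_t *)(p); break;     \
            case 4: *(uint32_t *)&x_ = *(volatile uint32_t *)(p); break;     \
            case 8: *(uint64_t *)&x_ = *(volatile uint64_t *)(p); break;     \
            }                                                                \
            x_;                                                              \
        })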

What is the rationale behind this?
I would assume that ACCESS_ONCE() does not care about atomicity; it is
basically a pretty version of a volatile cast, telling the compiler to
read the value exactly once (and not re-read it at will). So it should
be able to read an arbitrarily sized data structure, one chunk after
the other, but each chunk only once? This is what Linux supports, at least:
        switch (size) {
        ...
        default:
                barrier();
                __builtin_memcpy((void *)res, (const void *)p, size);
                barrier();
        }
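
For context, the full Linux helper that this excerpt comes from (include/linux/compiler.h in the 4.x kernels) looks roughly like this, with the memcpy() path above being the fallback for sizes it cannot handle as a single access:

        static __always_inline
        void __read_once_size(const volatile void *p, void *res, int size)
        {
                switch (size) {
                case 1: *(__u8 *)res = *(volatile __u8 *)p; break;
                case 2: *(__u16 *)res = *(volatile __u16 *)p; break;
                case 4: *(__u32 *)res = *(volatile __u32 *)p; break;
                case 8: *(__u64 *)res = *(volatile __u64 *)p; break;
                default:
                        barrier();
                        __builtin_memcpy((void *)res, (const void *)p, size);
                        barrier();
                }
        }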

That's a question to be answered by the parties involved in
commit 9d5617cd89 ("xen/arm: Fix ARM build following c/s
11c397c").

Looking at the usage in Linux, many places using
ACCESS_ONCE/WRITE_ONCE/READ_ONCE assume the access will be done in a single
instruction (see atomic_read/atomic_write on both x86 and ARM).
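
On Linux, those architecture definitions boil down to READ_ONCE()/WRITE_ONCE() on the counter, i.e. they rely on a naturally aligned int access being a single load or store; roughly:

        /* Approximate Linux-style definitions (the write side is called
         * atomic_set() there). */
        static inline int atomic_read(const atomic_t *v)
        {
                return READ_ONCE(v->counter);
        }

        static inline void atomic_set(atomic_t *v, int i)
        {
                WRITE_ONCE(v->counter, i);
        }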

My understanding is that, at least with GCC on ARM (?), an assignment will be
atomic (though this does not force the compiler to read/write only once). We
make this assumption in quite a few places (such as the PV protocols, for the
ring index) and I am not sure I understand why it cannot be done in other
places...
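
A hypothetical sketch of the kind of code meant here (shared ring indexes read and written with plain assignments, relying on the compiler emitting a single aligned 32-bit load or store for each access; the names below are made up for illustration):

        #include <stdint.h>

        struct ring {
                volatile uint32_t prod;   /* written by the other end */
                volatile uint32_t cons;   /* written by this end      */
        };

        static uint32_t pending(const struct ring *r)
        {
                uint32_t prod = r->prod;  /* assumed to be one 32-bit load */
                uint32_t cons = r->cons;

                return prod - cons;       /* free-running indexes, wrap is fine */
        }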

To a certain degree we can / want to assume compilers don't do
insane things, i.e. why would they commonly emit multiple insns
when a single one suffices. However, we would better write our code
such that we don't _depend_ on this, especially since I've explained
(or was that in a different thread?) that e.g. read-modify-write
operations on memory operands commonly get carried out by the
compiler as 3 insns, even though most of the time a single one
would do.
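
The pattern being referred to is an increment of a memory operand such as the one below, which (per the explanation above) compilers commonly carry out as separate load/add/store instructions even where a single add-to-memory instruction would suffice:

        /* Illustration only: a read-modify-write on a memory operand. */
        void bump(unsigned int *counter)
        {
                *counter += 1;
        }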

That's from an x86 POV. ARM (at least until ARMv8.0) does not have read-modify-write memory operations. If the compiler is not using them on x86, then maybe you should look at fixing that in the compiler?

However, looking at the patch that introduced add_sized, the intention was not about preventing the compiler from using 3 insns:

add_sized(ptr, inc) adds inc to the value at ptr using only the correct
size of loads and stores for the type of *ptr.  The add is /not/ atomic.
This is needed for ticket locks to ensure the increment of the head ticket does not affect the tail ticket.
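
In the spirit of the question at the top of the thread, a C-only equivalent of that description might look like the hypothetical sketch below (reusing a read_atomic()/write_atomic() pair; this is not the in-tree assembly implementation):

        /*
         * Hypothetical sketch only (the in-tree version is in assembly):
         * a non-atomic read-modify-write that merely guarantees the load
         * and the store use the exact size of *ptr, so e.g. incrementing
         * a ticket lock's head never touches the adjacent tail field.
         */
        #define add_sized(ptr, inc) do {                \
            typeof(*(ptr)) x_ = read_atomic(ptr);       \
            write_atomic(ptr, x_ + (inc));              \
        } while ( 0 )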

Anyway, I think Xen already makes a lot of assumptions in the code about the compiler (and the flags used). IMHO, it would be very difficult to get Xen to compile properly with a compiler doing insane things.


And then, as the cited code still visible above shows, atomicity is
not going to be guaranteed for most of the access sizes anyway.
How can you imply that the given constructs are safe to use as
atomic accesses for some variables, but not for others? Such a
property ought to be universal (and the build should fail if it is
being violated). Otherwise a simple type change of some
variable may render the code buggy, possibly without anyone
noticing.

It is a choice on Linux's side to allow using READ_ONCE/WRITE_ONCE with bigger sizes. I am not suggesting doing the same in Xen, but rather avoiding the bigger sizes.

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

