[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] [PATCH V13 5/7] xen/arm: Instruction prefetch abort (X) mem_event handling



On Thu, Mar 26, 2015 at 11:50 AM, Ian Campbell <ian.campbell@xxxxxxxxxx> wrote:
> On Tue, 2015-03-24 at 14:06 +0100, Tamas K Lengyel wrote:
>
>> According to the ARM ARM v8, a split TLB is possible; see section TLB
>> matching (page 1826): "In some cases, the TLB can hold two mappings
>> for the same address". In fact, it seems like some hardware can even
>> detect such cases and cause a Data abort or Prefetch abort with error
>> code TLB Conflict (see section TLB conflict aborts, page 1827). They
>> did indeed deprecate the separate maintenance operations, but that
>> doesn't really affect the problem I'm describing.
>
> (FWIW this is page 1754 in revision DDI0487A.d of the doc)

Sorry, I was going by the PDF page number, not the document's own numbering.

>
> I'm not sure that the "In some cases" wording is talking about split
> TLBs, but rather a potential occurrence with in a single TLB when
> invalidation is not properly performed. If there were 2 TLBs I would
> expect them to operate independently, so there would be no possibility
> of a conflict.

It is certainly not clear what it refers to, but thinking it through
logically, two entries within the same sub-TLB could not be created:
the look-up for that address would have already succeeded, bypassing
the path that inserts a new entry. The only scenario where I can
imagine this happening (and making sense) is with a split TLB, where
one entry is in the ITLB and the other in the DTLB, and where TLB
maintenance operations were skipped either by accident or on purpose.
Of course, who knows what the hardware is actually doing; if it were
possible to have two entries for the same address within the DTLB, for
example, that would sound very strange to me.

>
>> It's unfortunate we can't make the MMU get us the translation as if
>> doing a prefetch, only as a data access.
>
> Part of me thinks that if split TLBs were possible then an instruction
> to do this would have been provided...
>
> I'm going to ask around and see if I can find a definitive answer on
> this.

Maybe, but who knows? Design flaws do happen. Keep me posted if you
find anything useful =)

>
>>  It's also unfortunate that during a Permission fault we don't get the
>> IPA associated with the fault, only the GVA.
>
> FWIW I think this is because an allowable implementation is to only
> cache the final PA+perms result of Stage 1+2 rather than caching the two
> stages separately (which is also allowed), so it's possible that the h/w
> wouldn't have the IPA in its hand.

Hm, right, that would make sense. This wasn't exactly clear on x86
either: what does the TLB hold, separate stage-1/stage-2 entries or a
single combined stage-1+2 entry? On the other hand, x86 always gives
us the guest-physical address no matter what, so my bet is on separate
entries there.

>
>>  I looked at the KVM code and they seem to query HPFAR_EL2 if the
>> fault is during s1ptw, but otherwise they do exactly what we do here.
>
> Yes, reading their comment there led me to that bit of the ARM ARM, and I
> think there may be improvements we could make to the non-xenaccess case
> at least; I was confused about HPFAR_EL2 vs FAR_EL2 earlier.

IMHO it's very minor performance tuning; for data access violations
the TLB probably already holds the same translation, so I wouldn't be
surprised if the two approaches cost the same number of cycles. For
prefetch aborts it's useful because we don't have to rely on the
cached translation being valid. Furthermore, I have yet to see a
single event delivered for a fault that happened during s1ptw. For
some reason my domain's normal operation doesn't seem to trigger any,
even when I create new processes etc.

>
>> So, IMHO doing a TLB flush here would prevent a primed DTLB from
>> becoming a problem. Not sure that helps if the ITLB is primed,
>> however, as we would have no way to replicate the translation that
>> is cached.
>
> Ian.

Thanks,
Tamas

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
