
Re: [Xen-devel] Xen/arm: Virtual ITS command queue handling



On 15/05/15 16:38, Ian Campbell wrote:
> On Fri, 2015-05-15 at 16:05 +0100, Julien Grall wrote:
>> On 15/05/15 15:04, Vijay Kilari wrote:
>>> On Fri, May 15, 2015 at 7:14 PM, Julien Grall <julien.grall@xxxxxxxxxx> 
>>> wrote:
>>>> On 15/05/15 14:24, Ian Campbell wrote:
>>>>> On Fri, 2015-05-15 at 18:44 +0530, Vijay Kilari wrote:
>>>>>> On Fri, May 15, 2015 at 6:23 PM, Ian Campbell <ian.campbell@xxxxxxxxxx> 
>>>>>> wrote:
>>>>>>> On Fri, 2015-05-15 at 18:17 +0530, Vijay Kilari wrote:
>>>>>>>> On Fri, May 15, 2015 at 5:33 PM, Julien Grall 
>>>>>>>> <julien.grall@xxxxxxxxxx> wrote:
>>>>>>>>> On 15/05/15 12:30, Ian Campbell wrote:
>>>>>>>>>>> Handling of a single vITS and multiple pITSs can be made simple.
>>>>>>>>>>>
>>>>>>>>>>> All ITS commands except SYNC & INVALL have a device id, which
>>>>>>>>>>> tells us to which pITS they should be sent.
>>>>>>>>>>>
>>>>>>>>>>> The guest's SYNC & INVALL can be dropped by Xen, which can then
>>>>>>>>>>> append SYNC & INVALL wherever they are required
>>>>>>>>>>> (e.g. the Linux driver adds SYNC for the commands that need it).
>>>>>>>>>>> With this assumption, every ITS command maps to a single pITS
>>>>>>>>>>> and no synchronization across pITSs is needed.
>>>>>>>>>>
>>>>>>>>>> You've ignored the second bullet and its three sub-bullets, I think.
>>>>>>>>>
>>>>>>>>    Why can't we group the commands in a batch based on which pITS
>>>>>>>> they have to be sent to?
>>>>>>>
>>>>>>> Are you suggesting that each batch we send should be synchronous? (i.e.
>>>>>>> end with SYNC+INT) That doesn't seem at all desirable.
>>>>>>
>>>>>> Not only at the end of the batch; a SYNC can be appended after every
>>>>>> command within the batch.
>>>>>
>>>>> Could be, but that's something to avoid, I think?
>>>>
>>>> That would slow down the ITS processing (SYNC waits until the
>>>> previous command has been executed).
>>>>
>>>> Also, what about INVALL? Sending it every time would be horrible for
>>>> performance because it flushes the ITS caches.
>>>
>>> INVALL is not required every time. It can be sent only where the note
>>> in the spec mentions it, e.g. after MOVI.

BTW, when you quote the spec, can you give the section number/version of
the spec? So far, I'm not able to find anything about the relation
between MOVI and INVALL in my spec.

INV* commands are sent in order to ask the ITS to reload the
configuration tables (see section 4.8.4 of PRD03-GENC-010745 24.0):

"The effects of this caching are not visible to software except when
reconfiguring an LPI, in which case an explicit invalidate command must
be issued (e.g. an ITS INV command or a write to GICR_INVLPIR)
Note: this means hardware must manage its caches automatically when
moving interrupts"

So, it looks to me like the INV* commands are only necessary when a
configuration table is changed.

FWIW, Linux uses INVALL when a collection is mapped and INV when the
LPI configuration is changed. I don't see any INV* command after MOVI,
so this confirms what the spec says.
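
For concreteness, here is a minimal sketch of what these two commands
look like, as I read the spec (each ITS command is four 64-bit
doublewords; the struct and helper names below are mine, not Xen's or
Linux's):

#include <stdint.h>

/* Each ITS command is 32 bytes: four 64-bit doublewords. */
typedef struct {
    uint64_t dw[4];
} its_cmd_t;

#define ITS_CMD_INV     0x0cULL
#define ITS_CMD_INVALL  0x0dULL

/* INV: ask the ITS to re-read the configuration of one
 * (DeviceID, EventID) pair. */
static void its_encode_inv(its_cmd_t *cmd, uint32_t devid, uint32_t eventid)
{
    cmd->dw[0] = ITS_CMD_INV | ((uint64_t)devid << 32);
    cmd->dw[1] = eventid;
    cmd->dw[2] = 0;
    cmd->dw[3] = 0;
}

/* INVALL: ask the ITS to re-read the configuration of every LPI routed
 * to the given collection. */
static void its_encode_invall(its_cmd_t *cmd, uint16_t collection)
{
    cmd->dw[0] = ITS_CMD_INVALL;
    cmd->dw[1] = 0;
    cmd->dw[2] = collection;
    cmd->dw[3] = 0;
}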

>>> Note: this command is expected to be used by software when it changed
>>> the re-configuration
>>> of an LPI in memory to ensure any cached copies of the old
>>> configuration are discarded.
>>
>> INVALL is used when a large number of LPIs have been reconfigured.
>> Sending one per MOVI is not efficient at all and will slow down all the
>> interrupts for a few milliseconds. We need to use it with caution.
>>
>> Usually a guest will send one INVALL for multiple MOVI commands.
> 
> We should be prepared for a guest which does nothing but send INVALL
> commands (i.e. trying to DoS the host).
> 
> I mentioned earlier that we may need to track which pITSs a SYNC
> goes to (based on which SYNCs have happened already and what commands the
> guest has sent since).
> 
> Do we also need to track which LPIs a guest has fiddled with in order to
> decide (perhaps via a threshold) whether to use INVALL vs a small number
> of targeted INVs?
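
(To make the threshold idea concrete, a hypothetical sketch; every name
and the threshold value below are made up for illustration:)

#include <stdint.h>
#include <stddef.h>

#define INVALL_THRESHOLD 32           /* arbitrary cut-over point */

struct dirty_lpi { uint32_t devid, eventid; };

struct vits_state {
    uint16_t collection;
    size_t nr_dirty;
    struct dirty_lpi dirty[1024];     /* LPIs touched since last flush */
};

/* Assumed to exist elsewhere: enqueue a real command on the pITS queue. */
void pits_send_inv(uint32_t devid, uint32_t eventid);
void pits_send_invall(uint16_t collection);

static void vits_flush_lpi_cfg(struct vits_state *v)
{
    if ( v->nr_dirty > INVALL_THRESHOLD )
        /* Big hammer: one INVALL flushes the whole collection... */
        pits_send_invall(v->collection);
    else
        /* ...otherwise a few targeted INVs are cheaper overall. */
        for ( size_t i = 0; i < v->nr_dirty; i++ )
            pits_send_inv(v->dirty[i].devid, v->dirty[i].eventid);

    v->nr_dirty = 0;
}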

I did some reading about the INV* commands (INV and INVALL). The
interesting section in GICv3 is 4.8.4 of PRD03-GENC-010745 24.0.

They are only used to ensure the ITS re-reads the LPI configuration
table. I'm not speaking about the pending table, as the spec (4.8.5) says
it is maintained solely by the redistributor; it's up to the
implementation to provide a mechanism to sync the memory (useful for
power management).

The LPI configuration table is used to enable/disable an LPI and to set
its priority. Only the enable/disable bit needs to be replicated to the
hardware.
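
As a reminder, a minimal sketch of one entry of that table as I read the
spec (one byte per LPI; the macro and helper names are mine):

#include <stdint.h>

/* One byte per LPI: bit 0 = enable, bit 1 = reserved,
 * bits [7:2] = priority. */
#define LPI_CFG_ENABLE    (1U << 0)
#define LPI_CFG_PRIO_MASK 0xfcU

/* Only the enable bit from the guest needs to reach the hardware table;
 * the priority the guest sees can stay virtual. */
static inline uint8_t lpi_cfg_to_hw(uint8_t guest_cfg, uint8_t hw_prio)
{
    return (guest_cfg & LPI_CFG_ENABLE) | (hw_prio & LPI_CFG_PRIO_MASK);
}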

The pITS LPI configuration table is managed by Xen. Each guest will
provide its own LPI configuration table to the vITS.

The emulation of the INV* commands will depend on how we decide to
emulate the LPI configuration table.

Solution 1: Trap every access to the guest LPI configuration table

For every write access, when the vLPI is valid (i.e. associated with a
device/interrupt), Xen would toggle the enable bit in the hardware LPI
configuration table and send an INV. This requires being able to
translate the vLPI to a (device, ID) pair.

The INVALL/INV commands could then be ignored, directly incrementing
CREADR, because they only ensure that the command has been executed, not
fully completed. A SYNC would be required from the guest in order to
ensure completion.

Therefore we would need to take more care with SYNC, maybe by injecting
a SYNC when it is necessary.

Note that we would need Xen to send commands on behalf of the guest (i.e.
not as part of the guest's command queue).
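
A rough sketch of what that trap handler could look like (all of the
helper and type names below are made up; in particular vlpi_to_devid()
is the assumed vLPI -> (device, ID) translation mentioned above):

#include <stdint.h>
#include <stdbool.h>

struct vits_state;                     /* per-guest vITS state */

/* Assumed helpers: translate a vLPI to the physical (device, ID) pair,
 * update the hardware LPI configuration table, and let Xen emit an INV
 * outside of the guest's command queue. */
bool vlpi_to_devid(struct vits_state *v, uint32_t vlpi,
                   uint32_t *devid, uint32_t *eventid);
void hw_lpi_cfg_set_enable(uint32_t devid, uint32_t eventid, bool enable);
void xen_its_send_inv(uint32_t devid, uint32_t eventid);

#define LPI_CFG_ENABLE (1U << 0)

/* Called on a trapped write of one byte of the guest's LPI
 * configuration table. */
static void vits_cfg_table_write(struct vits_state *v,
                                 uint32_t vlpi, uint8_t new_cfg)
{
    uint32_t devid, eventid;

    /* Ignore writes for vLPIs not yet associated with a device/interrupt. */
    if ( !vlpi_to_devid(v, vlpi, &devid, &eventid) )
        return;

    /* Only the enable bit is replicated to the hardware table... */
    hw_lpi_cfg_set_enable(devid, eventid, new_cfg & LPI_CFG_ENABLE);

    /* ...followed by an INV so the ITS re-reads the entry. */
    xen_its_send_inv(devid, eventid);
}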

Solution 2: Properly emulate the INV and INVALL commands

While emulating INV is easy (read the guest configuration table entry and
replicate it in the pITS LPI configuration table), INVALL requires
reading most of the LPI configuration table.

AFAICT there is no limit on the size of that table (it is driven by
GITS_TYPER.IDbits).
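
To show why that walk is the pain point, a rough sketch of this solution
(again, every name is made up; it reuses the assumed helpers from the
Solution 1 sketch):

#include <stdint.h>
#include <stdbool.h>

struct vits_state;

uint8_t  guest_lpi_cfg_read(struct vits_state *v, uint32_t vlpi);
bool     vlpi_to_devid(struct vits_state *v, uint32_t vlpi,
                       uint32_t *devid, uint32_t *eventid);
void     hw_lpi_cfg_set_enable(uint32_t devid, uint32_t eventid, bool on);
void     xen_its_send_inv(uint32_t devid, uint32_t eventid);
uint32_t vits_nr_lpis(struct vits_state *v);   /* up to 1 << IDbits */

#define LPI_CFG_ENABLE (1U << 0)

static void vits_emulate_inv(struct vits_state *v, uint32_t vlpi)
{
    uint32_t devid, eventid;

    if ( !vlpi_to_devid(v, vlpi, &devid, &eventid) )
        return;

    /* Copy the guest's enable bit into the pITS table, then forward
     * an INV so the hardware re-reads it. */
    hw_lpi_cfg_set_enable(devid, eventid,
                          guest_lpi_cfg_read(v, vlpi) & LPI_CFG_ENABLE);
    xen_its_send_inv(devid, eventid);
}

static void vits_emulate_invall(struct vits_state *v)
{
    /* The expensive part: O(nr_lpis), and nothing bounds nr_lpis other
     * than GITS_TYPER.IDbits. */
    for ( uint32_t vlpi = 0; vlpi < vits_nr_lpis(v); vlpi++ )
        vits_emulate_inv(v, vlpi);
}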

Regards,

-- 
Julien Grall
