
Re: [Xen-devel] [PATCH for-4.5 7/8] xen/irq: Handle multiple action per IRQ

On 02/24/2014 02:32 PM, Jan Beulich wrote:
>>>> On 24.02.14 at 15:08, Julien Grall <julien.grall@xxxxxxxxxx> wrote:
>> (Adding Jan for x86 part).
>> On 02/20/2014 09:29 PM, Julien Grall wrote:
>>> Hi Ian,
>>> On 02/19/2014 11:55 AM, Ian Campbell wrote:
>>>> On Fri, 2014-01-24 at 16:43 +0000, Julien Grall wrote:
>>>>> On ARM, it may be necessary (e.g. for the ARM SMMU) to set up multiple
>>>>> handlers for the same interrupt.
>>>> Mention here that you are therefore creating a linked list of actions
>>>> for each interrupt.
>>>> If you use xen/list.h for this then you get a load of helpers and
>>>> iterators which would save you open coding them.
>>> After thinking about it, using xen/list.h won't really remove much
>>> open-coded logic, beyond dropping "action_ptr" in release_dt_irq.
>>> release_dt_irq shouldn't often be called on an IRQ with multiple
>>> actions, so keeping both prev and next pointers would be a waste of space.
>> Jan, as it's common code, do you have any thoughts?
> In fact I'm not convinced this action chaining is correct in the first
> place, as mentioned by Ian too (considering the potential sharing
> between hypervisor and guest). Furthermore, if this is really just
> about IOMMU handlers, why can't the SMMU code register a single
> action and disambiguate by the dev_id argument passed to the
> handler?

Patch #3 of this series prevents the IRQ from being shared with a domain.

I should have removed "eg ARM SMMU" from the description. The ARM SMMU is
not the only case: we don't know in advance whether an IRQ will be shared
(short of browsing the DT and checking whether the IRQ is used by another
device...). We may see the same thing with other devices.

This logic is painful to handle internally in the ARM SMMU driver when it
can be handled generically. There is no need to duplicate the code each
time a new driver hits the same problem.

Julien Grall

Xen-devel mailing list