
Re: [Xen-devel] [PATCH v6 1/6] xen/arm: IRQ: Store IRQ type in arch_irq_desc



On Mon, 2014-05-12 at 18:57 +0100, Julien Grall wrote:
> @@ -383,6 +403,97 @@ void pirq_set_affinity(struct domain *d, int pirq, const 
> cpumask_t *mask)
>      BUG();
>  }
>  
> +static bool_t irq_check_type(unsigned int curr, unsigned new)

irq_validate_new_type? irq_type_change_allowed?
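Either of those would make the boolean answer self-explanatory. For
illustration only (untested sketch, keeping the existing semantics):

    static bool_t irq_type_change_allowed(unsigned int curr, unsigned int new)
    {
        /* Allowed only if the type is still unset, or is unchanged. */
        return curr == DT_IRQ_TYPE_INVALID || curr == new;
    }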

> +{
> +    return (curr == DT_IRQ_TYPE_INVALID || curr == new );
> +}
> +
> +int irq_set_type(unsigned int spi, unsigned int type)
> +{
> +    unsigned long flags;
> +    struct irq_desc *desc = irq_to_desc(spi);
> +    int ret = -EBUSY;
> +
> +    /* This function should not be used for other than SPIs */

Perhaps name the function irq_set_spi_type or something then?
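Something like the below is the shape I would expect, with the SPI-only
restriction actually enforced (untested sketch; NR_LOCAL_IRQS and the
irq_type_change_allowed() name suggested above are assumptions on my part):

    int irq_set_spi_type(unsigned int spi, unsigned int type)
    {
        unsigned long flags;
        struct irq_desc *desc = irq_to_desc(spi);
        int ret = -EBUSY;

        /* SPIs start after the per-CPU (SGI/PPI) interrupts. */
        if ( spi < NR_LOCAL_IRQS )
            return -EINVAL;

        spin_lock_irqsave(&desc->lock, flags);

        /* Only allow setting the type once (or re-setting the same type). */
        if ( !irq_type_change_allowed(desc->arch.type, type) )
            goto out;

        desc->arch.type = type;
        ret = 0;

    out:
        spin_unlock_irqrestore(&desc->lock, flags);
        return ret;
    }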

> +    for_each_cpu( cpu, &cpu_online_map )
> +    {
> +        desc = &per_cpu(local_irq_desc, cpu)[irq];
> +        spin_lock_irqsave(&desc->lock, flags);
> +        desc->arch.type = type;

Don't you need to write ICFGR on each CPU?
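AIUI the GICD_ICFGR word covering the PPIs is banked per CPU, so updating
desc->arch.type in this loop only changes Xen's bookkeeping and leaves the
other CPUs' hardware configuration alone. I would expect the register write
to be performed on every online CPU, along the lines of the sketch below
(set_ppi_type_on_cpu() and gic_set_ppi_type() are hypothetical, just to show
the shape):

    struct ppi_type_info {
        unsigned int irq;
        unsigned int type;
    };

    static void set_ppi_type_on_cpu(void *info)
    {
        struct ppi_type_info *t = info;

        /* Hypothetical helper writing this CPU's banked GICD_ICFGR entry. */
        gic_set_ppi_type(t->irq, t->type);
    }

    /* ... in the PPI path, after updating the per-CPU descriptors ... */
    struct ppi_type_info info = { .irq = irq, .type = type };

    on_selected_cpus(&cpu_online_map, set_ppi_type_on_cpu, &info, 1);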

> diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
> index 5542958..e677da9 100644
> --- a/xen/include/xen/device_tree.h
> +++ b/xen/include/xen/device_tree.h
> @@ -131,6 +131,7 @@ struct dt_phandle_args {
>   * DT_IRQ_TYPE_LEVEL_LOW       - low level triggered
>   * DT_IRQ_TYPE_LEVEL_MASK      - Mask to filter out the level bits
>   * DT_IRQ_TYPE_SENSE_MASK      - Mask for all the above bits
> + * DT_IRQ_TYPE_INVALID         - Use to initialize the type
>   */
>  #define DT_IRQ_TYPE_NONE           0x00000000
>  #define DT_IRQ_TYPE_EDGE_RISING    0x00000001
> @@ -143,6 +144,8 @@ struct dt_phandle_args {
>      (DT_IRQ_TYPE_LEVEL_LOW | DT_IRQ_TYPE_LEVEL_HIGH)
>  #define DT_IRQ_TYPE_SENSE_MASK     0x0000000f
>  
> +#define DT_IRQ_TYPE_INVALID        0x00000010

0xffffffff perhaps?
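i.e. something like:

    /* Deliberately out of band: cannot clash with the DT_IRQ_TYPE_* sense bits. */
    #define DT_IRQ_TYPE_INVALID        0xffffffff

0x00000010 reads like just the next flag bit waiting to be allocated, whereas
all-ones makes it obvious that this is not a real trigger type.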


