
Re: [Xen-devel] [PATCH] IRQ: Group IRQ_MOVE_CLEANUP_VECTOR with other hypervisor IPIs



On 07/09/2011 17:56, Keir Fraser wrote:
> On 07/09/2011 17:03, "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
>
>>>>> On 07.09.11 at 17:03, Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
>> Are you sure this is correct? I'm suspicious that this may intentionally
>> have been the lowest priority vector...
> I can't see why?

It is probably the lowest priority of the Xen IPIs.  Having said that,
it really doesn't want to be pre-empted by something expecting to use an
irq_desc or irq_cfg.

However, vector 0xf0 seems to be used for IRQ0 before interrupts are set
up, which is probably unintended (although it doesn't seem to cause any
interaction problems).

>
>>> Also, rename to MOVE_CLEANUP_VECTOR to be in line with the other IPI
>>> names.
>> Why would the removal of part of the descriptive name be in line with
>> the other names? We're dealing with the cleanup after an IRQ move
>> here, so let the name state this. The IRQ_ prefix here has nothing to
>> do with this being the vector for a specific IRQ.
> Agreed.
>
>  -- Keir

Ok - I will resubmit without changing the IRQ_ prefix.  I had not
considered that meaning of the name.

~Andrew

>
>> Jan
>>
>>> This requires bumping LAST_HIPRIORITY_VECTOR, but does mean that the
>>> range FIRST-LAST_HIPRIORITY_VECTORs are free once again.
>>>
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>>>
>>> diff -r 0268e7380953 -r c7884dbb6f7d xen/arch/x86/apic.c
>>> --- a/xen/arch/x86/apic.c Mon Sep 05 15:10:28 2011 +0100
>>> +++ b/xen/arch/x86/apic.c Wed Sep 07 16:00:55 2011 +0100
>>> @@ -90,7 +90,7 @@ bool_t __read_mostly directed_eoi_enable
>>>   * through the ICC by us (IPIs)
>>>   */
>>>  __asm__(".section .text");
>>> -BUILD_SMP_INTERRUPT(irq_move_cleanup_interrupt,IRQ_MOVE_CLEANUP_VECTOR)
>>> +BUILD_SMP_INTERRUPT(irq_move_cleanup_interrupt,MOVE_CLEANUP_VECTOR)
>>>  BUILD_SMP_INTERRUPT(event_check_interrupt,EVENT_CHECK_VECTOR)
>>>  BUILD_SMP_INTERRUPT(invalidate_interrupt,INVALIDATE_TLB_VECTOR)
>>>  BUILD_SMP_INTERRUPT(call_function_interrupt,CALL_FUNCTION_VECTOR)
>>> diff -r 0268e7380953 -r c7884dbb6f7d xen/arch/x86/hvm/vmx/vmx.c
>>> --- a/xen/arch/x86/hvm/vmx/vmx.c Mon Sep 05 15:10:28 2011 +0100
>>> +++ b/xen/arch/x86/hvm/vmx/vmx.c Wed Sep 07 16:00:55 2011 +0100
>>> @@ -1986,7 +1986,7 @@ static void vmx_do_extint(struct cpu_use
>>>  
>>>      switch ( vector )
>>>      {
>>> -    case IRQ_MOVE_CLEANUP_VECTOR:
>>> +    case MOVE_CLEANUP_VECTOR:
>>>          smp_irq_move_cleanup_interrupt(regs);
>>>          break;
>>>      case LOCAL_TIMER_VECTOR:
>>> diff -r 0268e7380953 -r c7884dbb6f7d xen/arch/x86/io_apic.c
>>> --- a/xen/arch/x86/io_apic.c Mon Sep 05 15:10:28 2011 +0100
>>> +++ b/xen/arch/x86/io_apic.c Wed Sep 07 16:00:55 2011 +0100
>>> @@ -476,7 +476,7 @@ fastcall void smp_irq_move_cleanup_inter
>>>           * to myself.
>>>           */
>>>          if (irr  & (1 << (vector % 32))) {
>>> -            genapic->send_IPI_self(IRQ_MOVE_CLEANUP_VECTOR);
>>> +            genapic->send_IPI_self(MOVE_CLEANUP_VECTOR);
>>>              TRACE_3D(TRC_HW_IRQ_MOVE_CLEANUP_DELAY,
>>>                       irq, vector, smp_processor_id());
>>>              goto unlock;
>>> @@ -513,7 +513,7 @@ static void send_cleanup_vector(struct i
>>>  
>>>      cpus_and(cleanup_mask, cfg->old_cpu_mask, cpu_online_map);
>>>      cfg->move_cleanup_count = cpus_weight(cleanup_mask);
>>> -    genapic->send_IPI_mask(&cleanup_mask, IRQ_MOVE_CLEANUP_VECTOR);
>>> +    genapic->send_IPI_mask(&cleanup_mask, MOVE_CLEANUP_VECTOR);
>>>  
>>>      cfg->move_in_progress = 0;
>>>  }
>>> diff -r 0268e7380953 -r c7884dbb6f7d xen/arch/x86/irq.c
>>> --- a/xen/arch/x86/irq.c Mon Sep 05 15:10:28 2011 +0100
>>> +++ b/xen/arch/x86/irq.c Wed Sep 07 16:00:55 2011 +0100
>>> @@ -338,7 +338,7 @@ int __init init_irq_data(void)
>>>      set_bit(HYPERCALL_VECTOR, used_vectors);
>>>      
>>>      /* IRQ_MOVE_CLEANUP_VECTOR used for clean up vectors */
>>> -    set_bit(IRQ_MOVE_CLEANUP_VECTOR, used_vectors);
>>> +    set_bit(MOVE_CLEANUP_VECTOR, used_vectors);
>>>  
>>>      return 0;
>>>  }
>>> diff -r 0268e7380953 -r c7884dbb6f7d xen/arch/x86/smpboot.c
>>> --- a/xen/arch/x86/smpboot.c Mon Sep 05 15:10:28 2011 +0100
>>> +++ b/xen/arch/x86/smpboot.c Wed Sep 07 16:00:55 2011 +0100
>>> @@ -1027,7 +1027,7 @@ void __init smp_intr_init(void)
>>>      }
>>>  
>>>      /* IPI for cleanuping vectors after irq move */
>>> -    set_intr_gate(IRQ_MOVE_CLEANUP_VECTOR, irq_move_cleanup_interrupt);
>>> +    set_intr_gate(MOVE_CLEANUP_VECTOR, irq_move_cleanup_interrupt);
>>>  
>>>      /* IPI for event checking. */
>>>      set_intr_gate(EVENT_CHECK_VECTOR, event_check_interrupt);
>>> diff -r 0268e7380953 -r c7884dbb6f7d xen/include/asm-x86/mach-default/irq_vectors.h
>>> --- a/xen/include/asm-x86/mach-default/irq_vectors.h Mon Sep 05 15:10:28 2011 +0100
>>> +++ b/xen/include/asm-x86/mach-default/irq_vectors.h Wed Sep 07 16:00:55 2011 +0100
>>> @@ -11,12 +11,14 @@
>>>  #define LOCAL_TIMER_VECTOR 0xf9
>>>  #define PMU_APIC_VECTOR  0xf8
>>>  #define CMCI_APIC_VECTOR 0xf7
>>> +#define MOVE_CLEANUP_VECTOR 0xf6
>>> +
>>>  /*
>>>   * High-priority dynamically-allocated vectors. For interrupts that
>>>   * must be higher priority than any guest-bound interrupt.
>>>   */
>>>  #define FIRST_HIPRIORITY_VECTOR 0xf0
>>> -#define LAST_HIPRIORITY_VECTOR  0xf6
>>> +#define LAST_HIPRIORITY_VECTOR 0xf5
>>>  
>>>  /* Legacy PIC uses vectors 0xe0-0xef. */
>>>  #define FIRST_LEGACY_VECTOR 0xe0
>>> @@ -30,8 +32,6 @@
>>>  #define LAST_DYNAMIC_VECTOR 0xdf
>>>  #define NR_DYNAMIC_VECTORS (LAST_DYNAMIC_VECTOR - FIRST_DYNAMIC_VECTOR + 1)
>>>  
>>> -#define IRQ_MOVE_CLEANUP_VECTOR FIRST_DYNAMIC_VECTOR
>>> -
>>>  #define NR_VECTORS 256
>>>  
>>>  #endif /* _ASM_IRQ_VECTORS_H */
>>>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel