
Re: [Xen-devel] [PATCH v2 2/2] x86/HVM: fix ID handling of x2APIC emulation



>>> On 11.09.14 at 18:28, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 10/09/14 14:44, Jan Beulich wrote:
>> - properly change ID when switching into x2APIC mode (instead of
>>   mimicking necessary behavior in hvm_x2apic_msr_read())
>> - correctly (meaningfully) set LDR (so far it ended up being 1 on all
>>   vCPU-s)
>> - even if we don't support more than 128 vCPU-s in a HVM guest for now,
>>   we should properly handle IDs as 32-bit values (i.e. not ignore the
>>   top 24 bits)
>> - with that, properly do cluster ID and bit mask check in
>>   vlapic_match_logical_addr()
>> - slightly adjust other parameter types of vlapic_match_dest() and
>>   vlapic_lowest_prio() (and related local variable ones)
> 
> I think the addition of arch_domain_unpause() at least needs mentioning
> in the commit message, although...

Hmm, not sure - I tried to describe _what_ gets fixed/changed
here rather than _how_.

>> @@ -207,8 +206,8 @@ static int vlapic_match_logical_addr(str
>>  }
>>  
>>  bool_t vlapic_match_dest(
>> -    struct vlapic *target, struct vlapic *source,
>> -    int short_hand, uint8_t dest, uint8_t dest_mode)
>> +    struct vlapic *target, const struct vlapic *source,
>> +    int short_hand, uint32_t dest, bool_t dest_mode)
> 
> target should be const as well, and looks as if it can be by pushing
> const-ness down into vlapic_match_logical_addr() and vlapic_get_reg().

I specifically didn't want to mess with unrelated functions here. But
yes, this could be further follow-up cleanup.
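
The follow-up would presumably just be a matter of propagating const
through the helpers' prototypes; a rough sketch of the signatures
involved (parameter names are illustrative, the const additions being
the point):

    static int vlapic_match_logical_addr(const struct vlapic *vlapic,
                                         uint32_t mda);
    static uint32_t vlapic_get_reg(const struct vlapic *vlapic,
                                   uint32_t reg);

    bool_t vlapic_match_dest(
        const struct vlapic *target, const struct vlapic *source,
        int short_hand, uint32_t dest, bool_t dest_mode);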

>> @@ -891,6 +885,15 @@ const struct hvm_mmio_handler vlapic_mmi
>>      .write_handler = vlapic_write
>>  };
>>  
>> +static void set_x2apic_id(struct vlapic *vlapic)
>> +{
>> +    u32 id = vlapic_vcpu(vlapic)->vcpu_id;
>> +    u32 ldr = ((id & ~0xf) << 12) | (1 << (id & 0xf));
>> +
>> +    vlapic_set_reg(vlapic, APIC_ID, id * 2);
> 
> I know this mimics the existing behaviour, but I should point out that
> it is an Intel-ism (which assumes no hyperthreads) which is wrong on AMD
> systems, and confuses algorithms which follow the BIOS/Systems
> guides.  I do plan to fix it as part of my cpuid/feature levelling fixes.

Yeah, this certainly should be dealt with better, but again not
something to change in this patch (I'm relatively certain there
are assumptions based on this elsewhere in the code base).
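
For reference, the LDR value set_x2apic_id() computes follows the
x2APIC logical destination format: bits 31:16 hold the cluster ID
(x2APIC ID >> 4) and bits 15:0 a one-bit-per-CPU mask within the
cluster. A minimal standalone sketch (plain C, not Xen code) of the
same derivation, including the "vcpu_id * 2" Intel-ism noted above:

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t x2apic_ldr(uint32_t id)
    {
        /* ((id & ~0xf) << 12) is the same as ((id >> 4) << 16). */
        return ((id & ~0xfu) << 12) | (1u << (id & 0xf));
    }

    int main(void)
    {
        uint32_t vcpu_id = 0x13;
        uint32_t id = vcpu_id * 2;  /* Intel-ism: 2 APIC IDs per core */

        /* id = 0x26 -> cluster 2, bit 6 -> LDR 0x00020040 */
        printf("APIC_ID %#x -> LDR %#010x\n", id, x2apic_ldr(id));
        return 0;
    }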

>> @@ -1254,6 +1255,29 @@ HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_s
>>  HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, lapic_load_regs,
>>                            1, HVMSR_PER_VCPU);
>>  
>> +void vlapic_domain_unpause(const struct domain *d)
>> +{
>> +    /*
>> +     * Following lapic_load_hidden()/lapic_load_regs() we may need to
>> +     * correct ID and LDR when they come from an old, broken hypervisor.
>> +     */
> 
> This seems like awful overhead for the domain_{,un}pause() path.

It's the best I could think of (and domain un-pausing shouldn't be that
frequent an operation, I would hope), since ...

>  Why can't it be fixed up once in lapic_load_{regs,hidden}(),

... both of these would have to have been carried out and ...

> or possibly deferred to the end of hvm_load()?

... there's apparently no requirement for all state to be loaded in
one go.
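
For context, the fixup under discussion takes roughly this shape (a
sketch reconstructed from the thread, not the exact hunk):

    void vlapic_domain_unpause(const struct domain *d)
    {
        struct vcpu *v;

        /*
         * Following lapic_load_hidden()/lapic_load_regs() we may need
         * to correct ID and LDR when they come from an old, broken
         * hypervisor.
         */
        for_each_vcpu ( d, v )
        {
            struct vlapic *vlapic = vcpu_vlapic(v);

            if ( vlapic_x2apic_mode(vlapic) )
                set_x2apic_id(vlapic);
        }
    }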

>> --- a/xen/include/asm-x86/hvm/vlapic.h
>> +++ b/xen/include/asm-x86/hvm/vlapic.h
>> @@ -30,8 +30,9 @@
>>  #define vlapic_vcpu(x)   (container_of((x), struct vcpu, arch.hvm_vcpu.vlapic))
>>  #define vlapic_domain(x) (vlapic_vcpu(x)->domain)
>>  
>> -#define VLAPIC_ID(vlapic)   \
>> -    (GET_xAPIC_ID(vlapic_get_reg((vlapic), APIC_ID)))
>> +#define _VLAPIC_ID(vlapic, id) (vlapic_x2apic_mode(vlapic) \
>> +                                ? (id) : GET_xAPIC_ID(id))
>> +#define VLAPIC_ID(vlapic) _VLAPIC_ID(vlapic, vlapic_get_reg(vlapic, APIC_ID))
> 
> Some comment regarding the difference between these two?

Not sure what would need explaining here - one operates on a passed-in
ID while the other uses the active one.
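
To make the distinction concrete (illustrative usage, not from the
patch; "incoming_id" is a hypothetical stand-in):

    /* Active ID of this vLAPIC, decoded from the register page: */
    uint32_t cur = VLAPIC_ID(vlapic);

    /* Same decode applied to an ID obtained elsewhere, e.g. one taken
     * from an incoming save record before it gets installed: */
    uint32_t other = _VLAPIC_ID(vlapic, incoming_id);

In xAPIC mode both extract the ID from bits 31:24 via GET_xAPIC_ID();
in x2APIC mode the full 32-bit value is used unchanged.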

Jan

