Re: [Xen-devel] [PATCH] XSA-60 security hole: cr0.cd handling
Jan Beulich wrote:
>>>> On 14.10.13 at 11:28, "Liu, Jinsong" <jinsong.liu@xxxxxxxxx> wrote:
>> Jan Beulich wrote:
>>>>>> On 11.10.13 at 20:25, "Liu, Jinsong" <jinsong.liu@xxxxxxxxx>
>>>>>> wrote:
>>>> +void hvm_handle_cd_traditional(struct vcpu *v, unsigned long value)
>>>> +{
>>>> +    if ( (value & X86_CR0_CD) && !(value & X86_CR0_NW) )
>>>> +    {
>>>> +        /* Entering no fill cache mode. */
>>>> +        spin_lock(&v->domain->arch.hvm_domain.uc_lock);
>>>> +        v->arch.hvm_vcpu.cache_mode = NO_FILL_CACHE_MODE;
>>>> +
>>>> +        if ( !v->domain->arch.hvm_domain.is_in_uc_mode )
>>>> +        {
>>>> +            /* Flush physical caches. */
>>>> +            on_each_cpu(local_flush_cache, NULL, 1);
>>>> +            hvm_set_uc_mode(v, 1);
>>>
>>> So you continue to use the problematic function, and you also
>>> don't remove the problematic code from its VMX backend - what
>>> am I missing here? If there was no room for getting here with
>>> !paging_mode_hap(), the problematic code should be removed
>>> from vmx_set_uc_mode(). And if we still can get here with
>>> !paging_mode_hap(), then what's the purpose of the patch?
>>>
>>
>> The new solution w/ PAT depends on whether the Intel processor
>> supports the 'load IA32_PAT' VM-entry control bit (which very old
>> Intel processors indeed did not support).
>>
>> Though I'm pretty sure Intel processors w/ EPT capability also
>> support the 'load IA32_PAT' VM-entry control bit, the Intel SDM
>> doesn't explicitly guarantee it. Hence we still keep the old
>> function, though I'm sure it will never be called, just for safety
>> and logical completeness.
>
> If any such processor exists, the security issue would still be there
> with the patch as is. Hence either use of EPT needs to be
> suppressed in that case, or some other solution allowing the broken
> code to be deleted (or replaced) needs to be found.
>
Well, I had to go back to the Intel library :) for a long search
through Intel VT history: neither the 'load IA32_PAT' nor the EPT
feature exists in Intel SDM Order Number 253669-026US (Feb 2008), but
they co-exist as early as Intel SDM Order Number 253669-027US (July
2008).

Per my understanding, 'load IA32_PAT' co-exists/co-works with EPT:
1. shadow doesn't need the 'load IA32_PAT' feature;
2. EPT, from its earliest version, has the 'iPAT' bit to control how
   guest PAT and host EPT memory types combine; without 'load
   IA32_PAT', how could the guest PAT memory type take effect? (See
   the sketch just below.)
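To illustrate what I mean by combining, here is a minimal sketch of
the iPAT semantics as I read the SDM (EPT leaf entries: bits 5:3 are
the EPT memory type, bit 6 is iPAT). The helper and constant names are
made up for illustration, not the real Xen code:

    /* Illustrative only; names are hypothetical, not Xen's. */
    #define EPT_EMT_SHIFT  3           /* EPT leaf bits 5:3: memory type */
    #define EPT_IPAT       (1ul << 6)  /* EPT leaf bit 6: ignore guest PAT */

    static unsigned int effective_mem_type(uint64_t epte,
                                           unsigned int guest_pat_type)
    {
        unsigned int emt = (epte >> EPT_EMT_SHIFT) & 0x7;

        /* iPAT set: the EPT memory type alone decides. */
        if ( epte & EPT_IPAT )
            return emt;

        /*
         * iPAT clear: guest PAT combines with the EPT memory type
         * (UC on either side forces UC), which is why honouring guest
         * PAT at all requires the 'load IA32_PAT' VM-entry control.
         */
        if ( emt == PAT_TYPE_UNCACHABLE ||
             guest_pat_type == PAT_TYPE_UNCACHABLE )
            return PAT_TYPE_UNCACHABLE;
        return emt; /* simplified; full combining table omitted */
    }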
So we have 2 choices for 'load IA32_PAT':
1. aggressive choice: remove the vmx_set_uc_mode logic entirely:
       if ( shadow )
           drop shadows so that new ones would be created on demand;
       else /* Intel EPT */
           use the IA32_PAT solution
2. conservative choice: as this patch does, use if ( !cpu_has_vmx_pat )
   to keep the logic intact; it doesn't make things worse (a sketch of
   the capability check follows below).
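For completeness, here is roughly what the conservative guard keys off,
sketched from memory: 'load IA32_PAT' is bit 14 of the VM-entry
controls per the SDM, and the high dword of IA32_VMX_ENTRY_CTLS gives
the allowed-1 settings. The helper name is hypothetical:

    #define VM_ENTRY_LOAD_GUEST_PAT 0x00004000 /* VM-entry control bit 14 */

    static bool_t vmentry_can_load_pat(void)   /* hypothetical helper */
    {
        uint32_t allowed0, allowed1;

        /* High dword of this MSR: allowed-1 settings of VM-entry controls. */
        rdmsr(MSR_IA32_VMX_ENTRY_CTLS, allowed0, allowed1);
        return !!(allowed1 & VM_ENTRY_LOAD_GUEST_PAT);
    }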
Thoughts?
>>>> +    /*
>>>> +     * When setting cr0.cd:
>>>> +     * 1. For a guest w/o VT-d, and for a guest with VT-d but snooped,
>>>> +     *    Xen need not do anything, since the hardware snoop mechanism
>>>> +     *    has ensured cache coherency;
>>>> +     * 2. For a guest with VT-d but non-snooped, cache coherency cannot
>>>> +     *    be guaranteed by h/w, so we need to emulate UC memory type for
>>>> +     *    the guest.
>>>> +     */
>>>> +    if ( ((value ^ old_value) & X86_CR0_CD) &&
>>>> +         has_arch_mmios(v->domain) &&
>>>
>>> has_arch_mmios() has been wrong (too weak) here originally, so I
>>> strongly suggest replacing it here as you modify this logic anyway:
>>> This check must consider not only non-empty iomem_caps, but also
>>> non-empty ioport_caps.
>>
>> OK, or, how about has_arch_pdevs()?
>
> Yes, precisely that one.
>
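For readers following the thread, the distinction, as I recall the
headers, is roughly the following (approximate definitions from
memory; please check the actual tree):

    /* Non-empty MMIO ranges only; misses I/O-port-only assignments. */
    #define has_arch_mmios(d)  (!rangeset_is_empty((d)->iomem_caps))

    /* Any physical PCI device assigned to the domain, covering both
     * MMIO and I/O port resources. */
    #define has_arch_pdevs(d)  (!list_empty(&(d)->arch.pdev_list))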
>>>> +static void vmx_handle_cd(struct vcpu *v, unsigned long value)
>>>> +{
>>>> +    if ( !paging_mode_hap(v->domain) ||
>>>> +         unlikely(!cpu_has_vmx_pat) ) /* Too old for EPT, just to be safe */
>>>> +    {
>>>> +        /*
>>>> +         * For shadow, 'load IA32_PAT' VM-entry control is 0, so it cannot
>>>> +         * set guest memory type as UC via IA32_PAT. Xen drops all shadows
>>>> +         * so that any new ones would be created on demand.
>>>> +         */
>>>> +        hvm_handle_cd_traditional(v, value);
>>>> +    }
>>>> +    else
>>>> +    {
>>>> +        u64 *pat = &v->arch.hvm_vcpu.pat_cr;
>>>> +
>>>> +        if ( value & X86_CR0_CD )
>>>> +        {
>>>> +            /*
>>>> +             * For EPT, set guest IA32_PAT fields as UC so that guest
>>>> +             * memory types are all UC.
>>>> +             */
>>>> +            u64 uc_pat =
>>>> +                ((uint64_t)PAT_TYPE_UNCACHABLE)       |  /* PAT0 */
>>>> +                ((uint64_t)PAT_TYPE_UNCACHABLE << 8)  |  /* PAT1 */
>>>> +                ((uint64_t)PAT_TYPE_UNCACHABLE << 16) |  /* PAT2 */
>>>> +                ((uint64_t)PAT_TYPE_UNCACHABLE << 24) |  /* PAT3 */
>>>> +                ((uint64_t)PAT_TYPE_UNCACHABLE << 32) |  /* PAT4 */
>>>> +                ((uint64_t)PAT_TYPE_UNCACHABLE << 40) |  /* PAT5 */
>>>> +                ((uint64_t)PAT_TYPE_UNCACHABLE << 48) |  /* PAT6 */
>>>> +                ((uint64_t)PAT_TYPE_UNCACHABLE << 56);   /* PAT7 */
>>>> +
>>>> +            vmx_get_guest_pat(v, pat);
>>>> +            vmx_set_guest_pat(v, uc_pat);
>>>> +
>>>> +            wbinvd();
>>>> +            v->arch.hvm_vcpu.cache_mode = NO_FILL_CACHE_MODE;
>>>> +        }
>>>> +        else
>>>> +        {
>>>> +            vmx_set_guest_pat(v, *pat);
>>>> +            v->arch.hvm_vcpu.cache_mode = NORMAL_CACHE_MODE;
>>>> +        }
>>>
>>> All this is done only on the current vCPU, and I don't see any
>>> mechanism by which the other vCPU-s of the domain would
>>> similarly get their PAT overridden (yet the whole domain gets
>>> transitioned to UC mode). This minimally needs more cleanup (i.e.
>>> it should be uniformly just the local vCPU or uniformly the whole
>>> domain that is affected here).
>>>
>>> And if going the local-vCPU-only route you need to make clear
>>> (via code comment I would think) that this is safe to do when
>>> two of the guest's vCPU-s with different CR0.CD settings run
>>> on two hyperthreads on the same core.
>>>
>>
>> For Intel processors, CR0.CD operation is per vCPU, not per domain.
>> It's OK to run different CPUs w/ different CR0.CD and different
>> cacheability (for AMD, CR0.CD seemingly should be identical across
>> CPUs, but AMD doesn't have the XSA-60 issue we are talking about).
>>
>> IA32_PAT is a separate MSR per logical processor, so it's safe to
>> run two vCPUs on two hyperthreads of the same core. I will add code
>> comments.
>
> So are you saying that no conflicts will arise in TLBs getting
> populated with different cachability information from the two
> hyperthreads?
>
No, I meant it's unrelated to hyperthreads/cores.
But yes, we indeed need to add a TLB flush to invalidate the old
memory type info cached, via hvm_asid_flush_vcpu(); a sketch of what
I have in mind follows:
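    /*
     * Sketch only: a hypothetical helper showing where the flush would
     * sit relative to the PAT update in vmx_handle_cd() above.
     */
    static void vmx_update_guest_pat(struct vcpu *v, u64 pat)
    {
        vmx_set_guest_pat(v, pat);
        /* Fresh ASID: stale TLB entries carrying the old memory type
         * can no longer be used by this vCPU. */
        hvm_asid_flush_vcpu(v);
    }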
Thanks,
Jinsong