Re: [Xen-devel] [PATCH v2] x86/shadow: Correct guest behaviour when creating PTEs above maxphysaddr
On 14/02/17 17:56, Andrew Cooper wrote:
> On 14/02/17 17:49, George Dunlap wrote:
>> On 14/02/17 17:45, Andrew Cooper wrote:
>>> On 14/02/17 17:42, George Dunlap wrote:
>>>> On 13/02/17 11:00, Andrew Cooper wrote:
>>>>> XSA-173 (c/s 8b1764833) introduces gfn_bits, and an upper limit which
>>>>> might be lower than the real maxphysaddr, to avoid overflowing the
>>>>> superpage shadow backpointer.
>>>>>
>>>>> However, plenty of hardware has a physical address width less than 44
>>>>> bits, and the code added in shadow_domain_init() is a straight
>>>>> assignment.  This causes gfn_bits to be increased beyond the physical
>>>>> address width on most Intel consumer hardware (typically a width of 39,
>>>>> which is the number reported to the guest via CPUID).
>>>>>
>>>>> If the guest intentionally creates a PTE referencing a physical address
>>>>> between 39 and 44 bits, the result should be #PF[RSVD] for using the
>>>>> virtual address.  However, the shadow code accepts the PTE, shadows it,
>>>>> and the virtual address works normally.
>>>>>
>>>>> Introduce paging_max_paddr_bits() to calculate the largest guest
>>>>> physical address supportable by the paging infrastructure, and update
>>>>> recalculate_cpuid_policy() to take this into account when clamping the
>>>>> guest's cpuid_policy to reality.  Remove gfn_bits and rework its users
>>>>> in terms of a guest's maxphysaddr.
>>>>>
>>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>>>>> ---
>>>>> CC: Jan Beulich <JBeulich@xxxxxxxx>
>>>>> CC: Tim Deegan <tim@xxxxxxx>
>>>>> CC: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
>>>>> CC: Jun Nakajima <jun.nakajima@xxxxxxxxx>
>>>>> CC: Kevin Tian <kevin.tian@xxxxxxxxx>
>>>>>
>>>>> v2:
>>>>>  * Introduce paging_max_paddr_bits() rather than moving paging logic
>>>>>    into recalculate_cpuid_policy().
>>>>> * Rewrite half of the commit message.
>>>>> ---
>>>>>  xen/arch/x86/cpuid.c            |  7 +++----
>>>>>  xen/arch/x86/hvm/vmx/vvmx.c     |  2 +-
>>>>>  xen/arch/x86/mm/guest_walk.c    |  3 ++-
>>>>>  xen/arch/x86/mm/hap/hap.c       |  2 --
>>>>>  xen/arch/x86/mm/p2m.c           |  3 ++-
>>>>>  xen/arch/x86/mm/shadow/common.c | 10 ----------
>>>>>  xen/arch/x86/mm/shadow/multi.c  |  3 ++-
>>>>>  xen/include/asm-x86/domain.h    |  3 ---
>>>>>  xen/include/asm-x86/paging.h    | 16 ++++++++++++++++
>>>>>  9 files changed, 26 insertions(+), 23 deletions(-)
>>>>>
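The diffstat shows 16 lines added to xen/include/asm-x86/paging.h, but that hunk is not among the ones quoted below. Going by the commit message, the new helper plausibly takes the following shape; the BIGMEM conditional and the exact predicates are assumptions inferred from the description (superpage shadows overflowing above 44 bits), not the patch's confirmed contents:

/* Sketch only: largest guest physical address width the paging
 * infrastructure can support for this domain. */
static inline unsigned int paging_max_paddr_bits(const struct domain *d)
{
    unsigned int bits = paging_mode_hap(d) ? hap_paddr_bits : paddr_bits;

    if ( !IS_ENABLED(CONFIG_BIGMEM) && paging_mode_shadow(d) &&
         !is_pv_domain(d) )
    {
        /* Shadowed superpages store GFNs in 32-bit page_info fields,
         * hence the 44-bit (32 + PAGE_SHIFT) limit the commit message
         * refers to. */
        bits = min(bits, 32U + PAGE_SHIFT);
    }

    return bits;
}
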
>>>>> diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
>>>>> index e0a387e..3378f7a 100644
>>>>> --- a/xen/arch/x86/cpuid.c
>>>>> +++ b/xen/arch/x86/cpuid.c
>>>>> @@ -6,6 +6,7 @@
>>>>>  #include <asm/hvm/nestedhvm.h>
>>>>>  #include <asm/hvm/svm/svm.h>
>>>>>  #include <asm/hvm/vmx/vmcs.h>
>>>>> +#include <asm/paging.h>
>>>>>  #include <asm/processor.h>
>>>>>  #include <asm/xstate.h>
>>>>>
>>>>> @@ -502,11 +503,9 @@ void recalculate_cpuid_policy(struct domain *d)
>>>>>
>>>>>      cpuid_featureset_to_policy(fs, p);
>>>>>
>>>>> -    p->extd.maxphysaddr = min(p->extd.maxphysaddr, max->extd.maxphysaddr);
>>>>>      p->extd.maxphysaddr = min_t(uint8_t, p->extd.maxphysaddr,
>>>>> -                                d->arch.paging.gfn_bits + PAGE_SHIFT);
>>>>> -    p->extd.maxphysaddr = max_t(uint8_t, p->extd.maxphysaddr,
>>>>> -                                (p->basic.pae || p->basic.pse36) ? 36 : 32);
>>>>> +                                paging_max_paddr_bits(d));
>>>>> +    p->extd.maxphysaddr = max_t(uint8_t, p->extd.maxphysaddr, 32);
>>>>>
>>>>>      p->extd.maxlinaddr = p->extd.lm ? 48 : 32;
>>>>>
>>>>> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
>>>>> index 9c61b5b..774a11f 100644
>>>>> --- a/xen/arch/x86/hvm/vmx/vvmx.c
>>>>> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
>>>>> @@ -1381,7 +1381,7 @@ int nvmx_handle_vmxon(struct cpu_user_regs *regs)
>>>>>      }
>>>>>
>>>>>      if ( (gpa & ~PAGE_MASK) ||
>>>>> -         (gpa >> (v->domain->arch.paging.gfn_bits + PAGE_SHIFT)) )
>>>>> +         (gpa >> v->domain->arch.cpuid->extd.maxphysaddr) )
>>>>>      {
>>>>>          vmfail_invalid(regs);
>>>>>          return X86EMUL_OKAY;
>>>>> diff --git a/xen/arch/x86/mm/guest_walk.c b/xen/arch/x86/mm/guest_walk.c
>>>>> index a67fd5a..5ad8cf6 100644
>>>>> --- a/xen/arch/x86/mm/guest_walk.c
>>>>> +++ b/xen/arch/x86/mm/guest_walk.c
>>>>> @@ -435,7 +435,8 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
>>>>>      /* If this guest has a restricted physical address space then the
>>>>>       * target GFN must fit within it. */
>>>>>      if ( !(rc & _PAGE_PRESENT)
>>>>> -         && gfn_x(guest_l1e_get_gfn(gw->l1e)) >> d->arch.paging.gfn_bits )
>>>>> +         && gfn_x(guest_l1e_get_gfn(gw->l1e)) >>
>>>>> +         (d->arch.cpuid->extd.maxphysaddr - PAGE_SHIFT) )
>>>> This pattern, of taking a gfn and shifting it by
>>>> (cpuid->extd.maxphysaddr - PAGE_SHIFT) to see if it's valid, happens
>>>> several times; it seems like, for both clarity and to avoid mistakes,
>>>> it would be better if it were put into a macro.
>>>>
>>>> Everything else looks good to me. (No opinion on the other questions
>>>> raised so far.)
>>> static inline unsigned int gfn_bits(const struct domain *d)
>>> {
>>>     return d->arch.cpuid->extd.maxphysaddr - PAGE_SHIFT;
>>> }
>>>
>>> ?
>>>
>>> I do like that idea. It would certainly make all of the callsites more
>>> readable.
>>>
>>> I can happily fold that change in if others agree.
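For illustration, with a helper along those lines the guest_walk_tables() check quoted earlier would read something like this (a sketch; the rc |= _PAGE_INVALID_BITS outcome is inferred from the surrounding code rather than quoted above):

    if ( !(rc & _PAGE_PRESENT)
         && gfn_x(guest_l1e_get_gfn(gw->l1e)) >> gfn_bits(d) )
        rc |= _PAGE_INVALID_BITS;  /* reserved bits set: yields #PF[RSVD] */
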
>> I was thinking of going further:
>>
>> static inline bool guest_gfn_valid(const struct domain *d, gfn_t gfn)
>> {
>>     return !(gfn_x(gfn) >> (d->arch.cpuid...) );
>> }
>>
>> ("Valid" might be a bit ambiguous, but I can't think of a better
>> description off the top of my head.)
>
> Hmm.  That would be OK for the mm code, but it is more awkward in
> nvmx_handle_vmxon(), which works in terms of gpa rather than gfn.
>
> Of course, that could be worked around with _gfn(gpa >> PAGE_SHIFT) so
> isn't the end of the world.
Or you could make guest_gpa_valid() and have guest_gfn_valid().  But I
don't mind too much as long as there's some sort of macro that makes it
easier to read and harder to make a mistake.
-George
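
Pulling the thread's suggestions together, a minimal sketch of the pair of helpers under discussion (names, signatures, and placement are hypothetical at this point in the thread, not committed code):

/* Hypothetical helpers along the lines discussed above. */
static inline bool guest_gfn_valid(const struct domain *d, gfn_t gfn)
{
    /* A GFN is representable iff it fits below the guest's maxphysaddr. */
    return !(gfn_x(gfn) >> (d->arch.cpuid->extd.maxphysaddr - PAGE_SHIFT));
}

static inline bool guest_gpa_valid(const struct domain *d, paddr_t gpa)
{
    /* Same check, phrased for byte addresses as nvmx_handle_vmxon() uses. */
    return !(gpa >> d->arch.cpuid->extd.maxphysaddr);
}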