Re: [Xen-devel] [PATCH] x86/PV: properly populate descriptor tables
On 26/10/15 15:08, Jan Beulich wrote:
>>>> On 26.10.15 at 15:58, <andrew.cooper3@xxxxxxxxxx> wrote:
>> On 26/10/15 14:55, Andrew Cooper wrote:
>>> On 26/10/15 14:43, David Vrabel wrote:
>>>> On 23/09/15 16:34, Jan Beulich wrote:
>>>>> Our extending the GDT limit past the Xen descriptors has so far meant
>>>>> that guest accesses (including from user mode programs) to any
>>>>> descriptor table slot above the original OS's limit but below the first
>>>>> Xen descriptor caused a #PF, converted to a #GP in our #PF handler.
>>>>> This is quite different from native behavior, where some such accesses
>>>>> (LAR and LSL) don't fault. Mimic that behavior by mapping a blank page
>>>>> into unused slots.
>>>>>
>>>>> While not strictly required, treat the LDT the same for consistency.
>>>> This change causes a 32-bit userspace process running in a 32-bit PV
>>>> guest to segfault.
>>>>
>>>> The process is a Go program and it is using the modify_ldt() system call
>>>> (which is successful) but loading %gs with the new descriptor causes a
>>>> fault. Even a minimal (empty main()) go program faults.
>>> D'uh - it's obvious now you point it out.
>>>
>>> By filling the shadow ldt slots as present, zero entries, we break their
>>> demand-faulting.
>>>
>>> We can't both be safe against spurious faults from LAR/LSL *and*
>>> perform demand faulting of the LDT.
>>
>> Wait. Yes we can. I am talking nonsense.
>>
>> Hunk 2 should be reverted, and the demand fault handler should populate
>> a zero entry rather than passing #GP back to the guest.
>
> Considering this
>
> "While not strictly required, treat the LDT the same for consistency."
>
> in the changelog, simply reverting the LDT part would seem
> sufficient to me (albeit that's more than just hunk 2 afaics).
Applying this partial revert fixes the problem for me.
8<------------------------
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -502,8 +502,8 @@ void update_cr3(struct vcpu *v)
static void invalidate_shadow_ldt(struct vcpu *v, int flush)
{
l1_pgentry_t *pl1e;
- unsigned int i;
- unsigned long pfn, zero_pfn = PFN_DOWN(__pa(zero_page));
+ int i;
+ unsigned long pfn;
struct page_info *page;
BUG_ON(unlikely(in_irq()));
@@ -524,9 +523,8 @@ static void invalidate_shadow_ldt(struct vcpu *v, int flush)
for ( i = 16; i < 32; i++ )
{
pfn = l1e_get_pfn(pl1e[i]);
- if ( !(l1e_get_flags(pl1e[i]) & _PAGE_PRESENT) || pfn == zero_pfn )
- continue;
- l1e_write(&pl1e[i], l1e_from_pfn(zero_pfn, __PAGE_HYPERVISOR_RO));
+ if ( pfn == 0 ) continue;
+ l1e_write(&pl1e[i], l1e_empty());
page = mfn_to_page(pfn);
ASSERT_PAGE_IS_TYPE(page, PGT_seg_desc_page);
ASSERT_PAGE_IS_DOMAIN(page, v->domain);
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel