Re: [Xen-devel] [PATCH v2 1/6] x86/xpti: avoid copying L4 page table contents when possible
>>> On 02.03.18 at 09:13, <jgross@xxxxxxxx> wrote:
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -509,6 +509,8 @@ void make_cr3(struct vcpu *v, mfn_t mfn)
>
> void write_ptbase(struct vcpu *v)
> {
> +    get_cpu_info()->root_pgt_changed = this_cpu(root_pgt) && is_pv_vcpu(v) &&
> +                                       !is_pv_32bit_vcpu(v);
Why is_pv_vcpu() when you already check is_pv_32bit_vcpu()?
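For reference, the two predicates nest roughly as follows (a simplified
sketch of their assumed shapes, not the literal Xen definitions):

/* Simplified sketch -- assumed shapes, not the literal Xen definitions. */
static inline bool is_pv_vcpu(const struct vcpu *v)
{
    /* True for any PV vcpu, 32- or 64-bit. */
    return is_pv_domain(v->domain);
}

static inline bool is_pv_32bit_vcpu(const struct vcpu *v)
{
    /* Only ever true for PV vcpus, hence it implies is_pv_vcpu(v). */
    return is_pv_vcpu(v) && v->domain->arch.is_32bit_pv;
}

Since !is_pv_32bit_vcpu(v) alone is also true for HVM vcpus, the question
amounts to whether this_cpu(root_pgt) already rules those out here.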
> @@ -3704,18 +3706,22 @@ long do_mmu_update(
>                      break;
>                  rc = mod_l4_entry(va, l4e_from_intpte(req.val), mfn,
>                                    cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
> -                /*
> -                 * No need to sync if all uses of the page can be accounted
> -                 * to the page lock we hold, its pinned status, and uses on
> -                 * this (v)CPU.
> -                 */
> -                if ( !rc && !cpu_has_no_xpti &&
> -                     ((page->u.inuse.type_info & PGT_count_mask) >
> -                      (1 + !!(page->u.inuse.type_info & PGT_pinned) +
> -                       (pagetable_get_pfn(curr->arch.guest_table) == mfn) +
> -                       (pagetable_get_pfn(curr->arch.guest_table_user) ==
> -                        mfn))) )
> -                    sync_guest = true;
> +                if ( !rc && !cpu_has_no_xpti )
> +                {
> +                    get_cpu_info()->root_pgt_changed = true;
Why would you set this when a foreign domain's L4 got updated?
And don't you need to disallow updating L4s of running guests now
(which is a bad idea anyway)?
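One shape an answer could take (a sketch only, reusing the names from the
quoted hunk and keeping the accounting the removed code did): only flag the
local root page table when the modified L4 is the one this CPU is actually
running on, and keep the remote-sync heuristic for everything else.

if ( !rc && !cpu_has_no_xpti )
{
    bool local_in_use = false;

    /* The local copy is stale only if this CPU currently runs on the
     * modified L4. */
    if ( pagetable_get_pfn(curr->arch.guest_table) == mfn )
    {
        local_in_use = true;
        get_cpu_info()->root_pgt_changed = true;
    }

    /* No need to sync remotely if all uses of the page are accounted
     * to the page lock held, its pinned status, and this (v)CPU. */
    if ( (page->u.inuse.type_info & PGT_count_mask) >
         (1 + !!(page->u.inuse.type_info & PGT_pinned) +
          (pagetable_get_pfn(curr->arch.guest_table_user) == mfn) +
          local_in_use) )
        sync_guest = true;
}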
> --- a/xen/arch/x86/smp.c
> +++ b/xen/arch/x86/smp.c
> @@ -207,6 +207,8 @@ void invalidate_interrupt(struct cpu_user_regs *regs)
>      unsigned int flags = flush_flags;
>      ack_APIC_irq();
>      perfc_incr(ipis);
> +    if ( flags & FLUSH_ROOT_PGTBL )
> +        get_cpu_info()->root_pgt_changed = true;
While for the caller in do_mmu_update() you don't need it, for
full correctness you also want to do this in flush_area_mask()
for the sender, if it's part of the CPU mask.
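For illustration, the sender-side counterpart could look roughly like this
(the overall structure of flush_area_mask() is assumed here; only the
FLUSH_ROOT_PGTBL handling is the point):

void flush_area_mask(const cpumask_t *mask, const void *va, unsigned int flags)
{
    unsigned int cpu = smp_processor_id();

    /* The IPI handler above only runs on the *other* CPUs, so the
     * sender has to mark its own root page table when it is part of
     * the mask. */
    if ( cpumask_test_cpu(cpu, mask) )
    {
        if ( flags & FLUSH_ROOT_PGTBL )
            get_cpu_info()->root_pgt_changed = true;
        flags = flush_area_local(va, flags);
    }

    /* ... then IPI the remaining CPUs in the mask as before ... */
}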
Jan