Re: [Xen-devel] [PATCH v2 1/1] xen: move TLB-flush filtering out into populate_physmap
On Wed, Sep 07, 2016 at 12:02:33AM -0700, Dongli Zhang wrote:
> > But since what Dongli cares about at the moment is domain creation, it
> > certainly won't hurt to limit this optimization to domain creation time;
> > then we can discuss enabling it for ballooning when someone finds it to
> > be an issue.
>
> Thank you all very much for the feedback. The current limitation only
> impacts VM creation time, unless someone balloons 100+ GB of memory.
>
> To limit this optimization to domain creation time, what do you think about
> adding the following rules to Xen and the toolstack? Are the rules reasonable?
>
> Rule 1. It is the toolstack's responsibility to set the "MEMF_no_tlbflush"
> bit in memflags. Toolstack developers should take care that
> "MEMF_no_tlbflush" is never used after VM creation is finished.
>
Is it possible to have a safety catch for this in the hypervisor? In
general, IMHO we should avoid providing an interface that makes it
possible to create a security problem.
> Rule 2. Xen should check at the hypervisor level that MEMF_no_tlbflush is
> allowed to be set only when the current populate_physmap operation is
> initiated by dom0. Otherwise, MEMF_no_tlbflush should be masked out of
> memflags if ( d == curr_d && d->domain_id != 0 ). Therefore, this patch
> would not impact the security of memory ballooning operations.
>
This really reads like some sort of (incomplete) safety check.
We don't need Rule 1 if the hypervisor knows when, or by whom, that flag
is allowed to be used. I understand there might be difficulty in
achieving that, though.
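Just to illustrate the direction I have in mind, here is a rough, untested
sketch. The "creation_finished" field is invented for this example and does
not exist in the tree today: the hypervisor could remember whether a domain
has ever been unpaused and treat that as the end of domain construction.

    /* Hypothetical new field in struct domain (xen/include/xen/sched.h). */
    bool creation_finished;   /* set once the domain is first unpaused */

    /* In domain_unpause_by_systemcontroller() (xen/common/domain.c):
     * the first unpause by the toolstack ends the construction phase. */
    if ( !d->creation_finished )
        d->creation_finished = true;

With something like that in place the hypervisor itself knows when the
optimisation is safe, and we would not need to rely on the toolstack
following Rule 1.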
> Would you please let me know if the rules above are reasonable? Are there
> any suggestions for better solutions? xc_dom_boot_mem_init is very close to
> the end of VM creation, and I could not come up with a better solution short
> of adding a new XEN_DOMCTL operation to explicitly notify Xen when
> xc_dom_boot_mem_init is finished, so that MEMF_no_tlbflush can no longer
> be used.
>
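If we did go the explicit-notification route you describe, it would look
roughly like the sketch below; the sub-op name is entirely made up and
nothing like it exists today. I would still prefer the hypervisor to derive
this on its own (e.g. from the first unpause, as sketched above) rather than
adding a new interface for it.

    /* Purely hypothetical DOMCTL sub-op, handled in the big switch in
     * xen/common/domctl.c, which the toolstack would issue once
     * xc_dom_boot_mem_init has finished. */
    case XEN_DOMCTL_notify_populate_done:
        d->creation_finished = true;   /* reuse the flag sketched above */
        ret = 0;
        break;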
> I have copied the patches related to the above rules into this email. I will
> send the patches out for review if the above solution works.
>
>
> @@ -150,6 +152,11 @@ static void populate_physmap(struct memop_args *a)
>                              max_order(curr_d)) )
>          return;
>
> +    /* MEMF_no_tlbflush is masked out if the current populate_physmap
> +     * operation is not initiated by dom0. */
> +    if ( d == curr_d && d->domain_id != 0 )
> +        a->memflags &= ~MEMF_no_tlbflush;
> +
This check is incomplete. Please take into account a scenario in which a
domain builder domain is used.
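To make that concrete, a rough sketch (untested, and reusing the
hypothetical "creation_finished" state from the sketch above): keying the
check off the state of the target domain, rather than off the caller's
domid, would also cover a domain builder running outside dom0.

    /* Only honour MEMF_no_tlbflush while the target domain is still being
     * constructed; mask it afterwards, regardless of which domain issues
     * the hypercall. */
    if ( d->creation_finished )
        a->memflags &= ~MEMF_no_tlbflush;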
>      for ( i = a->nr_done; i < a->nr_extents; i++ )
>      {
>          if ( i != a->nr_done && hypercall_preempt_check() )
>
>
> @@ -1185,7 +1185,7 @@ static int meminit_pv(struct xc_dom_image *dom)
>              xen_pfn_t extents[SUPERPAGE_BATCH_SIZE];
>              xen_pfn_t pfn_base_idx;
>
> -            memflags = 0;
> +            memflags = MEMF_no_tlbflush;
>              if ( pnode != XC_NUMA_NO_NODE )
>                  memflags |= XENMEMF_exact_node(pnode);
>
> @@ -1273,7 +1273,7 @@ static int meminit_hvm(struct xc_dom_image *dom)
>      int rc;
>      unsigned long stat_normal_pages = 0, stat_2mb_pages = 0,
>          stat_1gb_pages = 0;
> -    unsigned int memflags = 0;
> +    unsigned int memflags = MEMF_no_tlbflush;
>      int claim_enabled = dom->claim_enabled;
>      uint64_t total_pages;
>      xen_vmemrange_t dummy_vmemrange[2];
>
>
> Dongli Zhang
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel