Re: [Xen-devel] [PATCH 2/3] VMX: allocate VMCS pages from domain heap
>>> On 20.10.15 at 12:12, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 19/10/15 16:22, Jan Beulich wrote:
>> -static struct vmcs_struct *vmx_alloc_vmcs(void)
>> +static paddr_t vmx_alloc_vmcs(void)
>>  {
>> +    struct page_info *pg;
>>      struct vmcs_struct *vmcs;
>>
>> -    if ( (vmcs = alloc_xenheap_page()) == NULL )
>> +    if ( (pg = alloc_domheap_page(NULL, 0)) == NULL )
>
> As an observation, it would be good to pass v from the caller, and NUMA
> allocate against v->domain here.

Yes, in another patch.

>> @@ -580,7 +583,7 @@ int vmx_cpu_up_prepare(unsigned int cpu)
>>  void vmx_cpu_dead(unsigned int cpu)
>>  {
>>      vmx_free_vmcs(per_cpu(vmxon_region, cpu));
>> -    per_cpu(vmxon_region, cpu) = NULL;
>> +    per_cpu(vmxon_region, cpu) = 0;
>
> While this is currently safe (as pa 0 is not part of the available heap
> allocation range), might it be worth introducing a named sentinel? I
> can foresee a DMLite nested Xen scenario where we definitely don't need
> to treat the low 1MB magically.

I guess there are more things to adjust if we ever cared to recover the
few hundred kB below 1MB. And then I don't see why nested Xen would
matter here: One major reason for reserving that space is that we want
to put the trampoline there. Do you think DMLite would allow us to get
away without? But even if so, this would again fall under what I've said
in the first sentence.

>> --- a/xen/arch/x86/hvm/vmx/vvmx.c
>> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
>> @@ -56,13 +56,14 @@ int nvmx_vcpu_initialise(struct vcpu *v)
>>  {
>>      struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
>>      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
>> +    struct page_info *pg = alloc_domheap_page(NULL, 0);
>
> Again - this can be NUMA allocated with v->domain.

In that same other patch I would say.

Jan
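
For illustration only, a minimal sketch of what the NUMA-aware allocation
suggested in the review might look like. It assumes the usual Xen heap and
mapping helpers (alloc_domheap_page(), MEMF_node(), domain_to_node(),
__map_domain_page(), page_to_maddr()) and the vmcs_struct /
vmcs_revision_id names visible in the quoted patch context; it is a sketch
of the idea being discussed, not the code that was committed.

    static paddr_t vmx_alloc_vmcs(struct vcpu *v)
    {
        struct page_info *pg;
        struct vmcs_struct *vmcs;

        /*
         * NULL owner keeps the page anonymous (not assigned to the guest);
         * MEMF_node() merely hints the allocator to place the page on the
         * NUMA node hosting v's domain.
         */
        pg = alloc_domheap_page(NULL, MEMF_node(domain_to_node(v->domain)));
        if ( pg == NULL )
        {
            gdprintk(XENLOG_WARNING, "Failed to allocate VMCS.\n");
            return 0;
        }

        /* Domheap pages are not permanently mapped: map, initialise, unmap. */
        vmcs = __map_domain_page(pg);
        clear_page(vmcs);
        vmcs->vmcs_revision_id = vmcs_revision_id;
        unmap_domain_page(vmcs);

        return page_to_maddr(pg);
    }

The key point of the discussion is visible in the allocation call: the
owning-domain argument stays NULL, so the page is not charged to the guest,
while the MEMF_node() flag only steers placement, which is the behaviour
Andrew asks for and which Jan defers to a follow-up patch.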