Re: [PATCH v2 (resend) 07/27] x86: Map/unmap pages in restore_all_guests
On 30.04.2024 18:08, Elias El Yandouzi wrote:
>>> --- a/xen/arch/x86/pv/domain.c
>>> +++ b/xen/arch/x86/pv/domain.c
>>> @@ -288,6 +288,19 @@ static void pv_destroy_gdt_ldt_l1tab(struct vcpu *v)
>>>                                1U << GDT_LDT_VCPU_SHIFT);
>>>  }
>>>
>>> +static int pv_create_shadow_root_pt_l1tab(struct vcpu *v)
>>> +{
>>> +    return create_perdomain_mapping(v->domain, SHADOW_ROOT_PT_VCPU_VIRT_START(v),
>>
>> This line looks to be too long. But ...
>>
>>> +                                    1, v->domain->arch.pv.shadow_root_pt_l1tab,
>>> +                                    NULL);
>>> +}
>>> +
>>> +static void pv_destroy_shadow_root_pt_l1tab(struct vcpu *v)
>>> +
>>> +{
>>> +    destroy_perdomain_mapping(v->domain, SHADOW_ROOT_PT_VCPU_VIRT_START(v), 1);
>>> +}
>>
>> ... I'm not convinced of the usefulness of these wrapper functions
>> anyway, even more so that each is used exactly once.
>
> The wrappers have been introduced to remain consistent with what has
> been done with the GDT/LDT table. I would like to keep them if you don't mind.

Hmm, yes, I can see your point.

>>> @@ -371,6 +394,12 @@ int pv_domain_initialise(struct domain *d)
>>>          goto fail;
>>>      clear_page(d->arch.pv.gdt_ldt_l1tab);
>>>
>>> +    d->arch.pv.shadow_root_pt_l1tab =
>>> +        alloc_xenheap_pages(0, MEMF_node(domain_to_node(d)));
>>> +    if ( !d->arch.pv.shadow_root_pt_l1tab )
>>> +        goto fail;
>>> +    clear_page(d->arch.pv.shadow_root_pt_l1tab);
>>
>> Looks like you simply cloned the GDT/LDT code. That's covering 128k
>> of VA space per vCPU, though, while here you'd be using only 4k. Hence
>> using a full page looks like a factor-32 over-allocation. And once
>> using xzalloc() here instead, a further question would be whether to
>> limit it to the domain's actual needs - most domains will have far fewer
>> than 8k vCPU-s. In the common case (up to 512 vCPU-s) a single slot
>> will suffice, at which point a yet further question would be whether
>> to embed the "array" in struct pv_domain instead in that common case
>> (e.g. by using a union).
>
> I have to admit I don't really understand your suggestion. Could you
> elaborate a bit more?

The (per-vCPU) GDT and LDT together take up 128k of VA space, whereas
you need only 4k. Therefore I was asking why you're over-allocating by
so much.

Jan
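
For illustration, a minimal sketch of the sizing being suggested above (not
part of the series; it assumes shadow_root_pt_l1tab is an array of
l1_pgentry_t pointers, as gdt_ldt_l1tab is, and that d->max_vcpus is already
set when pv_domain_initialise() runs):

    /*
     * Illustrative sketch only: size the array to the domain's actual
     * needs instead of allocating a full page of pointers.  With one 4k
     * slot per vCPU, each L1 table (L1_PAGETABLE_ENTRIES == 512 entries)
     * covers 512 vCPUs, so most domains need just a single array element.
     */
    unsigned int nr = DIV_ROUND_UP(d->max_vcpus, L1_PAGETABLE_ENTRIES);

    d->arch.pv.shadow_root_pt_l1tab = xzalloc_array(l1_pgentry_t *, nr);
    if ( !d->arch.pv.shadow_root_pt_l1tab )
        goto fail;

Since xzalloc_array() returns zeroed memory, the clear_page() call would go
away. The further step mentioned would be to special-case nr == 1, e.g. by
embedding a single l1_pgentry_t pointer in struct pv_domain via a union and
skipping the heap allocation entirely in that common case.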