Re: [Xen-devel] [PATCH v3 1/5] x86/hyperv: setup hypercall page
On Sun, Jan 05, 2020 at 05:37:44PM +0000, Andrew Cooper wrote:
> On 05/01/2020 16:47, Wei Liu wrote:
> > diff --git a/xen/arch/x86/guest/hyperv/Makefile b/xen/arch/x86/guest/hyperv/Makefile
> > index 68170109a9..1a8887d2f4 100644
> > --- a/xen/arch/x86/guest/hyperv/Makefile
> > +++ b/xen/arch/x86/guest/hyperv/Makefile
> > @@ -1 +1,2 @@
> > +obj-y += hypercall_page.o
> >  obj-y += hyperv.o
> > diff --git a/xen/arch/x86/guest/hyperv/hypercall_page.S b/xen/arch/x86/guest/hyperv/hypercall_page.S
> > new file mode 100644
> > index 0000000000..6d6ab913be
> > --- /dev/null
> > +++ b/xen/arch/x86/guest/hyperv/hypercall_page.S
> > @@ -0,0 +1,21 @@
> > +#include <asm/asm_defns.h>
> > +#include <asm/page.h>
> > +
> > +        .section ".text.page_aligned", "ax", @progbits
> > +        .p2align PAGE_SHIFT
> > +GLOBAL(hv_hypercall_page)
> > +        /* Return -1 for "not yet ready" state */
> > +        mov $-1, %rax
> > +        ret
> > +1:
> > +        /* Fill the rest with `ret` */
> > +        .fill PAGE_SIZE - (1b - hv_hypercall_page), 1, 0xc3
>
> If you want to fill with rets, you can do this more simply with:
>
> .p2align PAGE_SHIFT, 0xc3
>
> which will do the size calculation for you.
>
> That said, I retract my statement about wanting this in the middle of
> .text. (Sorry. See below.)
>
> > diff --git a/xen/arch/x86/guest/hyperv/hyperv.c b/xen/arch/x86/guest/hyperv/hyperv.c
> > index 8d38313d7a..381be2a68c 100644
> > --- a/xen/arch/x86/guest/hyperv/hyperv.c
> > +++ b/xen/arch/x86/guest/hyperv/hyperv.c
> > @@ -72,6 +72,27 @@ const struct hypervisor_ops *__init hyperv_probe(void)
> >      return &ops;
> >  }
> >
> > +static void __init setup_hypercall_page(void)
> > +{
> > +    union hv_x64_msr_hypercall_contents hypercall_msr;
> > +
> > +    rdmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
> > +    hypercall_msr.enable = 1;
> > +    hypercall_msr.guest_physical_address =
> > +        __pa(hv_hypercall_page) >> HV_HYP_PAGE_SHIFT;
> > +    wrmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
> > +}
> > +
> > +static void __init setup(void)
> > +{
> > +    setup_hypercall_page();
> > +}
>
> The TLFS says that writing enable will fail until the OS identity is
> set, which AFAICT isn't done anywhere in the series. The whole
> sequence is described in "3.13 Establishing the Hypercall Interface".

Good catch. I will make up an identity number for Xen. I will also
follow the sequence strictly.
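For concreteness, a rough sketch of that sequence in the context of the
hyperv.c hunk above (untested, and not the actual patch: the
HV_X64_MSR_GUEST_OS_ID name follows the TLFS/Linux convention, and the
guest identity value is a made-up placeholder):

    /* Made-up placeholder identity for Xen; the real value is TBD. */
    #define HV_XEN_GUEST_OS_ID  (1ULL << 63)

    static void __init setup_hypercall_page(void)
    {
        union hv_x64_msr_hypercall_contents hypercall_msr;
        uint64_t os_id;

        /* The hypercall page cannot be enabled before the OS identity is set. */
        rdmsrl(HV_X64_MSR_GUEST_OS_ID, os_id);
        if ( !os_id )
            wrmsrl(HV_X64_MSR_GUEST_OS_ID, HV_XEN_GUEST_OS_ID);

        /* Per the TLFS, only enable the page if it isn't enabled already. */
        rdmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
        if ( !hypercall_msr.enable )
        {
            hypercall_msr.enable = 1;
            hypercall_msr.guest_physical_address =
                __pa(hv_hypercall_page) >> HV_HYP_PAGE_SHIFT;
            wrmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
        }
    }

Checking the enable bit before writing also naturally covers the
already-enabled (kexec) case mentioned below.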
> The locked bit is probably a good idea, but one aspect missing here is
> the check to see whether the hypercall page is already enabled, which I
> expect is for a kexec crash scenario.
>
> However, the most important point is the one which describes the #GP
> properties of the guest trying to modify the page. This can only be
> achieved with an EPT/NPT mapping lacking the W permission, which will
> shatter host superpages. Therefore, putting it in .text is going to be
> rather poor, perf wise.
>
> I also note that Xen's implementation of the Viridian hypercall page
> doesn't conform to these properties, and wants fixing. It is going to
> need a new kind of identification of the page (probably a new p2m type)
> which injects #GP if we ever see an EPT_VIOLATION/NPT_FAULT against it.
>
> As for suggestions here, I'm struggling to find any memory map details
> exposed in the Viridian interface, and therefore which gfn is best to
> choose. I have a sinking feeling that the answer is ACPI...

The TLFS only says "go find a suitable page yourself" without further
hints. Since we're still quite far away from a functioning system,
finding the most suitable page isn't my top priority at this point. If
there is a simple way to extract suitable information from ACPI, that
would be great. If it requires writing a set of new functionality, then
that will need to wait until later.

Wei.

> ~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel