
Re: [Xen-devel] [PATCH v4 2/7] x86/hyperv: setup hypercall page



On Wed, Jan 22, 2020 at 09:31:52PM +0000, Andrew Cooper wrote:
> On 22/01/2020 20:23, Wei Liu wrote:
> > Use the top-most addressable page for that purpose. Adjust e820 code
> > accordingly.
> >
> > We also need to register Xen's guest OS ID to Hyper-V. Use 0x300 as the
> > OS type.
> >
> > Signed-off-by: Wei Liu <liuwe@xxxxxxxxxxxxx>
> > ---
> > XXX the decision on Xen's vendor ID is pending.
> 
> Presumably this is pending a published update to the TLFS?  (And I
> presume using 0x8088 is out of the question?  That is an X in the bottom
> byte, not a reference to an 8 bit microprocessor.)

We're discussing internally what Xen (and other OSes) should use.

There will be an update to the TLFS for this.

At this point I can say Xen can use the ID I wrote in this patch.

0x8088 is out of the question. :-)

> 
> > diff --git a/xen/arch/x86/e820.c b/xen/arch/x86/e820.c
> > index 082f9928a1..5a4ef27a0b 100644
> > --- a/xen/arch/x86/e820.c
> > +++ b/xen/arch/x86/e820.c
> > @@ -36,6 +36,22 @@ boolean_param("e820-verbose", e820_verbose);
> > @@ -357,6 +373,21 @@ static unsigned long __init find_max_pfn(void)
> >              max_pfn = end;
> >      }
> >  
> > +#ifdef CONFIG_HYPERV_GUEST
> > +    {
> > +   /*
> > +    * We reserve the top-most page for hypercall page. Adjust
> > +    * max_pfn if necessary.
> 
> It might be worth leaving a "TODO: Better algorithm/guess?" here.
> 

Ack.

> > +    */
> > +        unsigned int phys_bits = find_phys_addr_bits();
> > +        unsigned long hcall_pfn =
> > +          ((1ull << phys_bits) - 1) >> PAGE_SHIFT;
> 
> (1ull << (phys_bits - PAGE_SHIFT)) - 1 is equivalent, and doesn't
> require a right shift.  I don't know if the compiler is smart enough to
> make this optimisation automatically.

OK. I can change it.

> 
> > +
> > +        if ( max_pfn >= hcall_pfn )
> > +          max_pfn = hcall_pfn - 1;
> 
> Indentation looks weird.
> 

I blame Emacs.

> > @@ -446,13 +477,7 @@ static uint64_t __init mtrr_top_of_ram(void)
> >           return 0;
> >  
> >      /* Find the physical address size for this CPU. */
> > -    eax = cpuid_eax(0x80000000);
> > -    if ( (eax >> 16) == 0x8000 && eax >= 0x80000008 )
> > -    {
> > -        phys_bits = (uint8_t)cpuid_eax(0x80000008);
> > -        if ( phys_bits > PADDR_BITS )
> > -            phys_bits = PADDR_BITS;
> > -    }
> > +    phys_bits = find_phys_addr_bits();
> >      addr_mask = ((1ull << phys_bits) - 1) & ~((1ull << 12) - 1);
> 
> Note for whomever is next doing cleanup in this area.  This wants to be
> & PAGE_MASK.
> 

Ack.

> > diff --git a/xen/arch/x86/guest/hyperv/hyperv.c 
> > b/xen/arch/x86/guest/hyperv/hyperv.c
> > index 8d38313d7a..f986c1a805 100644
> > --- a/xen/arch/x86/guest/hyperv/hyperv.c
> > +++ b/xen/arch/x86/guest/hyperv/hyperv.c
> > @@ -72,6 +82,43 @@ const struct hypervisor_ops *__init hyperv_probe(void)
> >      return &ops;
> >  }
> >  
> > +static void __init setup_hypercall_page(void)
> > +{
> > +    union hv_x64_msr_hypercall_contents hypercall_msr;
> > +    union hv_guest_os_id guest_id;
> > +    unsigned long mfn;
> > +
> > +    rdmsrl(HV_X64_MSR_GUEST_OS_ID, guest_id.raw);
> > +    if ( !guest_id.raw )
> > +    {
> > +        guest_id.raw = generate_guest_id();
> > +        wrmsrl(HV_X64_MSR_GUEST_OS_ID, guest_id.raw);
> > +    }
> > +
> > +    rdmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
> > +    if ( !hypercall_msr.enable )
> > +    {
> > +        mfn = ((1ull << paddr_bits) - 1) >> HV_HYP_PAGE_SHIFT;
> > +        hypercall_msr.enable = 1;
> > +        hypercall_msr.guest_physical_address = mfn;
> > +        wrmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
> 
> Is it worth reading back, and BUG() if it is different?  It will be a
> more obvious failure than hypercalls disappearing mysteriously.
> 

Sure. I don't expect that BUG_ON to trigger in practice, though.

> > +    } else {
> > +        mfn = hypercall_msr.guest_physical_address;
> > +    }
> 
> Style.
> 

Will fix.

Wei.

> Otherwise, LGTM.
> 
> ~Andrew


 

