
Re: [Xen-devel] [PATCH RFC] x86/hvm: unify HVM and PVH hypercall tables.



At 14:39 -0400 on 08 May (1399556383), Konrad Rzeszutek Wilk wrote:
> On Thu, May 08, 2014 at 04:31:30PM +0100, Tim Deegan wrote:
> > Stage one of many in merging PVH and HVM code in the hypervisor.
> > 
> > This exposes a few new hypercalls to HVM guests, all of which were
> > already available to PVH ones:
> > 
> >  - XENMEM_memory_map / XENMEM_machine_memory_map / XENMEM_machphys_mapping:
> >    These are basically harmless, if a bit useless to plain HVM.
> > 
> >  - VCPUOP_send_nmi / VCPUOP_initialise / VCPUOP[_is]_up / VCPUOP_down
> >    This will eventually let HVM guests bring up APs the way PVH ones do.
> >    For now, the VCPUOP_initialise paths are still gated on is_pvh.
> 
> I had a similar patch to enable this under HVM and found that if
> the guest issues VCPUOP_send_nmi we get this in Linux:
> 
> [    3.611742] Corrupted low memory at c000fffc (fffc phys) = 00029b00
> [    2.386785] Corrupted low memory at ffff88000000fff8 (fff8 phys) = 2990000000000
> 
> http://mid.gmane.org/20140422183443.GA6817@xxxxxxxxxxxxxxxxxxx

Right, thanks.  Do you think that's likely to be a hypervisor bug, or
just a "don't do that then"?

AFAICT PVH domains need this as they have no other way of sending
NMIs.
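
For reference, this is roughly how a guest ends up issuing it.  A
minimal sketch, not taken from the patch, assuming the usual Linux
HYPERVISOR_vcpu_op() wrapper and the constants from Xen's public
vcpu.h interface:

    /*
     * Sketch: send an NMI to a remote vCPU from a Linux guest via the
     * vcpu_op hypercall.  VCPUOP_send_nmi takes no extra argument,
     * so the third parameter is NULL.
     */
    #include <xen/interface/vcpu.h>   /* VCPUOP_send_nmi */
    #include <asm/xen/hypercall.h>    /* HYPERVISOR_vcpu_op() */

    static int send_nmi_to_vcpu(unsigned int vcpu_id)
    {
            return HYPERVISOR_vcpu_op(VCPUOP_send_nmi, vcpu_id, NULL);
    }

Without that hypercall path, a PVH guest has no local APIC to program,
hence no other mechanism for delivering an NMI to another vCPU.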

Tim.
