[Xen-devel] Re: [PATCH 0 of 12] PV on HVM Xen
The subject should be [PATCH 0 of 11] PV on HVM Xen, sorry about that.

On Mon, 24 May 2010, Stefano Stabellini wrote:
> Hi all,
> this is another update of the PV on HVM Xen series that addresses
> Jeremy's comments.
> The platform_pci hooks have been removed; suspend/resume for HVM
> domains is now much more similar to the PV case and shares the same
> do_suspend function.
> The alloc_xen_mmio_hook has been removed as well: the memory allocation
> for the grant table is now done by the Xen platform PCI driver directly.
> The per_cpu xen_vcpu variable is set by a cpu_notifier function so that
> secondary vcpus have the variable set correctly no matter what the Xen
> features are on the host.
> The kernel command line option xen_unplug has been renamed to
> xen_emul_unplug and the code that makes use of it has been moved to a
> separate file (arch/x86/xen/platform-pci-unplug.c).
> xen_unplug_emulated_devices is now able to detect whether blkfront,
> netfront and the Xen platform PCI driver have been compiled, and sets
> the default value of xen_emul_unplug accordingly.
> The patch "Initialize xenbus device structs with ENODEV as default"
> has been removed from the series and will be sent separately.
> Finally, the comments on most of the patches have been improved.
>
> The series is based on 2.6.34 and supports Xen PV frontends running
> in an HVM domain, including netfront, blkfront and VIRQ_TIMER.
>
> In order to use VIRQ_TIMER and to improve performance, you need a
> patch to Xen that implements the vector callback mechanism for event
> channel delivery.
>
> A git tree is also available here:
>
> git://xenbits.xen.org/people/sstabellini/linux-pvhvm.git
>
> branch name 2.6.34-pvhvm-v2.
>
> Cheers,
>
> Stefano
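
For readers not familiar with the CPU notifier mechanism mentioned in the
cover letter above: the idea is that a notifier callback runs when a
secondary CPU is brought up, and that is the point where the per_cpu
xen_vcpu pointer gets filled in. Below is a minimal sketch of that pattern
against the 2.6.34 CPU hotplug API; the helper xen_vcpu_setup() and the
exact notifier action handled here are assumptions for illustration, not
necessarily what the series does.

#include <linux/init.h>
#include <linux/cpu.h>
#include <linux/notifier.h>

/* assumed helper: points per_cpu(xen_vcpu, cpu) at the right vcpu_info */
extern void xen_vcpu_setup(int cpu);

static int __cpuinit xen_hvm_cpu_notify(struct notifier_block *self,
					unsigned long action, void *hcpu)
{
	int cpu = (long)hcpu;

	switch (action) {
	case CPU_UP_PREPARE:
		/* runs before the secondary vcpu starts running, so
		 * per_cpu(xen_vcpu, cpu) is valid by the time it is used */
		xen_vcpu_setup(cpu);
		break;
	default:
		break;
	}
	return NOTIFY_OK;
}

static struct notifier_block __cpuinitdata xen_hvm_cpu_notifier = {
	.notifier_call = xen_hvm_cpu_notify,
};

/* called once from the PV-on-HVM init path */
void __init xen_hvm_smp_init(void)
{
	register_cpu_notifier(&xen_hvm_cpu_notifier);
}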
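
Similarly, the xen_emul_unplug command line handling and the compile-time
frontend detection mentioned above could look roughly like the sketch
below. The flag values, option keywords and CONFIG_* symbols are
assumptions for illustration; the real code lives in
arch/x86/xen/platform-pci-unplug.c in the series.

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/string.h>

/* illustrative flag values, not necessarily those used by the series */
#define XEN_UNPLUG_ALL_IDE_DISKS	1
#define XEN_UNPLUG_ALL_NICS		2
#define XEN_UNPLUG_ALL	(XEN_UNPLUG_ALL_IDE_DISKS | XEN_UNPLUG_ALL_NICS)

static int xen_emul_unplug;

static int __init parse_xen_emul_unplug(char *arg)
{
	char *p, *q;

	for (p = arg; p; p = q) {
		q = strchr(p, ',');
		if (q)
			*q++ = '\0';
		if (!strcmp(p, "all"))
			xen_emul_unplug |= XEN_UNPLUG_ALL;
		else if (!strcmp(p, "ide-disks"))
			xen_emul_unplug |= XEN_UNPLUG_ALL_IDE_DISKS;
		else if (!strcmp(p, "nics"))
			xen_emul_unplug |= XEN_UNPLUG_ALL_NICS;
		else
			printk(KERN_WARNING "unrecognised option '%s' in "
			       "parameter 'xen_emul_unplug'\n", p);
	}
	return 0;
}
early_param("xen_emul_unplug", parse_xen_emul_unplug);

void __init xen_unplug_emulated_devices(void)
{
	/* if the user did not ask for anything explicitly, only unplug
	 * emulated devices for which a PV frontend was actually built */
	if (!xen_emul_unplug) {
#if defined(CONFIG_XEN_BLKDEV_FRONTEND) || defined(CONFIG_XEN_BLKDEV_FRONTEND_MODULE)
		xen_emul_unplug |= XEN_UNPLUG_ALL_IDE_DISKS;
#endif
#if defined(CONFIG_XEN_NETDEV_FRONTEND) || defined(CONFIG_XEN_NETDEV_FRONTEND_MODULE)
		xen_emul_unplug |= XEN_UNPLUG_ALL_NICS;
#endif
	}
	/* ... then tell the emulated platform which devices to unplug ... */
}

With something along these lines, booting with xen_emul_unplug=ide-disks
would request unplugging only the emulated disks, while the default
behaviour would be driven by which PV frontends were compiled in.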