[Xen-devel] RE: Xen: Hybrid extension patchset for hypervisor
Keir Fraser wrote on Wed, 16 Sep 2009 at 07:04:10:

> On 16/09/2009 10:08, "Keir Fraser" <keir.fraser@xxxxxxxxxxxxx> wrote:
>
>> The principle is okay I guess. These changes would have to be trickled
>> in with a really good explanation and justification for each one. For
>> example, I'm not clear why the enable-hybrid hypercall is needed. Why
>> not just provide access to evtchn and timer hypercalls always, and let
>> the guest use them if it is capable of it? I'm also not sure why PV
>> timer events get routed to irq0 -- why not via an event channel as
>> usual, now that you are enabling HVM guests to use the evtchn
>> subsystem? What's a hybrid gnttab, and why does it need an explicit
>> reserved e820 region? And so on.
>>
>> The general principle of these patches seems to be to create a set of
>> individual, and perhaps largely independent,
>> accelerations/enlightenments to the HVM interface. I can at least
>> agree with and support that aim.
>
> By the way, if your intention is to speed up 64-bit guest performance,
> then I think you should compare with running a full PV guest in a VMCS
> container. That is, it runs in VMX non-root mode but still retains the
> usual full-PV interfaces. I think that would be no more code than you
> are proposing here, and would avoid scattering a bunch more code
> around the guest OS, to which there is bound to be resistance.

Do you mean running the existing 64-bit PV kernel binaries in a VMCS
container? Based on our data, what we would want in PV 64-bit guests is,
fundamentally:

- have the kernel run in ring 0 (so that it can regain the associated
  performance benefits)
- use hardware-based MMU virtualization (e.g. EPT) if present

>
> -- Keir
>

Jun
___
Intel Open Source Technology Center
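
[Editorial sketch] To make the "provide the evtchn and timer hypercalls
unconditionally, and let a capable guest use them" model discussed above
concrete, here is a minimal sketch -- not taken from the patchset -- of how
an HVM guest could detect Xen and install the hypercall page through the
documented CPUID 0x40000000 / MSR interface. The names xen_hvm_init and
hypercall_page, and the assumption of an identity guest-physical mapping,
are illustrative only.

/*
 * Hedged sketch only: an HVM guest probing for Xen and installing the
 * hypercall page via the documented CPUID 0x40000000 / MSR interface.
 * Names such as xen_hvm_init and hypercall_page are illustrative, and
 * the cast below assumes an identity guest-physical mapping.
 */
#include <stdint.h>

static char hypercall_page[4096] __attribute__((aligned(4096)));

static void cpuid(uint32_t leaf, uint32_t *a, uint32_t *b,
                  uint32_t *c, uint32_t *d)
{
    __asm__ __volatile__("cpuid"
                         : "=a"(*a), "=b"(*b), "=c"(*c), "=d"(*d)
                         : "0"(leaf), "2"(0));
}

static void wrmsr(uint32_t msr, uint64_t val)
{
    __asm__ __volatile__("wrmsr"
                         : : "c"(msr), "a"((uint32_t)val),
                             "d"((uint32_t)(val >> 32)));
}

/* Returns 1 if Xen was found and the hypercall page was installed. */
static int xen_hvm_init(void)
{
    uint32_t eax, ebx, ecx, edx, base;

    /* Xen advertises itself in the 0x40000000-0x4000ff00 leaf range;
     * the signature string is "XenVMMXenVMM" in EBX/ECX/EDX. */
    for (base = 0x40000000; base < 0x40010000; base += 0x100) {
        cpuid(base, &eax, &ebx, &ecx, &edx);
        if (ebx == 0x566e6558 && ecx == 0x65584d4d && edx == 0x4d4d566e)
            break;
    }
    if (base >= 0x40010000)
        return 0;

    /* Leaf base+2: EAX = number of hypercall pages, EBX = the MSR that
     * installs them. */
    cpuid(base + 2, &eax, &ebx, &ecx, &edx);

    /* Writing the page's guest-physical address to that MSR makes Xen
     * fill it with hypercall stubs; after this the guest can issue
     * e.g. event_channel_op or set_timer_op whenever it is capable. */
    wrmsr(ebx, (uint64_t)(uintptr_t)hypercall_page);

    return 1;
}

Once the hypercall page is in place the guest can issue event-channel and
timer hypercalls directly, so a capable guest can opt in simply by using
them; nothing in this detection path appears to require a separate
enable-hybrid hypercall.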