Re: [Xen-devel] [HYBRID]: status update...
On 01/08/12 23:34, Mukesh Rathor wrote:
> On Wed, 1 Aug 2012 16:25:01 +0100 George Dunlap <George.Dunlap@xxxxxxxxxxxxx> wrote:
>> I hope this isn't bikeshedding; but I don't like "Hybrid" as a name for this feature, mainly for "marketing" reasons. I think it will probably give people the wrong idea about what the technology does. PV domains are one of Xen's really distinct advantages -- a much simpler interface, lighter weight (no qemu, no legacy boot), &c &c. As I understand it, the mode you've been calling "hybrid" still has all of these advantages -- it just uses some of the HVM hardware extensions to make the interface even simpler / faster. I'm afraid "hybrid" may be seen as, "Even Xen has had to give up on PV."
>>
>> Can I suggest something like "PVH" instead? That (at least to me) makes it clear that PV domains are still fully PV, but just use some HVM extensions. Thoughts?
>
> Hi George,
>
> We gave some thought to the name. I figured pure PV will be around for a while at least. So there's PV on one side and HVM on the other, and hybrid somewhere in between.

I understand the idea, but I think it's not very accurate. I would call Stefano's "PVHVM" stuff hybrid -- it has the legacy boot and emulated devices, but uses the PV interfaces extensively for event delivery. The mode you're working on is too far towards the "PV" side to be called "hybrid". (And as we've seen, the term has already confused people, who interpreted it as basically PVHVM.)

In general, I think "PV" should mean, "Doesn't use legacy boot, doesn't need emulated devices". So I don't think "PVH" places any limitations on what particular subset of HVM hardware you use. For things that specifically depend on knowing whether guest PTs are using mfns or gpfns, I think we should have checks for specific things -- for instance, "xen_mm_translate()" or something like that.

> The issue with "PV in HVM" is that it limits PV to the HVM container only. The vision I had was that hybrid, a PV ops kernel that is somewhere in between PV and HVM, could be configured with options. So, one could run hybrid with, say, EPT off (although that won't be supported anymore). A generic name like hybrid allows it to stay flexible in future, instead of being confined to something specific.

I suppose a PV guest could just be started with various options. Also, don't confuse EPT (which is Intel-specific) with HAP (which is the generic term for either EPT or RVI); and don't confuse either of those with what is called "translate" mode. Translate mode (where Xen translates the guest PTs from gpfns to mfns) can be done either with HAP or with shadow; and given the performance issues HAP has with certain workloads, we need to make sure that the HVM container mode can use both.

> As for the name in the code, 'pvh' was confusing, as PVHVM is now routinely used to refer to HVM with PV drivers. 'hpv', for HVM/hybrid PV -- well, that's a certain virus ;). So I just used hybrid in the code to refer to a PV guest that runs in an HVM container. I suppose I could change the flag to pv_in_hvm or something.

But is "pvhvm" ever actually used in the code? If not, it's not a problem.

Actually, perhaps it would be better in any case, rather than having checks for "pvh" mode, to have checks for specific things -- e.g., is translation on or off (i.e., are we running in classic PV mode, or with HAP)? I'm not sure about the other things you're doing with HVM, but it should be possible to come up with a descriptive name to use in the code for those options, rather than limiting to a specific mode.
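To illustrate the kind of per-feature check meant here -- a rough sketch only, not tree code: xen_mm_translate() is just the placeholder name suggested above, wrapped around Linux's existing xen_feature() / XENFEAT_auto_translated_physmap interface:

    #include <linux/types.h>
    #include <xen/features.h>

    /*
     * Sketch: test the specific property ("are guest page tables
     * translated, i.e. do PTEs hold gpfns rather than mfns?") instead
     * of a catch-all "hybrid"/"pvh" guest-type flag.  The feature bit
     * is the existing one; the wrapper name is the placeholder
     * suggested above.
     */
    static inline bool xen_mm_translate(void)
    {
            return xen_feature(XENFEAT_auto_translated_physmap);
    }

Code that manipulates PTEs would then ask xen_mm_translate() rather than "is this a hybrid guest?", and would keep working whichever container mode the guest booted in.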
In ancient days, there used to be options, both within Xen and within the classic Xen kernel, to run a PV guest in fully-translated mode (i.e., the guest PTs contained gpfns, not mfns), and "shadow_translate" was a mode used across guest types (both PV and HVM) to determine whether this was the case or not.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel