Re: [Xen-devel] [PATCH RFC v12 05/21] Introduce pv guest type and has_hvm_container macros
On 18/09/13 14:46, Jan Beulich wrote:
> On 13.09.13 at 18:25, George Dunlap <george.dunlap@xxxxxxxxxxxxx> wrote:
>> @@ -72,7 +72,7 @@ void *map_domain_page(unsigned long mfn)
>>  #endif
>>
>>      v = mapcache_current_vcpu();
>> -    if ( !v || is_hvm_vcpu(v) )
>> +    if ( !v || has_hvm_container_vcpu(v) )
>>          return mfn_to_virt(mfn);
>>
>>      dcache = &v->domain->arch.pv_domain.mapcache;
>>
>> @@ -177,7 +177,7 @@ void unmap_domain_page(const void *ptr)
>>      ASSERT(va >= MAPCACHE_VIRT_START && va < MAPCACHE_VIRT_END);
>>
>>      v = mapcache_current_vcpu();
>> -    ASSERT(v && !is_hvm_vcpu(v));
>> +    ASSERT(v && is_pv_vcpu(v));
>
> This is inconsistent with the above change to map_domain_page() and the
> changes later in this file.

You mean, making this "is_pv_vcpu" instead of "!has_hvm_container_vcpu"?
I guess that's true.

>> @@ -1237,7 +1237,7 @@ void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c)
>>      bool_t compat = is_pv_32on64_domain(v->domain);
>>  #define c(fld) (!compat ? (c.nat->fld) : (c.cmp->fld))
>>
>> -    if ( is_hvm_vcpu(v) )
>> +    if ( has_hvm_container_vcpu(v) )
>>          memset(c.nat, 0, sizeof(*c.nat));
>>      memcpy(&c.nat->fpu_ctxt, v->arch.fpu_ctxt, sizeof(c.nat->fpu_ctxt));
>
> I think best would be to drop the condition here altogether.

Ack.

>> --- a/xen/arch/x86/traps.c
>> +++ b/xen/arch/x86/traps.c
>> @@ -119,6 +119,7 @@ static void show_guest_stack(struct vcpu *v, struct cpu_user_regs *regs)
>>      unsigned long *stack, addr;
>>      unsigned long mask = STACK_SIZE;
>>
>> +    /* Avoid HVM as we don't know what the stack looks like */
>>      if ( is_hvm_vcpu(v) )
>
> So why do you not change this one to !is_pv_vcpu()? In particular the
> do_page_walk() further down is inherently PV.

I guess it depends. I was thinking that the reason we didn't do this for
HVM guests was that there was no guarantee what the stack looked like --
it looks like another aspect is that, for VMs that are not current, whether
the PTs are controlled by Xen or by the guest is a factor.

But if we gate do_page_walk() on is_pv_vcpu(), then it seems like it should
work when v == current, and fail gracefully when v != current.  (A toy
sketch of that control flow is appended at the end of this mail.)

>> @@ -732,8 +736,12 @@ void watchdog_domain_destroy(struct domain *d);
>>
>>  #define VM_ASSIST(_d,_t) (test_bit((_t), &(_d)->vm_assist))
>>
>> -#define is_hvm_domain(d) ((d)->is_hvm)
>> +#define is_pv_domain(d) ((d)->guest_type == guest_type_pv)
>> +#define is_pv_vcpu(v) (is_pv_domain((v)->domain))
>> +#define is_hvm_domain(d) ((d)->guest_type == guest_type_hvm)
>>  #define is_hvm_vcpu(v) (is_hvm_domain(v->domain))
>> +#define has_hvm_container_domain(d) ((d)->guest_type != guest_type_pv)
>> +#define has_hvm_container_vcpu(v) (has_hvm_container_domain((v)->domain))
>
> So in the end is_pv_...() = !has_hvm_container_...(), i.e. I don't really
> see the point in having both.

Yes, is_pv <=> !has_hvm_container. My original goal in having two different
names was to hint to the coder whether the check had something to do with
PV functionality or with HVM functionality.

For example, in arch_set_info_guest(), at the top, it uses "is_pv" because
the various selectors are only fixed up for PV guests; in arch_vcpu_reset()
it's "is_pv" because it relates to destroying PV-only structures. Whereas
in, say, domain_relinquish_resources(), the call to
hvm_domain_relinquish_resources() is gated on has_hvm_container, because it
has to do with HVM structures; and in vmcs_dump() it's "!has_hvm_container"
instead of "is_pv" for the same reason.

I can see now that this hasn't been applied consistently; for one, in many
places the patch still assumes only two alternatives.
And in the map_domain_page() functions, it looks like it should be "!is_pv",
since it appears that only PV guests have per-domain mapcache structures.

It still makes more sense to me to have two different labels, because to me
they don't seem like the same thing: taking a codepath because you *don't*
have PV structures is different from taking a codepath because you *do* have
HVM structures, even if at the moment the two happen to coincide 100% of
the time.

 -George
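To make the naming distinction concrete, below is a minimal, self-contained
sketch of the predicates from the sched.h hunk above. The simplified struct
definitions and the third guest_type value (guest_type_pvh) are assumptions
for illustration rather than anything defined in this patch:

/*
 * Standalone sketch (not Xen code): simplified domain/vcpu types plus the
 * predicate macros from the hunk above.  guest_type_pvh is a hypothetical
 * third guest type added here only to illustrate the point.
 */
#include <stdio.h>

enum guest_type { guest_type_pv, guest_type_hvm, guest_type_pvh };

struct domain { enum guest_type guest_type; };
struct vcpu   { struct domain *domain; };

#define is_pv_domain(d)             ((d)->guest_type == guest_type_pv)
#define is_pv_vcpu(v)               (is_pv_domain((v)->domain))
#define is_hvm_domain(d)            ((d)->guest_type == guest_type_hvm)
#define is_hvm_vcpu(v)              (is_hvm_domain((v)->domain))
#define has_hvm_container_domain(d) ((d)->guest_type != guest_type_pv)
#define has_hvm_container_vcpu(v)   (has_hvm_container_domain((v)->domain))

static void show(const char *name, struct domain *d)
{
    printf("%-3s: is_pv=%d is_hvm=%d has_hvm_container=%d\n",
           name, is_pv_domain(d), is_hvm_domain(d),
           has_hvm_container_domain(d));
}

int main(void)
{
    struct domain pv  = { .guest_type = guest_type_pv  };
    struct domain hvm = { .guest_type = guest_type_hvm };
    struct domain pvh = { .guest_type = guest_type_pvh };  /* hypothetical */

    show("pv",  &pv);   /* is_pv=1 is_hvm=0 has_hvm_container=0 */
    show("hvm", &hvm);  /* is_pv=0 is_hvm=1 has_hvm_container=1 */
    show("pvh", &pvh);  /* is_pv=0 is_hvm=0 has_hvm_container=1 */
    return 0;
}

Note that is_pv() stays the exact complement of has_hvm_container() no
matter how many guest types there are, which is Jan's point; the argument
for keeping both names is purely about signalling intent at the call site.
What does change with a hypothetical third type is that is_hvm() and
has_hvm_container() stop coinciding.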
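And, as mentioned above, a toy model of the show_guest_stack() /
do_page_walk() control flow being discussed. Everything here is a
simplified stand-in (the real Xen functions have different signatures and
semantics); it only illustrates gating the walk on is_pv_vcpu() and failing
gracefully for a vcpu that is not current:

/* Toy model, not Xen code: types and helpers are simplified stand-ins. */
#include <stdio.h>
#include <stddef.h>

enum guest_type { guest_type_pv, guest_type_hvm };
struct domain { enum guest_type guest_type; };
struct vcpu   { struct domain *domain; int id; };

#define is_pv_vcpu(v) ((v)->domain->guest_type == guest_type_pv)

static struct vcpu *current_vcpu;   /* stand-in for Xen's "current" */

/* Stand-in for do_page_walk(): pretend the walk only succeeds for the
 * currently running PV vcpu and fails (returns NULL) otherwise. */
static void *do_page_walk(struct vcpu *v, unsigned long addr)
{
    if ( !is_pv_vcpu(v) || v != current_vcpu )
        return NULL;
    return (void *)addr;             /* pretend-mapped address */
}

static void show_guest_stack(struct vcpu *v, unsigned long stack_addr)
{
    void *p;

    /* Avoid non-PV guests: we don't know what their stack looks like. */
    if ( !is_pv_vcpu(v) )
        return;

    p = do_page_walk(v, stack_addr);
    if ( p == NULL )
    {
        printf("vcpu%d: stack not accessible, skipping dump\n", v->id);
        return;
    }
    printf("vcpu%d: would dump stack at %p\n", v->id, p);
}

int main(void)
{
    struct domain pv_dom = { .guest_type = guest_type_pv };
    struct vcpu v0 = { .domain = &pv_dom, .id = 0 };
    struct vcpu v1 = { .domain = &pv_dom, .id = 1 };

    current_vcpu = &v0;
    show_guest_stack(&v0, 0x1000);   /* current: walk "succeeds" */
    show_guest_stack(&v1, 0x1000);   /* not current: fails gracefully */
    return 0;
}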