Re: [Xen-devel] [PATCH 7/9] xen: use domid check in is_hardware_domain
>>> On 12.04.13 at 16:11, Daniel De Graaf <dgdegra@xxxxxxxxxxxxx> wrote:
> On 04/12/2013 04:00 AM, Jan Beulich wrote:
>>>>> On 11.04.13 at 22:13, Daniel De Graaf <dgdegra@xxxxxxxxxxxxx> wrote:
>>> Instead of checking is_privileged to determine if a domain should
>>> control the hardware, check that the domain_id is equal to zero (which
>>> is currently the only domain for which is_privileged is true). This
>>> allows other places where domain_id is checked for zero to be replaced
>>> with is_hardware_domain.
>>
>> Isn't this moving in the wrong direction? I.e. wouldn't you rather
>> want to see the domid == 0 checks go away, rather than replacing
>> the ->is_privileged one in is_hardware_domain()?
>
> I don't have a strong opinion on the difference between (domid == 0) and
> ->is_privileged; I chose domid == 0 in part because more of the changes
> I was making were already comparing domid, and in part because it makes
> changing "0" to a variable hardware domain ID simpler. (I have working
> patches that do this, but they aren't yet ready for upstreaming).
>
> Either way it is done, some of the code paths controlled by this check
> can only be allowed to one Linux domain at a time - the PCI accesses in
> traps.c are the most obvious example of this, since multiple Linux domains
> will all try to access the PCI config space and step on one another.

Right. But my point was that a change like this should make things more
clear and easier to change in the future (ideally without having to touch
all the same places again that you touch now). And I don't think either of
these is being achieved with the way it is done.

>>> --- a/xen/common/cpupool.c
>>> +++ b/xen/common/cpupool.c
>>> @@ -546,13 +546,16 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op)
>>>      {
>>>          struct domain *d;
>>>
>>> -        ret = -EINVAL;
>>> -        if ( op->domid == 0 )
>>> -            break;
>>>          ret = -ESRCH;
>>>          d = rcu_lock_domain_by_id(op->domid);
>>>          if ( d == NULL )
>>>              break;
>>> +        if ( is_hardware_domain(d) )
>>
>> Here this is surely wrong - the operation has nothing to do with
>> hardware control. This would need something like is_control_domain().
>>
>> Which calls into question the earlier changes in this series: Wouldn't
>> you be better off having is_control_domain() resolving to
>> ->is_privileged and is_hardware_domain() resolving to domid == 0
>> (for the time being), and use the two according to the respective
>> contexts?
>
> That is what I was aiming for, but with is_control_domain not needed due to
> every occurrence being covered by an XSM hook. I assumed that the reason
> this sysctl was not allowed on dom0 was to avoid moving the possibly-pinned
> hardware domain's CPUs; should this instead be rcu_lock_remote_domain_by_id
> to avoid a domain moving itself, or is there another reason why a control
> domain's cpupool should not be changed?

Yes, afaiu the main point here is to disallow a domain from affecting
itself (due to deadlock potential). You may want to double check with
Juergen though.

>>> +        {
>>> +            ret = -EINVAL;
>>> +            rcu_unlock_domain(d);
>>> +            break;
>>> +        }
>>>          if ( d->cpupool == NULL )
>>>          {
>>>              ret = -EINVAL;
>>> --- a/xen/common/domain.c
>>> +++ b/xen/common/domain.c
>>> @@ -235,7 +235,7 @@ struct domain *domain_create(
>>>      if ( domcr_flags & DOMCRF_hvm )
>>>          d->is_hvm = 1;
>>>
>>> -    if ( domid == 0 )
>>> +    if ( is_hardware_domain(d) )
>>
>> And this one isn't hardware related either. While the subsequent ones
>> both are.
>
> I had assumed this one was hardware related: we can't migrate the hardware
> domain, and pinning its vcpus makes sense for things like MSRs or frequency
> control by that domain. I don't see any reason to pin the vcpus of a control
> domain, and while I doubt it is useful to migrate it, there is no reason to
> forbid migration, unlike for the hardware domain.

Hmm, right, looks like I didn't pay enough attention to the patch context,
and judged merely by the function you change.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
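For context, a minimal sketch of the split suggested above (is_control_domain()
resolving to ->is_privileged, is_hardware_domain() resolving to a domain-ID
check for the time being) could look like the following. This is an editor's
illustration under those assumptions, not code from the posted series; the
struct domain fields are the ones in xen/include/xen/sched.h of that era, and
the exact placement and signatures of the helpers are assumptions.

    /*
     * Sketch only: two separate predicates, to be used according to
     * context.  The control-domain check keeps resolving to the
     * is_privileged flag, while the hardware-domain check compares the
     * domain ID against zero for now, so the two can diverge later
     * (e.g. once the hardware domain ID becomes configurable).
     */
    static inline bool_t is_control_domain(const struct domain *d)
    {
        return d->is_privileged;
    }

    static inline bool_t is_hardware_domain(const struct domain *d)
    {
        return d->domain_id == 0;
    }

With such a split, domain_create() and the PCI config-space paths would keep
using is_hardware_domain(), while whether the cpupool sysctl path then relies
on is_control_domain(), an XSM hook, or rcu_lock_remote_domain_by_id() to stop
a domain from moving itself is exactly the open question in the thread.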