
Re: [Xen-devel] [PATCH v5 1/4] xen/libxc: Allow changes to hypervisor CPUID leaf from config file



On 03/19/2014 05:29 AM, Ian Campbell wrote:
> On Wed, 2014-03-19 at 09:26 +0000, Jan Beulich wrote:
> > On 19.03.14 at 01:58, Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx> wrote:
> > > Currently only "real" cpuid leaves can be overwritten by users via the
> > > 'cpuid' option in the configuration file. This patch provides the ability
> > > to do the same for hypervisor leaves (but for now only 0x40000000 is
> > > allowed).
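For context, the "hypervisor leaves" discussed here are the CPUID leaves starting at the 0x40000000 base that Xen exposes to guests: EAX of that leaf reports the highest hypervisor leaf and EBX/ECX/EDX carry the "XenVMMXenVMM" signature. A minimal sketch of probing them from inside a guest, assuming gcc/clang on x86 (a viridian-enabled HVM guest may have the Xen leaves at a higher base, which is ignored here):

/* Probe hypervisor CPUID leaf 0x40000000 from inside a guest.
 * On Xen this returns the "XenVMMXenVMM" signature in EBX/ECX/EDX
 * and the highest hypervisor leaf in EAX; on bare metal the values
 * are undefined. */
#include <cpuid.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    char sig[13];

    __cpuid(0x40000000, eax, ebx, ecx, edx);

    memcpy(sig + 0, &ebx, 4);
    memcpy(sig + 4, &ecx, 4);
    memcpy(sig + 8, &edx, 4);
    sig[12] = '\0';

    printf("signature: %s, max hypervisor leaf: 0x%08x\n", sig, eax);
    return 0;
}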
> > I'm still not really happy with the approach:
> >
> > > --- a/xen/arch/x86/traps.c
> > > +++ b/xen/arch/x86/traps.c
> > > @@ -690,6 +690,9 @@ int cpuid_hypervisor_leaves( uint32_t idx, uint32_t sub_idx,
> > >      if ( idx > limit )
> > >          return 0;
> > > +    if ( domain_cpuid_exists(d, base + idx, sub_idx, eax, ebx, ecx, edx) )
> > > +        return 1;
> >
> > You basically rely here on the tools having fed back the values the
> > hypervisor returned to them. What if this gets tweaked at some
> > point based on some other domain characteristics? I'd really like to
> > see the hypervisor, and only the hypervisor, being in charge of
> > returning the hypervisor leaf values. The sole restriction the tools
> > may place ought to be on the maximum leaf.
> That does sound like it would simplify things.


I'll move the enforcement of cpuid values to cpuid_hypervisor_leaves() then. It will also address (to some extent) the comment below.

-boris
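(Purely as illustration of the model discussed above, not the actual patch: the sketch below assumes a hypothetical per-domain field, here called hv_max_leaf, set by the toolstack. The hypervisor keeps computing the leaf contents itself; the only thing the tools can do is lower the number of leaves exposed.)

/* Sketch only: hv_max_leaf is a hypothetical field, everything else
 * stays under hypervisor control.  Base/offset handling is elided. */
int cpuid_hypervisor_leaves(uint32_t idx, uint32_t sub_idx,
                            uint32_t *eax, uint32_t *ebx,
                            uint32_t *ecx, uint32_t *edx)
{
    struct domain *d = current->domain;
    uint32_t limit = XEN_CPUID_MAX_NUM_LEAVES;

    /* Tools may only lower the limit, never change leaf contents. */
    if ( d->arch.hv_max_leaf && d->arch.hv_max_leaf < limit )
        limit = d->arch.hv_max_leaf;          /* hypothetical field */

    if ( idx > limit )
        return 0;

    /* ... existing per-leaf code fills *eax/*ebx/*ecx/*edx ... */
    return 1;
}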

> > Sadly I didn't see anyone else commenting on this model of yours,
> > so I can't judge whether I'm the only one having reservations
> > against the model you have been proposing so far.
>
> Well, the libxc side is certainly very complicated if all which is
> needed is the ability to limit the maximum hypervisor leaf which is
> exposed. That ought to be a dozen lines max across tools and hypervisor
> I would have said.
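(Equally illustrative: under that model the tools side could indeed shrink to a single small helper around a new domctl. The names below, xc_domain_set_max_hv_leaf and XEN_DOMCTL_set_max_hv_leaf and its payload, are hypothetical, not an existing interface.)

/* Hypothetical libxc helper: neither the domctl subop nor this function
 * exists; it only illustrates how small the tools side could be. */
int xc_domain_set_max_hv_leaf(xc_interface *xch, uint32_t domid,
                              uint32_t max_leaf)
{
    DECLARE_DOMCTL;

    domctl.cmd = XEN_DOMCTL_set_max_hv_leaf;       /* hypothetical */
    domctl.domain = domid;
    domctl.u.set_max_hv_leaf.max_leaf = max_leaf;  /* hypothetical */

    return do_domctl(xch, &domctl);
}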




 

