
Re: [Xen-devel] [PATCH v3] libxl: allow to set more than 31 vcpus



On Fri, 2012-06-01 at 11:23 +0100, Dario Faggioli wrote:
> On Fri, 2012-06-01 at 10:41 +0100, Ian Campbell wrote: 
> > > Mmm... Maybe this is still related to the fact that on all the test
> > > boxes I've used, libxl_get_max_cpus() returns something higher than the
> > > actual physical CPU count of those boxes themselves, but I just created
> > > an 18-VCPU VM on my 16-PCPU test machine... I take the above to mean
> > > that you can't, can you?
> > 
> > I think libxl_get_max_cpus and/or libxl_cpumap_alloc involved some
> > amount of rounding up; if you tried to create a 33-vcpu guest on that
> > machine (or a machine with <= 32 cpus) it might not work...
> > 
> It does:
> 
>     max_cpus = libxl_get_max_cpus(ctx);
>     if (max_cpus == 0)
>         return ERROR_FAIL;
> 
>     sz = (max_cpus + 7) / 8;
> 
> So in my case it should be (16 + 7) / 8 = 23 / 8 = 2 ... Right?

Yeah, it seems we do this at byte rather than word granularity like I
first thought.
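
To spell out the rounding with some concrete numbers (an illustrative
sketch only, mirroring the snippet you quoted, not extra libxl code):

    /* Illustrative only: how the byte-granularity rounding in
     * libxl_cpumap_alloc() plays out for a few CPU counts. */
    #include <stdio.h>

    int main(void)
    {
        int counts[] = { 16, 17, 32, 33, 64 };
        int i;

        for (i = 0; i < 5; i++) {
            int max_cpus = counts[i];
            int sz = (max_cpus + 7) / 8;    /* bytes in the map */
            printf("max_cpus=%2d -> sz=%d bytes -> %2d addressable bits\n",
                   max_cpus, sz, sz * 8);
        }
        return 0;
    }

So on a box where libxl_get_max_cpus() reports 16 you get exactly 16
bits, with no slack at all.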

>  Then we have:
> 
>     cpumap->map = calloc(sz, sizeof(*cpumap->map));
> 
> Which makes me think I'm getting a 2-element uint8_t array for
> storing the cpumap (please correct me if I'm wrong, I frequently am when
> it comes to math! :-P). That's why I wasn't expecting to be able to
> exceed 16 VCPUs.
> 
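Right, and with a byte-granularity map, cpu N lives in byte N/8, bit
N%8, so a 2-byte map can only name cpus 0..15. A sketch of the idea
(not the real libxl_cpumap_* helpers):

    /* Sketch only (not the real libxl_cpumap helpers): with a
     * byte-granularity bitmap, cpu N is byte N/8, bit N%8, so a
     * 2-byte map can only describe cpus 0..15. */
    #include <stdint.h>

    typedef struct {
        uint32_t size;   /* number of bytes in map[] */
        uint8_t *map;
    } cpumap_sketch;

    static void cpumap_sketch_set(cpumap_sketch *m, int cpu)
    {
        if ((uint32_t)(cpu / 8) >= m->size)
            return;                       /* out of range: bit is lost */
        m->map[cpu / 8] |= 1 << (cpu % 8);
    }

    static int cpumap_sketch_test(const cpumap_sketch *m, int cpu)
    {
        if ((uint32_t)(cpu / 8) >= m->size)
            return 0;
        return !!(m->map[cpu / 8] & (1 << (cpu % 8)));
    }
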
> Anyway, I just tried 25 and 33 and 65, and creation of the domain worked
> without raising any errors! Then I double-checked, and saw that, in the
> 'above 16' cases, Xen deliberately paused a lot of VCPUs. Also, if I log
> into the guest, /proc/cpuinfo reports only CPUs 0 and 32 (and 64 in the
> 65-VCPU case).

All 64 online?

It might be that the issue being fixed here only manifests on HVM. I'm
not really sure how 64 would work otherwise since cur_vcpus in the IDL
is definitely an int, which is what needs fixing!

libxl__build_post is also probably buggy with max_vcpus >
nr-bits-in(cur_vcpus) and, from the looks of it, it just overflows off
the end(!), which is also fixed here...
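
For the record, a sketch of the 31-vcpu ceiling rather than the actual
IDL field or libxl__build_post code: bit i of a 32-bit int is supposed
to say "vcpu i is online", but that stops working long before 64.

    /* Sketch only: why an int can't describe which of 64 vcpus are
     * online.  1 << 31 overflows a signed int and 1 << i for i >= 32
     * is undefined behaviour, hence the "overflows off the end" above
     * and the more-than-31-vcpus limit this patch is lifting. */
    static int vcpu_online_sketch(int cur_vcpus, int i)
    {
        if (i > 30)
            return 0;    /* not representable in a 32-bit signed int */
        return !!(cur_vcpus & (1 << i));
    }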

> To conclude, I'm not sure what's going on, but I don't think it is
> something we would want... :-/ 
> 
> > > Maybe it is that *_max_cpus() logic that needs some attention? :-O
> > 
> > max_cpus returns the max number of physical cpus, and I think it does so
> > correctly (perhaps with some slop at the top end). 
> >
> As we also saw in another thread, it seems to return max_cpu_id+1,
> which is different from the number of physical CPUs (at least in my
> case). And in fact, I'm sure it returns 64 on my box. However, that does
> not appear to be the main issue here, as creation seems to succeed no
> matter how many VCPUs I ask for, but then a number of them are off. :-O
> 
> If that is a known/documented behaviour, fine, I just haven't found it.
> Otherwise, perhaps I can investigate a bit what's going on, if that is
> considered interesting...
> 
> > In some cases we want
> > to talk about virtual cpus and this change lets us size cpumap's of
> > virtual cpus more appropriately (be that larger or smaller than the
> > number of physical cpus).
> > 
> I have no argument against this. As I tried to explain, I thought
> 
> /* get max. number of cpus supported by hypervisor */
> int libxl_get_max_cpus(libxl_ctx *ctx);
> 
> "max. number of cpus supported by hypervisor" to be different from the
> actual number of physical processors, and I was sort-of misled by the
> machine I use to test Xen every day (where that is actually happening!).
> If it is not like that, I guess I can agree with you on this change.

It's certainly supposed to be "get max. number of physical cpus"; quite
how that relates to the actual number of physical cpus I'm not sure.

It's definitely not something to do with virtual cpus (for which there
is a limit, but not this one...)
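
For reference, I believe it boils down to asking the hypervisor for
max_cpu_id and adding one, which tracks the highest possible CPU id
rather than the number of CPUs actually present. Something along these
lines (paraphrased from memory, not the exact libxl/libxc code):

    /* Paraphrased sketch of what libxl_get_max_cpus() ends up doing via
     * libxc: query physinfo and return max_cpu_id + 1.  That is the
     * highest possible CPU id plus one, which can exceed the number of
     * CPUs actually present (hence 64 on a 16-pCPU box). */
    #include <xenctrl.h>

    int get_max_cpus_sketch(xc_interface *xch)
    {
        xc_physinfo_t physinfo = { 0 };

        if (xc_physinfo(xch, &physinfo) != 0)
            return 0;

        return physinfo.max_cpu_id + 1;
    }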

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

