Re: [Xen-devel] [PATCH v1 2/2] libxl: vcpu-set - allow to decrease vcpu count on overcommitted guests (v2)
On Fri, Jun 06, 2014 at 10:07:37AM +0100, Ian Campbell wrote:
> On Thu, 2014-06-05 at 13:44 -0400, Konrad Rzeszutek Wilk wrote:
> > > > - /* NB: This also limits how many are set in the bitmap */
> > > > - max_vcpus = (max_vcpus > host_cpu ? host_cpu : max_vcpus);
> > >
> > > Where did this go?
> >
> > No need for it actually. As we already do the action if 'max_vcpus >
> > host_cpu' - which is that we return. So in essence that code will set
> > max_vcpus to max_vcpus.
>
> What about if dominfo.vcpu_online > max_vcpus? In that case the
> max_vcpus > host_cpu check doesn't occur.
Let me split that change out into a separate patch. But in case you
do remember this conversation: bypassing that check is exactly the
purpose of this patch.
>
> You could be in this state if someone had previously forced overcommit I
> think.
Right, or the guest was constructed with values greater than the number
of pCPUs. Since I am sure you don't remember the context of this patch,
I am resending the patches here (they have grown to four).
Let me rehash what we had set in stone back in 4.4:
- The guest config ('maxvcpus') is permitted to be greater than the pCPUs.
  Ditto for the initially allocated ('vcpus') count. It is also OK for
  them to differ - 'vcpus' < 'maxvcpus', etc.
- If 'vcpus' < pCPUs and we want to increase it above pCPUs, we should
  error out and print a warning telling the user to use --ignore-host.
  This holds regardless of dominfo.max_vcpu_id - so even if max_vcpu_id
  is greater than pCPUs while 'vcpus' < pCPUs, we should still warn the
  user when increasing.
- If 'vcpus' > pCPUs and we want to decrease it to be below pCPUs, we
  should do that without the warning.
  (This is what the patch was fixing.)
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel