
Re: [Xen-devel] PVH CPU hotplug design document



On Tue, Jan 17, 2017 at 02:12:59AM -0700, Jan Beulich wrote:
> >>> On 16.01.17 at 18:44, <roger.pau@xxxxxxxxxx> wrote:
> > On Mon, Jan 16, 2017 at 09:50:53AM -0700, Jan Beulich wrote:
> >> >>> On 16.01.17 at 17:31, <roger.pau@xxxxxxxxxx> wrote:
> >> > On Mon, Jan 16, 2017 at 09:09:55AM -0700, Jan Beulich wrote:
> >> >> >>> On 16.01.17 at 16:14, <roger.pau@xxxxxxxxxx> wrote:
> >> > This clearly isn't a requirement when doing PV vCPU hotplug, but it's a
> >> > violation of the spec (providing x2APIC entries without matching
> >> > processor objects), so I wouldn't be surprised if ACPICA or any other
> >> > ACPI implementation refuses to boot on systems with x2APIC entries but
> >> > no processor objects.
> >> 
> >> Good point, but what do you suggest short of declaring PVH v2 Dom0
> >> impossible to properly implement? I think that the idea of multiplexing
> >> ACPI for different purposes is simply going too far. For PV there's no
> >> such problem, as the Dom0 OS is expected to be aware that processor
> >> information coming from ACPI is not applicable to the view on CPUs it
> >> has (read: vCPU-s). And therefore, unless clean multiplexing is possible,
> >> I think PVH will need to retain this requirement (at which point there's
> >> no spec violation anymore).
> > 
> > But we definitely want to use ACPI to pass the boot vCPU information,
> > using the MADT for both DomU and Dom0.
> 
> Is that really set in stone?

If the PVH domain has access to an APIC and wants to use it, it must parse the
information from the MADT, or else it cannot get the APIC address or the APIC
IDs (you could guess those, since their positions are fairly standard, but
what's the point?).

> > Then for PVH DomU, using ACPI vCPU hotplug makes perfect sense: it
> > requires less Xen-specific code in the OS and it's fairly easy to
> > implement inside Xen/toolstack. But I understand that using different
> > methods for DomU vs Dom0 is very awkward. I still think that ACPI vCPU
> > hotplug for Dom0 is not so far-fetched, and that it could be doable.
> > 
> > Could we introduce a new CPUID flag to notify the guest of whether it should
> > expect ACPI vCPU hotplug or PV vCPU hotplug?
> 
> That would be an easy addition.

My proposal would be to advertise the use of PV vCPU hotplug, and to advertise
nothing when using ACPI vCPU hotplug.

> > I don't really like having Xen-specific checks inside of OSes, like
> > "it's a PVH guest", and then short-circuiting a bunch of native logic.
> > For example, the ACPICA ACPI shutdown hooks for Xen Dom0 never made it
> > upstream, and it's very hard for me to argue with the FreeBSD ACPICA
> > maintainer about why those are needed, and why he has to maintain a
> > patch on top of upstream ACPICA only for Xen.
> 
> I understand all those concerns, but we shouldn't replace one ugliness
> with another. I.e. without a reasonably clean concept of how to use
> ACPI here, I can't help thinking that the PV model is the cleaner one,
> despite the (little) extra code it requires in OSes.

Right. Do you agree to allow Boris' DomU ACPI CPU hotplug series to go in when
ready, and the PVH Dom0 series to continue using the same approach (MADT
entries for vCPUs, unmodified processor objects in the DSDT, PV hotplug for
vCPUs)?

Is there any way to get in touch with the ACPI folks, to see whether this can
be solved in a clean way using ACPI? I know that's not something that's going
to change in the near future, but maybe by bringing it up with them we can
make our lives easier down the road?

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
