
Re: [Xen-devel] PVH CPU hotplug design document



>>> On 17.01.17 at 15:13, <roger.pau@xxxxxxxxxx> wrote:
> On Tue, Jan 17, 2017 at 05:33:41AM -0700, Jan Beulich wrote:
>> >>> On 17.01.17 at 12:43, <roger.pau@xxxxxxxxxx> wrote:
>> > If the PVH domain has access to an APIC and wants to use it, it must parse
>> > the info from the MADT, or else it cannot get the APIC address or the APIC
>> > ID (you could guess those, since their position is quite standard, but
>> > what's the point?)
>> 
>> There's always the option of obtaining needed information via hypercall.
> 
> I think we should avoid that and instead use ACPI only, or else we are
> duplicating the information provided in ACPI using another interface, which is
> pointless IMHO.
> 
> There's only one kind of PVHv2 guest that doesn't require ACPI, and that guest
> type also doesn't have emulated local APICs. We agreed that this model was
> interesting for things like unikernel DomUs, but that's the only reason why
> we are providing it. Not that full OSes couldn't use it, but it seems
> pointless.

Your writing things this way makes me notice another possible design
issue here: requiring ACPI is a bad thing imo, with even bare hardware
going in different directions for at least some use cases (SFI being one
example). Hence I think ACPI should, as on bare hardware, remain
an optional thing. That in turn requires _all_ information obtained from
ACPI (if available) to also be available another way, and this other
way might be hypercalls in our case.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
