
Re: [Xen-devel] [PATCH v2 08/11] pvh/acpi: Handle ACPI accesses for PVH guests



On 17/11/2016 00:00, Boris Ostrovsky wrote:
>> When we want to enable ACPI vcpu hotplug for HVM guests, 
>>>> What do you mean by "when"? We *are* doing ACPI hotplug for HVM guests,
>>>> aren't we?
>>> Are we?  If so, how?
>>>
>>> I don't see any toolstack or qemu code able to cope with ACPI CPU
>>> hotplug.  I can definitely see ACPI PCI hotplug in qemu, but that does
>>> make sense.
>> piix4_acpi_system_hot_add_init():
>>    acpi_cpu_hotplug_init(parent, OBJECT(s), &s->gpe_cpu,
>>                             PIIX4_CPU_HOTPLUG_IO_BASE);
>>
>>
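(To illustrate what that call registers: the legacy PIIX4 CPU-hotplug
interface amounts to an I/O range exposing a bitmap of present VCPUs,
which guest AML rescans, and Notify()s from, after an SCI.  A minimal
standalone sketch of the idea -- the names and structure below are
illustrative, not QEMU's actual implementation:)

#include <stdint.h>

#define CPU_HOTPLUG_IO_LEN 32        /* 32 bytes => up to 256 VCPUs */

struct cpu_hotplug_state {
    uint8_t sts[CPU_HOTPLUG_IO_LEN]; /* bit (n % 8) of byte (n / 8) = VCPU n */
};

/* Read handler for the emulated I/O range: each offset yields one
 * byte of the presence bitmap. */
static uint8_t cpu_status_read(const struct cpu_hotplug_state *s,
                               unsigned int off)
{
    return off < CPU_HOTPLUG_IO_LEN ? s->sts[off] : 0;
}

/* Plugging VCPU n sets its bit; the device model then asserts SCI
 * (via a GPE) so the guest rescans the bitmap. */
static void cpu_plug(struct cpu_hotplug_state *s, unsigned int n)
{
    s->sts[n / 8] |= (uint8_t)(1u << (n % 8));
    /* ...raise the SCI here... */
}
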
>>>> Or are you thinking about moving this functionality to the hypervisor?
>>> As an aside, we need to move some part of PCI hotplug into the
>>> hypervisor longterm.  At the moment, any new entity coming along and
>>> attaching to an ioreq server still needs to negotiate with Qemu to make
>>> the device appear.  This is awkward but doable if all device models are
>>> in dom0, but is far harder if the device models are in different domains.
>>>
>>> As for CPU hotplug, (if I have indeed overlooked something), Qemu has no
>>> business in this matter. 
>> Yes. And if we are going to do it for PVH we might as well do it for HVM
>> --- I think most of the code will be the same, save for how SCI is sent.
>
> So I discovered that we actually cannot unplug an HVM VCPU with qemu;
> there is no support for that via QMP (which is what we use).
>
> 'xl vcpu-set <domid> N' is a nop when we unplug.
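
(For context on why that is a nop: for PV-aware guests, vcpu-set works
by flipping cpu/<n>/availability in xenstore and letting the guest's
xenbus driver react; HVM guests would need the ACPI/QMP path which, per
the above, cannot unplug.  A hedged sketch of the xenstore mechanism
using libxenstore -- the helper name and error handling below are
illustrative, not libxl's actual code:)

#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <xenstore.h>

/* Toggle the node a PV-aware guest watches for VCPU plug/unplug.
 * Illustrative only; libxl performs the equivalent write internally. */
static int set_vcpu_availability(int domid, int vcpu, bool online)
{
    struct xs_handle *xsh = xs_open(0);
    char path[64];
    const char *val = online ? "online" : "offline";
    bool ok;

    if (!xsh)
        return -1;

    snprintf(path, sizeof(path),
             "/local/domain/%d/cpu/%d/availability", domid, vcpu);
    ok = xs_write(xsh, XBT_NULL, path, val, strlen(val));
    xs_close(xsh);
    return ok ? 0 : -1;
}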

Lovely!

Sounds like an even better reason to implement it properly when someone
has some TUITs.

Anyway, so long as the PVH implementation is clean enough to reuse when
someone gets time to retrofit it to plain HVM guests, I am happy.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel