Re: [Xen-devel] upstream QEMU and cpu hotplug Was: [QEMU-traditional] ACPI AML code races with QEMU updating the vCPU count when hotplugging more than ~14 VCPUs.
FYI the appended commits are missing in upstream QEMU. Given the timeframe
available, I would be happy to carry a straight port of these commits, as
close as possible to the originals, in the qemu-upstream-4.3 tree. We can
revert them later (Xen 4.4) and implement the feature properly by sending
the patches to qemu-devel and following the development process there.
There is no time to do that before the 4.3 release.

commit 12d0187099adeb242eb7c232a0f3bf9d153716c3
Author: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
Date:   Mon Jan 4 17:12:44 2010 +0000

    HVM vcpu add/remove: qemu logic for vcpu add/remove

    -- at the qemu side, get vcpu_avail, which is used for the original
       cpu availability map;
    -- set up gpe ioread/iowrite at the qemu side;
    -- set up the vcpu add/remove user interface through the monitor;
    -- set up the SCI logic;

    Signed-off-by: Liu, Jinsong <jinsong.liu@xxxxxxxxx>
    [ PATCH 4/4 ] HVM vcpu add/remove: qemu logic for vcpu add/remove

commit c95358206acd768f06b4fb6c645033094d106775
Author: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
Date:   Thu Mar 18 16:45:51 2010 +0000

    Fix vcpu hotplug bug: get correct vcpu_avail bitmap

    Currently qemu has a bug: when maxvcpus > 64, qemu gets a wrong vcpu
    bitmap (s->cpus_sts[i]) since it only reads the bitmap from a long
    variable. This patch, together with another xend python patch, fixes
    the bug: it gets a hex string from xend and converts it to the correct
    vcpu_avail bitmap, stored in a uint32_t array.

    Signed-off-by: Liu, Jinsong <jinsong.liu@xxxxxxxxx>
    (This is [PATCH 2/2], the other half is in xen-unstable.hg)

commit 376c64a4e068b3dc83f066b4050ed34d983a5c75
Author: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
Date:   Fri Apr 30 17:41:18 2010 +0100

    Update vcpu hotplug logic

    Add a vcpu online/offline check to avoid redundant SCI interrupts.
    Signed-off-by: Liu, Jinsong <jinsong.liu@xxxxxxxxx>

commit 01626771cf2e9285fbfddcbded2820fc77745e4b
Author: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
Date:   Fri Apr 30 17:41:45 2010 +0100

    Implement 'xm vcpu-set' command for HVM guest

    Currently Xen has an 'xm vcpu-set' command for PV domains, but it is
    not available for HVM domains. This patch enables the 'xm vcpu-set'
    command for HVM domains: it sets up a vcpu watch on xenstore and, at
    the qemu side, handles vcpu online/offline accordingly. With this
    patch, the 'xm vcpu-set' command works for both PV and HVM guests
    with the same format.

    Signed-off-by: Liu, Jinsong <jinsong.liu@xxxxxxxxx>

commit ce3b7ce68426ea6249bb411f26b376d459c45450
Author: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
Date:   Tue Nov 9 18:01:13 2010 +0000

    piix4acpi, xen: change in ACPI to match the change in the BIOS.

    Some changes have been introduced in the Xen firmware to match QEMU's
    BIOS, so this patch adds the new sleep state values and handles both
    the old and the new ACPI IOPort mappings. QEMU-Xen uses the new ioport
    by default, but if it restores a saved state from an old firmware, it
    unmaps the new ioport and maps the old one.

    Signed-off-by: Anthony PERARD <anthony.perard@xxxxxxxxxx>

On Fri, 10 May 2013, Stefano Stabellini wrote:
> This reminded me to look for cpu hotplug in upstream QEMU and it looks
> like we don't support this feature at all there :-(
>
> We don't even parse the cpu field on xenstore.
>
> What are we going to do about this in 4.3?
>
> On Thu, 9 May 2013, Konrad Rzeszutek Wilk wrote:
> > This is a race, so the amount varies, but on a 4-PCPU box
> > I seem to get only ~14 out of the 16 vCPUs I want to online.
> >
> > The issue at hand is that when QEMU's xenstore.c hotplug code
> > handles the command: xl vcpu-set latest 16
> > (guest config has vcpus=1, maxvcpus=32), this happens:
> >
> > QEMU:                                      Guest OS:
> > - xenstore_process_vcpu_set_event
> >   -> Gets an XenBus notification for CPU1
> >   -> Updates the gpe_state.cpus_state bitfield.
> >   -> Pulses the ACPI SCI
> >                                            - ACPI SCI kicks in
> >   -> Gets an XenBus notification for CPU2
> >   -> Updates the gpe_state.cpus_state bitfield.
> >   -> Pulses the ACPI SCI
> >   -> Gets an XenBus notification for CPU3
> >   -> Updates the gpe_state.cpus_state bitfield.
> >   -> Pulses the ACPI SCI
> >   ...
> >                                            - Method(PRST) invoked
> >   -> Gets an XenBus notification for CPU12
> >   -> Updates the gpe_state.cpus_state bitfield.
> >   -> Pulses the ACPI SCI
> >                                            - reads AF00 for CPU state
> >                                              [gets 0xff]
> >                                            - reads AF02 [gets 0x7f]
> >   -> Gets an XenBus notification for CPU13
> >   -> Updates the gpe_state.cpus_state bitfield.
> >   -> Pulses the ACPI SCI
> >   .. until VCPU 16
> >                                            - Method PRST updates
> >                                              PR01 through PR13 FLG
> >                                              entries.
> >                                            - PR01->PR13 _MAD invoked.
> >                                            - Brings up 13 CPUs.
> >
> > While QEMU updates the rest of the cpus_state bitfields, the ACPI AML
> > only does the CPU hotplug on those it had already read. For reference,
> > please see the debug patch and also the QEMU log; look for
> > 'gpe_cpus_readb'.
> >
> > My thinking on how to fix this is to add, in
> > xenstore_process_vcpu_set_event:
> > - a scan over all of the other availability/cpu states;
> > - for each of the cpu availability states, a query of the
> >   gpe_state.cpus_state and, if different, an update (setting
> >   a bool that one of them was modified);
> > - when done scanning, and if the bool was set, kick off the
> >   qemu_irq_pulse.
> >
> > Then if the other events are triggered we can just check the
> > gpe_state.cpus_state against what XenBus thinks, and if they
> > are the same just return without doing the qemu_irq_pulse.
> >
> > Thoughts?
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel