Re: [Xen-devel] [PATCH] acpi: set correct address of the control/event blocks in the FADT
>>> On 29.08.17 at 15:24, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 29/08/17 09:50, Roger Pau Monne wrote:
>> Commit 149c6b unmasked an issue long present in Xen: the control/event
>> block addresses provided in the ACPI FADT table were hardcoded to the
>> V1 version. This was papered over because hvmloader would also always
>> set HVM_PARAM_ACPI_IOPORTS_LOCATION to 1 regardless of the BIOS
>> version.
>>
>> Fix this by passing the address of the control/event blocks to
>> acpi_build_tables, so the values can be properly set in the FADT
>> table provided to the guest.
>>
>> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
>> ---
>> Cc: Igor Druzhinin <igor.druzhinin@xxxxxxxxxx>
>> Cc: Jan Beulich <jbeulich@xxxxxxxx>
>> Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>> Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
>> Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
>> ---
>> This commit should fix the qemu-trad Windows errors seen by osstest.
>
> This changes Windows behaviour, but does not fix Windows. Windows now
> boots, but waits forever while trying to reboot after installing PV
> drivers. There is no hint in the qemu log that the ACPI shutdown event
> was received.

This sounds to me like it matches exactly the question I've raised: it
would help to understand why things worked originally. While PM1a/b are
generally meant to help split-brain environments like our Xen/qemu one,
iirc we don't make use of PM1b, and hence it seems quite likely that
both Xen and qemu monitor PM1a port accesses. If that's the case and
things have worked before, it's quite possible that qemu-trad is now
servicing the wrong port.

> Unless someone has some very quick clever ideas, the original fix will
> need reverting.

I agree.

Jan
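
[Editorial note: to illustrate the mechanism under discussion, here is a
minimal sketch, assuming the V0/V1 PM1a port constants as defined in Xen's
public/hvm/ioreq.h; the acpi_config/acpi_fadt structures and function names
below are hypothetical, trimmed-down stand-ins, not the actual patch. The
idea is that the caller picks the ioport layout once, and the FADT fields
are derived from that same choice rather than hardcoded to V1.]

/*
 * Illustrative sketch only, not the actual patch: how the PM1a
 * event/control block addresses might be chosen per layout version and
 * then written into the FADT, instead of the FADT hardcoding V1.
 * The port constants mirror Xen's public/hvm/ioreq.h; the config and
 * FADT structures here are hypothetical, trimmed-down stand-ins.
 */
#include <stdint.h>
#include <stdio.h>

/* V0: legacy ioport layout; V1: relocated layout at 0xb000. */
#define ACPI_PM1A_EVT_BLK_ADDRESS_V0 0x1f40
#define ACPI_PM1A_CNT_BLK_ADDRESS_V0 (ACPI_PM1A_EVT_BLK_ADDRESS_V0 + 0x04)
#define ACPI_PM1A_EVT_BLK_ADDRESS_V1 0xb000
#define ACPI_PM1A_CNT_BLK_ADDRESS_V1 (ACPI_PM1A_EVT_BLK_ADDRESS_V1 + 0x04)

/* Hypothetical subset of the table-build configuration. */
struct acpi_config {
    uint32_t pm1a_evt_blk_address;
    uint32_t pm1a_cnt_blk_address;
};

/* Hypothetical subset of the FADT fields that matter here. */
struct acpi_fadt {
    uint32_t pm1a_evt_blk;  /* PM1a event register block (I/O port) */
    uint32_t pm1a_cnt_blk;  /* PM1a control register block (I/O port) */
};

/*
 * The caller picks the layout once (the same value it would report via
 * HVM_PARAM_ACPI_IOPORTS_LOCATION) ...
 */
static void set_pm1a_blocks(struct acpi_config *cfg, int ioports_location)
{
    if (ioports_location == 0) {
        cfg->pm1a_evt_blk_address = ACPI_PM1A_EVT_BLK_ADDRESS_V0;
        cfg->pm1a_cnt_blk_address = ACPI_PM1A_CNT_BLK_ADDRESS_V0;
    } else {
        cfg->pm1a_evt_blk_address = ACPI_PM1A_EVT_BLK_ADDRESS_V1;
        cfg->pm1a_cnt_blk_address = ACPI_PM1A_CNT_BLK_ADDRESS_V1;
    }
}

/* ... and the table builder propagates that choice into the FADT. */
static void build_fadt(struct acpi_fadt *fadt, const struct acpi_config *cfg)
{
    fadt->pm1a_evt_blk = cfg->pm1a_evt_blk_address;
    fadt->pm1a_cnt_blk = cfg->pm1a_cnt_blk_address;
}

int main(void)
{
    struct acpi_config cfg;
    struct acpi_fadt fadt;

    set_pm1a_blocks(&cfg, 1);   /* 1 selects the V1 (0xb000) layout */
    build_fadt(&fadt, &cfg);

    printf("PM1a_EVT_BLK=%#x PM1a_CNT_BLK=%#x\n",
           fadt.pm1a_evt_blk, fadt.pm1a_cnt_blk);
    return 0;
}

The failure mode Jan describes follows directly from such a split: if the
guest's FADT advertises one PM1a_CNT port while the component that actually
services shutdown writes (Xen or qemu) listens on the other, the guest's S5
request is never seen, matching the missing ACPI shutdown event in the qemu
log.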