Re: [Xen-devel] "MMIO emulation failed" from booting OVMF on Xen v4.9.0

On 17/08/17 09:49, Jan Beulich wrote:
>>>> On 16.08.17 at 20:47, <andri@xxxxxx> wrote:
>> Hey,
>> As per Andrew [Cooper]'s suggestion, writing here instead of #xen on 
>> Freenode.
>> I'm trying out Xen (4.9.0) with OVMF (r21243.3858b4a1ff-1) and having it 
>> crash right on boot both with the 32b and 64b OVMF binaries. This is on 
>> Arch Linux, AMD Ryzen on a X370 motherboard.
>> Given the following minimal VM declaration:
>>> builder = "hvm"
>>> maxmem = 512
>>> memory = 512
>>> vcpus = 1
>>> on_poweroff = "destroy"
>>> on_reboot = "destroy"
>>> on_crash = "destroy"
>>> bios = "ovmf"
>>> device_model_version = "qemu-xen"
>>> bios_path_override = "/usr/share/ovmf/ovmf_code_ia32.bin"
>> and running it with `xl create vm.cfg`, I see it crash while booting 
>> with the following displayed by `xl dmesg`:
>>> (XEN) MMIO emulation failed: d1v0 16bit @ f000:0000ff54 -> 66 ea 5c ff 
>>> ff ff 10 00 b8 40 06 00 00 0f 22
>>> (XEN) d1v0 Triple fault - invoking HVM shutdown action 1
>> I've run the hypervisor with `guest_loglvl=all` for more output and 
>> attached it here and uploaded it at 
>> https://gist.github.com/moll/a46dffc7466ced93a0365a6916a4db96 in case 
>> the file doesn't go through.
> Looks to be an ordinary 32-bit far branch after having switched to
> protected mode. I'm afraid without seeing the involved GDT entry
> there's little chance of guessing what may go wrong in this case.
> One question is why the emulator is being invoked in the first place:
> Since you've truncated the log at the beginning, it's impossible to
> tell whether you're using old Intel hardware lacking the Unrestricted
> Guest feature.
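For reference, the first eight bytes in the reported instruction stream do decode as the 32-bit direct far branch Jan describes. A minimal decode sketch (not part of the original thread, just illustrating the byte layout):

```python
import struct

# First 8 of the 15 bytes reported by Xen at f000:0000ff54.
ins = bytes.fromhex("66ea5cffffff1000")

assert ins[0] == 0x66  # operand-size override: 32-bit operand in 16-bit code
assert ins[1] == 0xEA  # JMP ptr16:32 (direct far jump)

# Little-endian: 4-byte offset followed by 2-byte segment selector.
offset, selector = struct.unpack("<IH", ins[2:8])
print(f"jmp far {selector:#06x}:{offset:#010x}")
```

This yields a far jump to selector 0x0010 at offset 0xffffff5c, consistent with firmware jumping to its 32-bit entry point just after enabling protected mode; interpreting that selector requires the GDT entry, which is what Jan notes is missing from the log.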

(As included above) this is on Arch Linux, AMD Ryzen on an X370 motherboard.

I can't work out why we are hitting the MMIO path in this case,
independently of why the emulation of this instruction failed.


Xen-devel mailing list