
[Xen-devel] "X86_PV_VCPU_MSRS record truncated" during domain restore (was: Re: [qubes-users] DispVM Doesn't work)




On Wed, Jul 20, 2016 at 02:33:20PM +0200, Massimo Colombi wrote:
> I retried (it's not the first time) to regenerate a new savefile, but the
> DispVM still doesn't work.
> I attach the results.

To me this looks like a bug in savefile handling, so I'm moving this to
xen-devel. Any ideas?
Background info: this is about restoring a domain through libvirt->libxl
(the equivalent of virsh restore) on Xen 4.6.1. Full error:

> 2016-07-20 14:23:01 CEST xc: error: X86_PV_VCPU_MSRS record truncated: length 8, min 9: Internal error
> 2016-07-20 14:23:01 CEST xc: error: Restore failed (0 = Success): Internal error
> 2016-07-20 14:23:01 CEST libxl: error: libxl_stream_read.c:749:libxl__xc_domain_restore_done: restoring domain: Success
> 2016-07-20 14:23:01 CEST libxl: error: libxl_create.c:1145:domcreate_rebuild_done: cannot (re-)build domain: -3
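
For reference, the "min 9" matches the restore-side sanity check on
X86_PV_VCPU_* records in the libxc migration-v2 code: a record must carry
the fixed 8-byte vcpu header plus at least one byte of payload, so a
header-only record (a zero-length MSR blob) is rejected. Below is a
minimal sketch of that kind of check; the struct and function names are my
reconstruction from the error text, not verbatim code from
tools/libxc/xc_sr_restore_x86_pv.c:

#include <stdint.h>
#include <stdio.h>

/* Illustrative layout: a fixed 8-byte record header (vcpu id plus
 * padding), followed by a variable-length blob (extended context,
 * xsave area, or MSR list, depending on the record type). */
struct vcpu_blob_hdr {
    uint32_t vcpu_id;
    uint32_t _res1;
    /* uint8_t context[];  -- variable-length payload follows */
};

/* Sketch of the check that would print
 * "X86_PV_VCPU_MSRS record truncated: length 8, min 9". */
static int check_vcpu_blob(const char *rec_name, uint32_t rec_length)
{
    /* A record of exactly sizeof(header) bytes has no payload at all,
     * so the minimum acceptable length is sizeof(header) + 1 = 9. */
    if ( rec_length <= sizeof(struct vcpu_blob_hdr) )
    {
        fprintf(stderr, "%s record truncated: length %u, min %zu\n",
                rec_name, rec_length,
                sizeof(struct vcpu_blob_hdr) + 1);
        return -1;
    }
    return 0;
}

int main(void)
{
    /* The failing savefile apparently carries such a header-only record. */
    return check_vcpu_blob("X86_PV_VCPU_MSRS", 8) ? 1 : 0;
}

If that reading is right, the save side wrote an X86_PV_VCPU_MSRS record
for a vcpu that had no MSRs to send, and the restore side then refuses an
otherwise-valid stream.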


If relevant, here is a fragment of /proc/cpuinfo (just one core):
processor       : 0
vendor_id       : AuthenticAMD
cpu family      : 22
model           : 48
model name      : AMD A8-6410 APU with AMD Radeon R5 Graphics
stepping        : 1
microcode       : 0x7030105
cpu MHz         : 1996.290
cache size      : 2048 KB
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 4
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu de tsc msr pae mce cx8 apic mca cmov pat clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt lm constant_tsc rep_good nopl nonstop_tsc extd_apicid eagerfpu pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch perfctr_nb bpext perfctr_l2 arat cpb hw_pstate vmmcall bmi1 xsaveopt
bugs            : fxsave_leak
bogomips        : 3992.58
TLB size        : 1024 4K pages
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management: ts ttp tm 100mhzsteps hwpstate cpb [12] [13]


> Best regards,
>  Massimo
> 
> On 07/20/2016 01:37 PM, Marek Marczykowski-Górecki wrote:
> > Try `qvm-create-default-dvm --default-template` to regenerate a new savefile.
> > It looks like your savefile is somehow broken.
> 

> [user@dom0 logs]$ qvm-create-default-dvm --default-template
> --> Creating volatile image: /var/lib/qubes/appvms/fedora-23-dvm/volatile.img...
> --> Loading the VM (type = AppVM)...
> --> Starting Qubes DB...
> --> Setting Qubes DB info for the VM...
> --> Updating firewall rules...
> --> Starting the VM...
> --> Starting Qubes GUId...
> Connecting to VM's GUI agent: ..................connected
> Waiting for DVM fedora-23-dvm ...
> /qubes-used-mem
> Disk disconnected correctly
> 
> DVM boot complete, memory used=303380. Saving image...
> 
> 
> 
> Domain fedora-23-dvm saved to /var/lib/qubes/appvms/fedora-23-dvm/dvm-savefile
> 
> DVM savefile created successfully.
> [user@dom0 logs]$ echo xterm | /usr/lib/qubes/qfile-daemon-dvm qubes.VMShell dom0 DEFAULT red
> time=1469017378.68, qfile-daemon-dvm init
> time=1469017378.92, creating DispVM
> time=1469017380.48, collection loaded
> time=1469017380.49, VM created
> time=1469017380.59, VM starting
> time=1469017380.6, creating config file
> time=1469017380.86, calling restore
> Traceback (most recent call last):
>   File "/usr/lib/qubes/qfile-daemon-dvm", line 200, in <module>
>     main()
>   File "/usr/lib/qubes/qfile-daemon-dvm", line 188, in main
>     dispvm = qfile.get_dvm()
>   File "/usr/lib/qubes/qfile-daemon-dvm", line 150, in get_dvm
>     return self.do_get_dvm()
>   File "/usr/lib/qubes/qfile-daemon-dvm", line 103, in do_get_dvm
>     dispvm.start()
>   File "/usr/lib64/python2.7/site-packages/qubes/modules/01QubesDisposableVm.py", line 193, in start
>     domain_config, libvirt.VIR_DOMAIN_SAVE_PAUSED)
>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 4405, in restoreFlags
>     if ret == -1: raise libvirtError ('virDomainRestoreFlags() failed', conn=self)
> libvirt.libvirtError: internal error: libxenlight failed to restore domain 'disp1'
> 
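
To take the Qubes tooling out of the picture, the same restore path can be
exercised directly against libvirt (or simply with `virsh restore
/var/lib/qubes/appvms/fedora-23-dvm/dvm-savefile`). A minimal C sketch,
assuming a local "xen:///" connection and the savefile path from the
transcript above, and passing NULL for the dxml argument instead of the
updated domain config that qfile-daemon-dvm supplies:

#include <stdio.h>
#include <libvirt/libvirt.h>
#include <libvirt/virterror.h>

int main(void)
{
    /* Connect to the local libxl driver. */
    virConnectPtr conn = virConnectOpen("xen:///");
    if (!conn) {
        fprintf(stderr, "failed to connect to xen:///\n");
        return 1;
    }

    /* Same call the Python traceback ends in: virDomainRestoreFlags().
     * NULL dxml keeps the saved domain configuration unmodified. */
    int ret = virDomainRestoreFlags(conn,
        "/var/lib/qubes/appvms/fedora-23-dvm/dvm-savefile",
        NULL, VIR_DOMAIN_SAVE_PAUSED);
    if (ret < 0) {
        virErrorPtr err = virGetLastError();
        fprintf(stderr, "restore failed: %s\n",
                err && err->message ? err->message : "unknown error");
    }

    virConnectClose(conn);
    return ret < 0 ? 1 : 0;
}

If this fails the same way outside Qubes, the problem is squarely in the
libxc restore path rather than in how the DispVM savefile is invoked.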


-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
