
[Xen-devel] [libvirt test] 77788: regressions - FAIL

flight 77788 libvirt real [real]

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-libvirt-raw  6 xen-boot                  fail REGR. vs. 77684

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-libvirt-vhd  9 debian-di-install            fail   like 77684

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestore            fail   never pass
 test-amd64-i386-libvirt      12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-qcow2 11 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt-qcow2 13 guest-saverestore            fail never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 guest-saverestore            fail   never pass
 test-armhf-armhf-libvirt     12 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     12 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              fde937bda044b0a1aa2244a00d0b449d99758e25
baseline version:
 libvirt              03569fda63d6dd8cb5cd63c34eb23faf5193da74

Last test of basis    77684  2016-01-10 01:57:02 Z    1 day
Testing same since    77788  2016-01-11 03:22:23 Z    0 days    1 attempts

People who touched revisions under test:
  Cole Robinson <crobinso@xxxxxxxxxx>

 build-amd64-xsm                                              pass    
 build-armhf-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-armhf-armhf-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-armhf-armhf-libvirt-qcow2                               fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    

sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at

Explanation of these reports, and of osstest in general, is at

Test harness code can be found at

Not pushing.

commit fde937bda044b0a1aa2244a00d0b449d99758e25
Author: Cole Robinson <crobinso@xxxxxxxxxx>
Date:   Sat Jan 9 16:00:01 2016 -0500

    qemu: command: wire up usage of q35/ich9 disable s3/s4

    If the q35-specific disable s3/s4 setting isn't supported, fall back to
    specifying the PIIX setting, which is the previous behavior. It doesn't
    have any effect, but qemu will just warn about it rather than error:
      qemu-system-x86_64: Warning: global PIIX4_PM.disable_s3=1 not used
      qemu-system-x86_64: Warning: global PIIX4_PM.disable_s4=1 not used
    Since it doesn't error, I don't think we should either, since there
    may be configs in the wild that already have q35 + disable_s3/4 (via
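
    [For context, the guest configuration that exercises this code path is
    libvirt's <pm> domain XML element; this is an illustrative fragment, not
    taken from the patch itself:

      <domain type='kvm'>
        ...
        <pm>
          <!-- disables S3 (suspend-to-RAM); becomes disable_s3=1 -->
          <suspend-to-mem enabled='no'/>
          <!-- disables S4 (suspend-to-disk); becomes disable_s4=1 -->
          <suspend-to-disk enabled='no'/>
        </pm>
      </domain>

    On PIIX machine types these map to the PIIX4_PM.disable_s3/disable_s4
    globals; the patch wires up the q35/ICH9 equivalents when available.]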

commit c77fd89000a756b93a0b2087c1e541e698850094
Author: Cole Robinson <crobinso@xxxxxxxxxx>
Date:   Sat Jan 9 15:58:50 2016 -0500

    qemu: caps: check for q35/ICH9 disable S3/S4

    Update test data to match

commit 5900356efb1d0fa8e8ddf2096410cdffc02d7b47
Author: Cole Robinson <crobinso@xxxxxxxxxx>
Date:   Mon Jan 4 17:57:06 2016 -0500

    qemu: caps: Rename CAPS_DISABLE_S[34] to CAPS_PIIX_DISABLE_S[34]

    These settings are specific to PIIX, so clarify it

commit ab963449dcc821530f1d15d6bb891cca41f1eb3f
Author: Cole Robinson <crobinso@xxxxxxxxxx>
Date:   Mon Jan 4 17:54:25 2016 -0500

    qemu: capabilities: s/Pixx/Piix/g

    The chipset is called PIIX; the functions are misnamed

commit da176bf6b7564721ce6bd18776523a8ad77f4884
Author: Cole Robinson <crobinso@xxxxxxxxxx>
Date:   Sat Jan 9 18:03:56 2016 -0500

    examples: Use one top level makefile

    Using one Makefile per example subdirectory essentially serializes 'make'
    calls. Convert to one example/Makefile that builds and distributes
    all the subdir files. This reduces example/ rebuild time from about 5.8
    seconds to 1.5 seconds on my machine.

    One slight difference is that we no longer ship Makefile.am with the
    examples in the rpm. This was virtually useless anyway, since the
    Makefile was very specific to libvirt infrastructure and so wasn't
    generically reusable.

    Tested with 'make distcheck' and 'make rpm'
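
    [The consolidation described above follows the standard automake
    non-recursive pattern. This is a hedged sketch of the idea, with
    hypothetical example names (dominfo, domsuspend), not the actual patch:

      # Before: examples/Makefile.am listed SUBDIRS, so make recursed
      # into each directory in turn, one serialized make per subdir:
      #   SUBDIRS = dominfo domsuspend ...
      #
      # After: one examples/Makefile.am builds everything directly,
      # letting a single make instance parallelize across subdirs:
      AUTOMAKE_OPTIONS = subdir-objects
      AM_CFLAGS = $(LIBVIRT_CFLAGS)
      noinst_PROGRAMS = dominfo/info1 domsuspend/suspend
      dominfo_info1_SOURCES = dominfo/info1.c
      domsuspend_suspend_SOURCES = domsuspend/suspend.c
    ]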

Xen-devel mailing list


