[Xen-devel] [ovmf baseline-only test] 71342: tolerable FAIL



This run is configured for baseline tests only.

flight 71342 ovmf real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/71342/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 build-amd64-libvirt           5 libvirt-build                fail   like 71338
 build-i386-libvirt            5 libvirt-build                fail   like 71338

version targeted for testing:
 ovmf                 11a6cc5bda811513d2fbe47d8cb1a70b48077800
baseline version:
 ovmf                 a8321feebb6af978478e0da559806602bd2dcc7d

Last test of basis    71338  2017-05-17 18:51:25 Z    0 days
Testing same since    71342  2017-05-18 06:51:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jiewen Yao <jiewen.yao@xxxxxxxxx>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
    http://osstest.xs.citrite.net/~osstest/testlogs/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Push not applicable.

------------------------------------------------------------
commit 11a6cc5bda811513d2fbe47d8cb1a70b48077800
Author: Jiewen Yao <jiewen.yao@xxxxxxxxx>
Date:   Sat Apr 29 16:57:54 2017 +0800

    MdeModulePkg/PciBus: Add IOMMU support.
    
    If the IOMMU protocol is installed, PciBus needs to call the IOMMU
    to set the access attribute for the PCI device in Map/Unmap.

    Only after the access attribute is set can the PCI device
    access the DMA memory.
    
    Cc: Ruiyu Ni <ruiyu.ni@xxxxxxxxx>
    Cc: Leo Duran <leo.duran@xxxxxxx>
    Cc: Brijesh Singh <brijesh.singh@xxxxxxx>
    Cc: Ard Biesheuvel <ard.biesheuvel@xxxxxxxxxx>
    
    Previous patch Tested-by: Brijesh Singh <brijesh.singh@xxxxxxx>
    Previous patch Tested-by: Ard Biesheuvel <ard.biesheuvel@xxxxxxxxxx>
    Contributed-under: TianoCore Contribution Agreement 1.0
    Signed-off-by: Jiewen Yao <jiewen.yao@xxxxxxxxx>
    Reviewed-by: Ruiyu Ni <ruiyu.ni@xxxxxxxxx>
    Reviewed-by: Leo Duran <leo.duran@xxxxxxx>
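
[Editor's note: a minimal sketch of the call this commit describes. The helper
name IoMmuSetDeviceAccess and the module global mIoMmu are illustrative, not
necessarily the names used in PciBusDxe; the SetAttribute member and the
access bits follow the EDKII_IOMMU_PROTOCOL in
MdeModulePkg/Include/Protocol/IoMmu.h.]

    #include <Uefi.h>
    #include <Protocol/IoMmu.h>

    //
    // Illustrative module global: the platform's IOMMU protocol, or NULL
    // when no IOMMU protocol is installed.
    //
    EDKII_IOMMU_PROTOCOL  *mIoMmu = NULL;

    /**
      Grant (or, with IoMmuAccess == 0, revoke) a PCI device's DMA access
      to a buffer previously translated by Map().  PciBus would call this
      from its Map() path with EDKII_IOMMU_ACCESS_READ/WRITE set, and from
      its Unmap() path with 0.
    **/
    EFI_STATUS
    IoMmuSetDeviceAccess (
      IN EFI_HANDLE  DeviceHandle,
      IN VOID        *Mapping,
      IN UINT64      IoMmuAccess
      )
    {
      if (mIoMmu == NULL) {
        return EFI_SUCCESS;   // No IOMMU installed: nothing to restrict.
      }
      return mIoMmu->SetAttribute (mIoMmu, DeviceHandle, Mapping, IoMmuAccess);
    }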

commit c15da8eb3587f2371df330bc4bcb5a31b2efac0d
Author: Jiewen Yao <jiewen.yao@xxxxxxxxx>
Date:   Sat Apr 29 16:23:58 2017 +0800

    MdeModulePkg/PciHostBridge: Add IOMMU support.
    
    If the IOMMU protocol is installed, PciHostBridge simply calls the
    IOMMU's AllocateBuffer/FreeBuffer/Map/Unmap.

    PciHostBridge does not set the IOMMU access attribute,
    because it does not know which device requested the DMA.
    That work is done by the PciBus driver.
    
    Cc: Ruiyu Ni <ruiyu.ni@xxxxxxxxx>
    Cc: Leo Duran <leo.duran@xxxxxxx>
    Cc: Brijesh Singh <brijesh.singh@xxxxxxx>
    Cc: Ard Biesheuvel <ard.biesheuvel@xxxxxxxxxx>
    
    Previous patch Tested-by: Brijesh Singh <brijesh.singh@xxxxxxx>
    Previous patch Tested-by: Ard Biesheuvel <ard.biesheuvel@xxxxxxxxxx>
    Contributed-under: TianoCore Contribution Agreement 1.0
    Signed-off-by: Jiewen Yao <jiewen.yao@xxxxxxxxx>
    Reviewed-by: Ruiyu Ni <ruiyu.ni@xxxxxxxxx>
    Reviewed-by: Leo Duran <leo.duran@xxxxxxx>
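
[Editor's note: a sketch of the delegation pattern this commit describes. The
worker-function name and the fallback are illustrative; the Map() member
signature follows EDKII_IOMMU_PROTOCOL.]

    #include <Uefi.h>
    #include <Protocol/IoMmu.h>

    extern EDKII_IOMMU_PROTOCOL  *mIoMmu;   // Illustrative module global.

    /**
      When an IOMMU protocol is present, the root bridge's Map() forwards
      directly to IOMMU->Map().  Note that no access attribute is set here:
      that is left to PciBus, which knows the requesting device's handle.
    **/
    EFI_STATUS
    RootBridgeIoMapViaIoMmu (
      IN     EDKII_IOMMU_OPERATION  Operation,
      IN     VOID                   *HostAddress,
      IN OUT UINTN                  *NumberOfBytes,
      OUT    EFI_PHYSICAL_ADDRESS   *DeviceAddress,
      OUT    VOID                   **Mapping
      )
    {
      if (mIoMmu != NULL) {
        return mIoMmu->Map (
                         mIoMmu,
                         Operation,
                         HostAddress,
                         NumberOfBytes,
                         DeviceAddress,
                         Mapping
                         );
      }
      //
      // ... otherwise fall through to the pre-existing PciHostBridge
      // mapping logic (not shown) ...
      //
      return EFI_UNSUPPORTED;
    }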

commit d1fddc4533bf084eff172ff920619197da53b48e
Author: Jiewen Yao <jiewen.yao@xxxxxxxxx>
Date:   Sat Mar 25 10:53:28 2017 +0800

    MdeModulePkg/Include: Add IOMMU protocol definition.
    
    This protocol abstracts DMA access from the IOMMU, whichever of
    these firmware tables describes it:
    1) Intel "DMAR" ACPI table.
    2) AMD "IVRS" ACPI table.
    3) ARM "IORT" ACPI table.
    
    There may be multiple IOMMU engines on one platform;
    for example, one for graphics and one for the remaining PCI devices
    (such as ATA/USB).
    All IOMMU engines are reported by a single ACPI table.

    All IOMMU protocol providers should be based upon the ACPI table,
    and this single IOMMU protocol can handle multiple IOMMU engines
    on one system.

    The IOMMU protocol provider can use the UEFI device path to
    distinguish whether a device is graphics or ATA/USB, and to find
    the corresponding IOMMU engine.
    
    The IOMMU protocol provides two capabilities:
    A) Set the DMA access attribute - such as write/read control.
    B) Remap DMA memory - such as remapping a system memory address
    above 4GiB to a device address below 4GiB.
    It provides AllocateBuffer/FreeBuffer/Map/Unmap for DMA memory.
    The remapping can be static (fixed at build time) or dynamic
    (allocated at runtime).
    
    4) AMD "SEV" feature.
    We can have an AMD SEV-specific IOMMU driver that produces the
    IOMMU protocol and manages the SEV bit.
    
    Cc: Ruiyu Ni <ruiyu.ni@xxxxxxxxx>
    Cc: Leo Duran <leo.duran@xxxxxxx>
    Cc: Brijesh Singh <brijesh.singh@xxxxxxx>
    Cc: Ard Biesheuvel <ard.biesheuvel@xxxxxxxxxx>
    
    Previous patch Tested-by: Brijesh Singh <brijesh.singh@xxxxxxx>
    Previous patch Tested-by: Ard Biesheuvel <ard.biesheuvel@xxxxxxxxxx>
    Contributed-under: TianoCore Contribution Agreement 1.0
    Signed-off-by: Jiewen Yao <jiewen.yao@xxxxxxxxx>
    Reviewed-by: Ruiyu Ni <ruiyu.ni@xxxxxxxxx>
    Reviewed-by: Leo Duran <leo.duran@xxxxxxx>
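
[Editor's note: a trimmed sketch of the protocol surface this commit
introduces, reconstructed from the commit description; the authoritative
definition lives in MdeModulePkg/Include/Protocol/IoMmu.h.]

    #include <Uefi.h>

    typedef struct _EDKII_IOMMU_PROTOCOL  EDKII_IOMMU_PROTOCOL;

    //
    // Capability A: access-attribute bits a caller may grant per mapping.
    //
    #define EDKII_IOMMU_ACCESS_READ   0x1
    #define EDKII_IOMMU_ACCESS_WRITE  0x2

    //
    // DMA operation kinds, mirroring EFI_PCI_IO_PROTOCOL_OPERATION.
    //
    typedef enum {
      EdkiiIoMmuOperationBusMasterRead,
      EdkiiIoMmuOperationBusMasterWrite,
      EdkiiIoMmuOperationBusMasterCommonBuffer,
      EdkiiIoMmuOperationBusMasterRead64,
      EdkiiIoMmuOperationBusMasterWrite64,
      EdkiiIoMmuOperationBusMasterCommonBuffer64,
      EdkiiIoMmuOperationMaximum,
    } EDKII_IOMMU_OPERATION;

    //
    // Capability A: set the DMA access attribute (read/write control) for
    // one device's mapping; DeviceHandle lets the provider select the
    // matching IOMMU engine, e.g. via the device's UEFI device path.
    //
    typedef
    EFI_STATUS
    (EFIAPI *EDKII_IOMMU_SET_ATTRIBUTE)(
      IN EDKII_IOMMU_PROTOCOL  *This,
      IN EFI_HANDLE            DeviceHandle,
      IN VOID                  *Mapping,
      IN UINT64                IoMmuAccess
      );

    //
    // Capability B: remap DMA memory (statically or dynamically) through
    // the AllocateBuffer/FreeBuffer/Map/Unmap quartet.
    //
    typedef
    EFI_STATUS
    (EFIAPI *EDKII_IOMMU_MAP)(
      IN     EDKII_IOMMU_PROTOCOL   *This,
      IN     EDKII_IOMMU_OPERATION  Operation,
      IN     VOID                   *HostAddress,
      IN OUT UINTN                  *NumberOfBytes,
      OUT    EFI_PHYSICAL_ADDRESS   *DeviceAddress,
      OUT    VOID                   **Mapping
      );

    typedef
    EFI_STATUS
    (EFIAPI *EDKII_IOMMU_UNMAP)(
      IN EDKII_IOMMU_PROTOCOL  *This,
      IN VOID                  *Mapping
      );

    typedef
    EFI_STATUS
    (EFIAPI *EDKII_IOMMU_ALLOCATE_BUFFER)(
      IN     EDKII_IOMMU_PROTOCOL  *This,
      IN     EFI_ALLOCATE_TYPE     Type,
      IN     EFI_MEMORY_TYPE       MemoryType,
      IN     UINTN                 Pages,
      IN OUT VOID                  **HostAddress,
      IN     UINT64                Attributes
      );

    typedef
    EFI_STATUS
    (EFIAPI *EDKII_IOMMU_FREE_BUFFER)(
      IN EDKII_IOMMU_PROTOCOL  *This,
      IN UINTN                 Pages,
      IN VOID                  *HostAddress
      );

    struct _EDKII_IOMMU_PROTOCOL {
      UINT64                         Revision;
      EDKII_IOMMU_SET_ATTRIBUTE      SetAttribute;
      EDKII_IOMMU_MAP                Map;
      EDKII_IOMMU_UNMAP              Unmap;
      EDKII_IOMMU_ALLOCATE_BUFFER    AllocateBuffer;
      EDKII_IOMMU_FREE_BUFFER        FreeBuffer;
    };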

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

