[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-ia64-devel] Xen/IPF Unstable CS#18158 (dma heap logic) Status Report --- no issue



On Fri, Aug 01, 2008 at 06:31:14PM +0800, Zhang, Jingke wrote:
>     Since IPF has not enabled remote-qemu and stubdom yet, I built the
> source with old-qemu and with stubdom disabled. Xen-ia32 has checked in
> the dma heap logic patches (18157:445681d122c0 and 18158:b25fa9df7375),
> so I ran a regression test on Cset#18158 (staging tree). No regression
> was found for IPF.

Great.

>      BTW, we found that with some old Csets, Dom0 prints out some
> unknown-hypercall messages on my tiger4 platform (seen via the 'dmesg'
> command). Have you met this before?

This issue was resolved by c/s 616:0deba952a6d3
in the ia64 linux tree, which hasn't been pulled
into the x86 tree yet.

thanks,

> ======================
> ......
> > xencomm_hypercall_physdev_op: unknown physdev op 15
> > xencomm_hypercall_physdev_op: unknown physdev op 16
> > xencomm_hypercall_physdev_op: unknown physdev op 15
> > xencomm_hypercall_physdev_op: unknown physdev op 16
> > xencomm_hypercall_physdev_op: unknown physdev op 15
> > xencomm_hypercall_physdev_op: unknown physdev op 16
> > xencomm_hypercall_physdev_op: unknown physdev op 15
> > xencomm_hypercall_physdev_op: unknown physdev op 16
> > xencomm_hypercall_physdev_op: unknown physdev op 15
> > xencomm_hypercall_physdev_op: unknown physdev op 16
> > xencomm_hypercall_physdev_op: unknown physdev op 15
> > xencomm_hypercall_physdev_op: unknown physdev op 16
> > xencomm_hypercall_physdev_op: unknown physdev op 15
> There are a lot of unknown hypercalls.
> ======================
> 
> Detail Xen/IA64 Unstable Cset #18158 Status Report
> ============================================================
> Test Result Summary:
>       # total case:   17
>       # passed case: 17
>       # failed case:   0
> ============================================================
> Testing Environment:
>       platform: Tiger4
>       processor: Itanium 2 Processor
>       logical processors: 8 (2 dual-core processors)
>       pal version: 9.68
>       service os: RHEL5u2 IA64 SMP with 2 VCPUs
>       vti guest os: RHEL5u2 & RHEL4u3
>       xenU guest os: RHEL4u4
>       xen ia64 unstable tree: 18158
>       xen scheduler: credit
>       gfw: open guest firmware Cset#126
> ============================================================
> Detailed Test Results:
> 
> Passed case                   Description
>       Save&Restore            Save&Restore
>       VTI_Live-migration      Linux VTI live-migration
>       Two_UP_VTI_Co           UP_VTI (mem=256)
>       One_UP_VTI              1 UP_VTI (mem=256)
>       One_UP_XenU             1 UP_xenU (mem=256)
>       SMPVTI_LTP              VTI (vcpus=4, mem=512) run LTP
>       SMPVTI_and_SMPXenU      1 VTI + 1 xenU (mem=256, vcpus=2)
>       Two_SMPXenU_Coexist     2 xenU (mem=256, vcpus=2)
>       One_SMPVTI_4096M        1 VTI (vcpus=2, mem=4096M)
>       SMPVTI_Network          1 VTI (mem=256, vcpus=2) and 'ping'
>       SMPXenU_Network         1 XenU (vcpus=2) and 'ping'
>       One_SMP_XenU            1 SMP xenU (vcpus=2)
>       One_SMP_VTI             1 SMP VTI (vcpus=2)
>       SMPVTI_Kernel_Build     VTI (vcpus=4) and do kernel build
>       UPVTI_Kernel_Build      1 UP VTI and do kernel build
>       SMPVTI_Windows          SMP VTI Windows (vcpus=2)
>       SMPWin_SMPVTI_SMPxenU   SMP VTI Linux/Windows & XenU
> -----------------------------------------------------------------------
> 
> 
> Thanks,
> Zhang Jingke
> 
> 
> _______________________________________________
> Xen-ia64-devel mailing list
> Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-ia64-devel
> 

-- 
yamahata
