
Re: [Xen-devel] [Qemu-devel] Cirrus VGA slow screen update, show blank screen last 13s or so for windows XP guest



Hi, all,
I compared traditional qemu-dm with upstream qemu and found that upstream qemu
creates an expansion ROM at f3000000, whereas qemu-dm does not.
The details are below (from "lspci -vnnn"):

1) traditional qemu-dm:
00:02.0 VGA compatible controller [0300]: Cirrus Logic GD 5446 [1013:00b8] 
(prog-if 00 [VGA controller])
        Subsystem: XenSource, Inc. Device [5853:0001]
        Flags: bus master, fast devsel, latency 0, IRQ 11
        Memory at f0000000 (32-bit, prefetchable) [size=32M]
        Memory at f3000000 (32-bit, non-prefetchable) [size=4K]
        Kernel modules: cirrusfb

2) upstream qemu:
00:02.0 VGA compatible controller [0300]: Cirrus Logic GD 5446 [1013:00b8] 
(prog-if 00 [VGA controller])
        Subsystem: XenSource, Inc. Device [5853:0001]
        Physical Slot: 2
        Flags: bus master, fast devsel, latency 0, IRQ 11
        Memory at f0000000 (32-bit, prefetchable) [size=32M]
        Memory at f3020000 (32-bit, non-prefetchable) [size=4K]
        Expansion ROM at f3000000 [disabled] [size=64K]
        Kernel modules: cirrusfb

I tried simply removing the expansion ROM registration in the function 
pci_add_option_rom, i.e.

  //pci_register_bar(pdev, PCI_ROM_SLOT, 0, &pdev->rom);

With that change the VNC viewer only shows a black screen, while RDP still works well.
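For reference, the change above sits at the end of pci_add_option_rom() in QEMU's
hw/pci/pci.c. The context below is only a rough sketch from memory of the code of
that era (the exact signature and body differ between QEMU versions); only the
commented-out pci_register_bar() call is taken verbatim from my experiment:

/* hw/pci/pci.c -- approximate shape only, not the exact upstream code */
static int pci_add_option_rom(PCIDevice *pdev, bool is_default_rom)
{
    /* ... locate the ROM file and load the image into the pdev->rom
     * memory region ... */

    /* Registering the ROM BAR is what exposes the 64K region (at f3000000
     * in the lspci output above) to the guest; this is the call I disabled: */
    //pci_register_bar(pdev, PCI_ROM_SLOT, 0, &pdev->rom);

    return 0;
}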
Any ideas about the Cirrus expansion ROM?
Does the slow screen refresh have any relationship with the Cirrus expansion ROM?
Thank you in advance!

BTW, I added some logging in stdvga.c (xen-4.1.2/xen/arch/x86/hvm):
static int stdvga_intercept_mmio(ioreq_t *p)
{
    struct domain *d = current->domain;
    struct hvm_hw_stdvga *s = &d->arch.hvm_domain.stdvga;
    int buf = 0, rc;
    static int count_1 = 0;    /* accesses handled while the stdvga cache is active */
    static int count_2 = 0;    /* accesses that fall through when the cache is not active */
    
    if ( p->size > 8 )
    {
        gdprintk(XENLOG_WARNING, "invalid mmio size %d\n", (int)p->size);
        return X86EMUL_UNHANDLEABLE;
    }

    spin_lock(&s->lock);

    if ( s->stdvga && s->cache )
    {
        switch ( p->type )
        {
        case IOREQ_TYPE_COPY:
            buf = mmio_move(s, p);
            count_1++;
            if (count_1 % 1000 == 0)
                gdprintk(XENLOG_WARNING,
                         " =uvp= enter mmio_move, count_1=%d\n", count_1);
            if ( !buf )
                s->cache = 0;
            break;
        default:
            gdprintk(XENLOG_WARNING, "unsupported mmio request type:%d "
                     "addr:0x%04x data:0x%04x size:%d count:%d state:%d "
                     "isptr:%d dir:%d df:%d\n",
                     p->type, (int)p->addr, (int)p->data, (int)p->size,
                     (int)p->count, p->state,
                     p->data_is_ptr, p->dir, p->df);
            s->cache = 0;
        }
    }
    else
    {
        buf = (p->dir == IOREQ_WRITE);
        count_2++;
        if (count_2 % 5000 == 0)
            gdprintk(XENLOG_WARNING, " >>> vga mmio count_2=%d\n", count_2);
    }

    rc = (buf && hvm_buffered_io_send(p));

    spin_unlock(&s->lock);
    
    return rc ? X86EMUL_OKAY : X86EMUL_UNHANDLEABLE;
}

and I got the results below with upstream qemu and traditional qemu-dm:

 1) traditional qemu-dm:
(XEN) stdvga.c:152:d2 entering stdvga and caching modes
(XEN) stdvga.c:577:d2  =uvp= enter mmio_move, count_1=460000
(XEN) stdvga.c:577:d2  =uvp= enter mmio_move, count_1=461000
(XEN) stdvga.c:577:d2  =uvp= enter mmio_move, count_1=462000
(XEN) stdvga.c:577:d2  =uvp= enter mmio_move, count_1=463000
(XEN) stdvga.c:577:d2  =uvp= enter mmio_move, count_1=464000
(XEN) stdvga.c:577:d2  =uvp= enter mmio_move, count_1=465000
(XEN) stdvga.c:577:d2  =uvp= enter mmio_move, count_1=466000
(XEN) stdvga.c:577:d2  =uvp= enter mmio_move, count_1=467000
(XEN) stdvga.c:577:d2  =uvp= enter mmio_move, count_1=468000
(XEN) stdvga.c:577:d2  =uvp= enter mmio_move, count_1=469000
(XEN) stdvga.c:577:d2  =uvp= enter mmio_move, count_1=470000
(XEN) stdvga.c:577:d2  =uvp= enter mmio_move, count_1=471000
(XEN) stdvga.c:577:d2  =uvp= enter mmio_move, count_1=472000
(XEN) stdvga.c:577:d2  =uvp= enter mmio_move, count_1=473000
(XEN) stdvga.c:577:d2  =uvp= enter mmio_move, count_1=474000
(XEN) stdvga.c:577:d2  =uvp= enter mmio_move, count_1=475000
(XEN) stdvga.c:156:d2 leaving stdvga

 2) upstream qemu:
(XEN) stdvga.c:152:d1 entering stdvga and caching modes
(XEN) stdvga.c:577:d1  =uvp= enter mmio_move, count_1=233000
(XEN) stdvga.c:577:d1  =uvp= enter mmio_move, count_1=234000
(XEN) stdvga.c:577:d1  =uvp= enter mmio_move, count_1=235000
(XEN) stdvga.c:577:d1  =uvp= enter mmio_move, count_1=236000
(XEN) stdvga.c:577:d1  =uvp= enter mmio_move, count_1=237000
(XEN) stdvga.c:577:d1  =uvp= enter mmio_move, count_1=238000
(XEN) stdvga.c:577:d1  =uvp= enter mmio_move, count_1=239000
(XEN) stdvga.c:577:d1  =uvp= enter mmio_move, count_1=240000
(XEN) stdvga.c:577:d1  =uvp= enter mmio_move, count_1=241000
(XEN) stdvga.c:577:d1  =uvp= enter mmio_move, count_1=242000
(XEN) stdvga.c:577:d1  =uvp= enter mmio_move, count_1=243000
(XEN) stdvga.c:577:d1  =uvp= enter mmio_move, count_1=244000
(XEN) stdvga.c:577:d1  =uvp= enter mmio_move, count_1=245000
(XEN) stdvga.c:577:d1  =uvp= enter mmio_move, count_1=246000
(XEN) stdvga.c:577:d1  =uvp= enter mmio_move, count_1=247000
(XEN) stdvga.c:577:d1  =uvp= enter mmio_move, count_1=248000
(XEN) stdvga.c:577:d1  =uvp= enter mmio_move, count_1=249000
(XEN) stdvga.c:156:d1 leaving stdvga
(XEN) stdvga.c:596:d1  >>> vga mmio count_2=8400000
(XEN) stdvga.c:596:d1  >>> vga mmio count_2=8405000
(XEN) stdvga.c:596:d1  >>> vga mmio count_2=8410000
(XEN) stdvga.c:596:d1  >>> vga mmio count_2=8415000
  ... ...  (similar lines omitted)
(XEN) stdvga.c:596:d1  >>> vga mmio count_2=12570000
(XEN) stdvga.c:596:d1  >>> vga mmio count_2=12575000
(XEN) stdvga.c:596:d1  >>> vga mmio count_2=12580000
(XEN) stdvga.c:596:d1  >>> vga mmio count_2=12585000
(XEN) stdvga.c:596:d1  >>> vga mmio count_2=12590000

Upstream qemu generates far more MMIO accesses to the memory region 
0xa0000~0xc0000 than traditional qemu-dm does.
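
If it helps to confirm which guest addresses are taking the uncached path, the
count_2 branch in the instrumentation above could also print p->addr from time
to time. A minimal sketch of that change (it only uses fields the function
already touches; the casts and format strings are my own):

    else
    {
        buf = (p->dir == IOREQ_WRITE);
        count_2++;
        /* Occasionally log the guest physical address hitting the VGA MMIO
         * range while the stdvga cache is not active. */
        if (count_2 % 5000 == 0)
            gdprintk(XENLOG_WARNING, " >>> vga mmio count_2=%d addr=0x%lx "
                     "dir=%d size=%d\n", count_2,
                     (unsigned long)p->addr, p->dir, (int)p->size);
    }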

Best Regards!

-Gonglei

> -----Original Message-----
> From: Gonglei (Arei)
> Sent: Monday, August 05, 2013 8:50 PM
> To: 'Gonglei (Arei)'; 'Pasi Kärkkäinen'
> Cc: 'Gerd Hoffmann'; 'Andreas Färber'; 'Hanweidong'; 'Luonengjun';
> 'qemu-devel@xxxxxxxxxx'; 'xen-devel@xxxxxxxxxxxxx'; 'Anthony Liguori';
> 'Huangweidong (Hardware)'; 'Ian.Jackson@xxxxxxxxxxxxx'; 'Anthony Liguori'
> Subject: RE: [Xen-devel] [Qemu-devel] Cirrus VGA slow screen update, show
> blank screen last 13s or so for windows XP guest
> 
> Hi,
> 
> > -----Original Message-----
> > From: Gonglei (Arei)
> > Sent: Tuesday, July 30, 2013 10:01 AM
> > To: 'Pasi Kärkkäinen'
> > Cc: Gerd Hoffmann; Andreas Färber; Hanweidong; Luonengjun;
> > qemu-devel@xxxxxxxxxx; xen-devel@xxxxxxxxxxxxx; Anthony Liguori;
> > Huangweidong (Hardware); 'Ian.Jackson@xxxxxxxxxxxxx'; Anthony Liguori;
> > 'aliguori@xxxxxxxxxx'
> > Subject: RE: [Xen-devel] [Qemu-devel] Cirrus VGA slow screen update, show
> > blank screen last 13s or so for windows XP guest
> >
> > > On Mon, Jul 29, 2013 at 08:48:54AM +0000, Gonglei (Arei) wrote:
> > > > > -----Original Message-----
> > > > > From: Pasi Kärkkäinen [mailto:pasik@xxxxxx]
> > > > > Sent: Saturday, July 27, 2013 7:51 PM
> > > > > To: Gerd Hoffmann
> > > > > Cc: Andreas Färber; Hanweidong; Luonengjun; qemu-devel@xxxxxxxxxx;
> > > > > xen-devel@xxxxxxxxxxxxx; Gonglei (Arei); Anthony Liguori; Huangweidong
> > > > > (Hardware)
> > > > > Subject: Re: [Xen-devel] [Qemu-devel] Cirrus VGA slow screen update, show
> > > > > blank screen last 13s or so for windows XP guest
> > > > >
> > > > > On Fri, Jul 26, 2013 at 12:19:16PM +0200, Gerd Hoffmann wrote:
> > > > > >
> > > > > > Maybe the xen guys did some optimizations in qemu-dm which were
> > > > > > not merged upstream.  Try asking @ xen-devel.
> > > > > >
> > > > >
> > > > > Yeah, xen qemu-dm must have some optimization for cirrus that isn't in
> > > > > upstream qemu.
> > > >
> > > > Hi, Pasi. Would you give me some more details? Thanks!
> > > >
> > >
> > > Unfortunately I don't have any more details.. I was just assuming that if
> > > xen qemu-dm is fast but upstream qemu is slow, there might be some
> > > optimization in xen qemu-dm ..
> > >
> > > Did you check the xen qemu-dm (traditional) history / commits?
> > >
> > > -- Pasi
> >
> > Yes, I did. I tried to reproduce the issue with qemu-dm by rolling back to
> > earlier commits in its history.
> > But the qemu-dm mainline has merged other branches, such as the 'upstream'
> > and 'qemu' branches, so I could not get the xen-4.1.2/tools project to
> > compile properly. CC'ing Ian and Anthony.
> >
> By analyzing xentrace data, I found that there are a lot of VMEXITs in the
> memory region 0xa0000~0xaffff with upstream qemu, but only 256 VMEXITs with
> traditional qemu-dm, when the Windows XP guest boots up or when switching the
> connection method from VNC to RDP (or from RDP to VNC):
> 
> linux-sAGhxH:# cat qemu-upstrem.log |grep 0x00000000000a|wc -l
> 640654
> linux-sAGhxH:# cat qemu-dm.log |grep 0x00000000000a|wc -l
> 256
> The 256 VMEXITs from qemu-dm are the same as the first 256 VMEXITs from
> qemu-upstream:
> 
> linux-sAGhxH:# cat qemu-upstream.log |grep 0x00000000000a| tail
> CPU0  0 (+       0)  NPF         [ gpa = 0x00000000000a0b08 mfn =
> 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> CPU0  0 (+       0)  NPF         [ gpa = 0x00000000000a0b0a mfn =
> 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> CPU0  0 (+       0)  NPF         [ gpa = 0x00000000000a0b0c mfn =
> 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> CPU0  0 (+       0)  NPF         [ gpa = 0x00000000000a0b0e mfn =
> 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> CPU0  0 (+       0)  NPF         [ gpa = 0x00000000000a0b10 mfn =
> 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> CPU0  0 (+       0)  NPF         [ gpa = 0x00000000000a0b12 mfn =
> 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> CPU0  0 (+       0)  NPF         [ gpa = 0x00000000000a0b14 mfn =
> 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> CPU0  0 (+       0)  NPF         [ gpa = 0x00000000000a0b16 mfn =
> 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> CPU0  0 (+       0)  NPF         [ gpa = 0x00000000000a0b18 mfn =
> 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> CPU0  0 (+       0)  NPF         [ gpa = 0x00000000000a0b1a mfn =
> 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> linux-sAGhxH:# cat qemu-dm.log |grep 0x00000000000a| tail
> CPU2  0 (+       0)  NPF         [ gpa = 0x00000000000a1ec0 mfn =
> 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> CPU2  0 (+       0)  NPF         [ gpa = 0x00000000000a1ee0 mfn =
> 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> CPU2  0 (+       0)  NPF         [ gpa = 0x00000000000a1f00 mfn =
> 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> CPU2  0 (+       0)  NPF         [ gpa = 0x00000000000a1f20 mfn =
> 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> CPU2  0 (+       0)  NPF         [ gpa = 0x00000000000a1f40 mfn =
> 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> CPU2  0 (+       0)  NPF         [ gpa = 0x00000000000a1f60 mfn =
> 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> CPU2  0 (+       0)  NPF         [ gpa = 0x00000000000a1f80 mfn =
> 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> CPU2  0 (+       0)  NPF         [ gpa = 0x00000000000a1fa0 mfn =
> 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> CPU2  0 (+       0)  NPF         [ gpa = 0x00000000000a1fc0 mfn =
> 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> CPU2  0 (+       0)  NPF         [ gpa = 0x00000000000a1fe0 mfn =
> 0xffffffffffffffff qual = 0x0182 p2mt = 0x0004 ]
> 
> Please see attachment for more information.
> 
> Because of this large number of VMEXITs, the cirrus_vga_ioport_write function
> gets very little scheduling time, and the interval between two consecutive
> cirrus_vga_write_gr executions grows to more than 32ms, so the blank-screen
> time exceeds 13 seconds with upstream qemu on Xen.
> I don't know why the Windows XP guest accesses the memory region
> 0xa0000~0xaffff with upstream qemu but not with traditional qemu-dm.
> Can anyone give me some suggestions?
> 
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

