
Re: [win-pv-devel] Windows on Xen bad IO performance


  • To: 'Jakub Kulesza' <jakkul@xxxxxxxxx>
  • From: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
  • Date: Sun, 30 Sep 2018 10:07:38 +0000
  • Accept-language: en-GB, en-US
  • Cc: "win-pv-devel@xxxxxxxxxxxxxxxxxxxx" <win-pv-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Sun, 30 Sep 2018 10:07:46 +0000
  • List-id: Developer list for the Windows PV Drivers subproject <win-pv-devel.lists.xenproject.org>
  • Thread-index: AQHUKBddpgP4V3zgh064pZm0uiacGqSo9TUA///zOQCAACyqQIBb1iqAgADJWiCAAA98AIAALHuA///xkwCAADVmEIAAAHBAgAA/ZACAAqLRcA==
  • Thread-topic: [win-pv-devel] Windows on Xen bad IO performance

> -----Original Message-----
> From: Jakub Kulesza [mailto:jakkul@xxxxxxxxx]
> Sent: 28 September 2018 20:51
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: win-pv-devel@xxxxxxxxxxxxxxxxxxxx
> Subject: Re: [win-pv-devel] Windows on Xen bad IO performance
> 
> Well, this turns out strange. It did not get better; it got worse.
> 
> Atto gives these results: https://imgur.com/gallery/D4erdER
> So it's on par with 8.2.1 with gnttab at 32, but the stability is worse
> than before.
> 
> Settings from kernel:
> # cat /sys/module/xen_blkback/parameters/*
> 0
> 1024
> 1056
> 1
> 1
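[The bare `cat` over the parameters directory quoted above prints the values without saying which file each one came from. A small sketch that pairs each parameter name with its value; `print_params` is a hypothetical helper name, not part of any tool:

```shell
# print_params DIR: print name=value for every parameter file in DIR.
# The bare `cat DIR/*` loses which value belongs to which file.
print_params() {
    for p in "$1"/*; do
        printf '%s=%s\n' "$(basename "$p")" "$(cat "$p")"
    done
}

# On a dom0 with xen_blkback loaded (the path quoted above):
# print_params /sys/module/xen_blkback/parameters
```
]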

I'm guessing the grant table exhaustion has gone, but the bounce buffering is 
still going to hurt... and that's just a consequence of the benchmark not 
honouring the alignment requirements :-(

  Paul
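[Should grant-table exhaustion reappear (e.g. with multi-page rings across several VBDs), the hypervisor-side limit can also be raised at boot via the real Xen boot option `gnttab_max_frames`. A sketch for a GRUB-based Debian dom0; the value 64 is only an illustrative choice, not a recommendation:

```shell
# Fragment for /etc/default/grub on dom0: raise Xen's grant-table
# frame limit. gnttab_max_frames is a Xen hypervisor boot option;
# 64 here is an example value, not a tuned one.
GRUB_CMDLINE_XEN_DEFAULT="gnttab_max_frames=64"
# Then regenerate the grub config and reboot:
#   update-grub && reboot
```
]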


> On Fri, 28 Sep 2018 at 16:04, Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> wrote:
> >
> > > -----Original Message-----
> > > From: Paul Durrant
> > > Sent: 28 September 2018 15:04
> > > To: 'Jakub Kulesza' <jakkul@xxxxxxxxx>
> > > Cc: win-pv-devel@xxxxxxxxxxxxxxxxxxxx
> > > Subject: RE: [win-pv-devel] Windows on Xen bad IO performance
> > >
> > > > -----Original Message-----
> > > > From: Jakub Kulesza [mailto:jakkul@xxxxxxxxx]
> > > > Sent: 28 September 2018 13:51
> > > > To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> > > > Cc: win-pv-devel@xxxxxxxxxxxxxxxxxxxx
> > > > Subject: Re: [win-pv-devel] Windows on Xen bad IO performance
> > > >
> > > > On Fri, 28 Sep 2018 at 14:00, Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> > > > wrote:
> > > > [cut]
> > > >
> > > > > > >   Also the master branch should default to a single (or
> > > > > > > maybe 2?) page ring, even if the backend can do 16 whereas
> > > > > > > all the 8.2.X drivers will use all 16 pages (which is why
> > > > > > > you need a heap more grant entries).
> > > > > > >
> > > > > >
> > > > > > can this be tweaked somehow on current 8.2.X drivers? to get
> > > > > > a single page ring? max_ring_page_order on xen_blkback in
> > > > > > dom0?
> > > > >
> > > > > Yes, tweaking the mod param in blkback will do the trick.
> > > > >
> > > > > >
> > > >
> > > > Current debian defaults are:
> > > > log_stats=0
> > > > max_buffer_pages=1024
> > > > max_persistent_grants=1056
> > > > max_queues=4
> > > > max_ring_page_order=4
> > > >
> > > > what would you tweak? max_queues and max_ring_page_order to 1?
> > >
> > > 1 will give you a 2 page ring, which should be fine.
> >
> > Sorry... should have said set max_queues to 1 too. Multi-queue isn't
> > that much use yet.
> >
> >   Paul
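[The tweaks discussed above (max_ring_page_order=1 for a 2^1 = 2 page ring, max_queues=1) can be made persistent with a modprobe options file. A sketch assuming a Debian-style dom0; `write_blkback_conf` is a hypothetical helper, and the conventional destination path is noted in the usage comment:

```shell
# write_blkback_conf FILE: write a modprobe options file pinning
# xen_blkback to a 2-page ring (order 1 => 2^1 pages) and one queue.
# max_ring_page_order and max_queues are real xen_blkback parameters.
write_blkback_conf() {
    cat > "$1" <<'EOF'
options xen_blkback max_ring_page_order=1 max_queues=1
EOF
}

# Usage on dom0 (the path is the usual convention, not mandated);
# takes effect when the module is next loaded, or after a reboot:
# write_blkback_conf /etc/modprobe.d/xen-blkback.conf
```
]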
> >
> > >
> > > >
> > > > [cut]
> > > > > > >   You could try setting up a logo kit yourself and try
> > > > > > > testing XENVBD to see if it passes... that would be useful
> > > > > > > knowledge.
> > > > > >
> > > > > > seems fun. Where can I read on how to set up the logo kit?
> > > > > >
> > > > >
> > > > > See
> > > > > https://docs.microsoft.com/en-us/windows-hardware/test/hlk/windows-hardware-lab-kit
> > > > >
> > > > > > Is there an acceptance test plan that should be run?
> > > > > >
> > > > >
> > > > > I've not used the kit in a while but I believe it should
> > > > > automatically select all the tests relevant to the driver you
> > > > > elect to test (which is XENVBD in this case).
> > > >
> > > > I will read and see what I can do about this. I can sacrifice a few
> > > > evenings for sure.
> > > >
> > >
> > > Cool.
> > >
> > > > >
> > > > > > Is there a list of issues that you'll want to get fixed for
> > > > > > 9.0? Is Citrix interested right now in getting Windows VMs of
> > > > > > their customers running better :)?
> > > > >
> > > > > Indeed Citrix should be interested, but testing and updating
> > > > > the branded drivers has to be prioritized against other things.
> > > > > Whether Citrix wants to update branded drivers does not stop me
> > > > > signing and releasing the Xen Project drivers though... it just
> > > > > means they won't get as much testing, so I'd rather wait... but
> > > > > only if it doesn't take too long.
> > > >
> > > > ech, priorities, resources, deadlines. I'll hook you up on
> > > > LinkedIn :)
> > > >
> > >
> > > :-)
> > >
> > > Cheers,
> > >
> > >   Paul
> > >
> > > > >
> > > > > > Testing Windows VMs on VMware the same way (with VMware's
> > > > > > paravirtual IO) is not stellar anyway; it looks crap when you
> > > > > > compare it to virtio on KVM. And 9.0-dev I'd say would be on
> > > > > > par with the big competitor.
> > > > > >
> > > > > > Funny story, I've tried getting virtio qemu devices running
> > > > > > within a Xen VM, but this is not stable enough. I managed to
> > > > > > get the device to show up in Windows, but didn't manage to
> > > > > > put a filesystem on it under Windows.
> > > > > >
> > > > >
> > > > > A lot of virtio's performance comes from the fact that KVM is
> > > > > a type-2 hypervisor and so the backend always has full
> > > > > privilege over the frontend. This means that QEMU is set up in
> > > > > such a way that it has all of guest memory mapped all the time.
> > > > > Thus virtio has much less overhead, as it does not have to
> > > > > care about things like grant tables.
> > > >
> > > > clear.
> > > >
> > > > --
> > > > Regards
> > > > Jakub Kulesza
> 
> 
> 
> --
> Regards
> Jakub Kulesza
_______________________________________________
win-pv-devel mailing list
win-pv-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/win-pv-devel

 

