Re: [Xen-devel] [PATCHv11 4/4] gnttab: use per-VCPU maptrack free lists
At 15:51 +0100 on 05 Jun (1433519478), Jan Beulich wrote:
> >>> On 02.06.15 at 18:26, <david.vrabel@xxxxxxxxxx> wrote:
> > Performance analysis of aggregate network throughput with many VMs
> > shows that performance is significantly limited by contention on the
> > maptrack lock when obtaining/releasing maptrack handles from the free
> > list.
> >
> > Instead of a single free list, use a per-VCPU list. This avoids any
> > contention when obtaining a handle. Handles must be released back to
> > their original list, and since this may occur on a different VCPU
> > there is some contention on the destination VCPU's free list tail
> > pointer (but this is much better than a per-domain lock).
> >
> > Increase the default maximum number of maptrack frames by 4 times
> > because: a) struct grant_mapping is now 16 bytes (instead of 8); and
> > b) a guest may not evenly distribute all the grant map operations
> > across the VCPUs (meaning some VCPUs need more maptrack entries than
> > others).
> >
> > Signed-off-by: Malcolm Crossley <malcolm.crossley@xxxxxxxxxx>
> > Signed-off-by: David Vrabel <david.vrabel@xxxxxxxxxx>
> > Acked-by: Tim Deegan <tim@xxxxxxx>
>
> What version was that ack given for?

v7, IIRC.

> Iirc before both of these changes, and the v10 ones imo should have
> invalidated it. Tim, I'm particularly trying to understand whether
> you're okay with the original's (potentially even heavier) resource
> use and/or this version's (risking running out of maptrack entries
> _much_ earlier than currently).

The concern with the earlier version being that the maximum maptrack
limit gets a lot higher with many vcpus?  I was OK with that.  There
are a lot of things that scale with #vcpus, and xenheap pages are not
particularly scarce any more.  So let's say I don't find one 128-vcpu
guest much different from 128 1-vcpu guests for this purpose.

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
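[Editorial note: for illustration, here is a minimal C sketch of the
scheme the commit message describes: each VCPU owns a free list, so
allocation pops from the local list with no contention, while a freed
handle is appended back to its *original* VCPU's list tail under a
small per-list lock. All names and structures below (vcpu_maptrack,
mt_alloc_handle, mt_free_handle, MT_INVALID_HANDLE) are hypothetical
and not the actual Xen grant-table code; the "keep one entry on the
list" rule is just one way to keep the tail pointer valid.]

  #include <stdint.h>
  #include <pthread.h>

  #define MT_INVALID_HANDLE ((uint32_t)~0u)

  struct maptrack_entry {
      uint32_t vcpu;  /* owning VCPU: the handle returns to this list */
      uint32_t next;  /* next free handle, or MT_INVALID_HANDLE */
      /* ... mapping state (domid, gref, flags) would live here ... */
  };

  struct vcpu_maptrack {
      uint32_t head;             /* popped only by the owning VCPU */
      uint32_t tail;             /* appended to by any VCPU */
      pthread_mutex_t tail_lock; /* initialised at VCPU setup */
  };

  /* Pop a handle from the current VCPU's own free list: since only
   * this VCPU ever touches 'head', no lock is needed here. */
  static uint32_t mt_alloc_handle(struct vcpu_maptrack *v,
                                  struct maptrack_entry *table)
  {
      uint32_t h = v->head;

      /* Never pop the last entry, so 'tail' always addresses a live
       * entry that remote frees can safely link after. */
      if (h == MT_INVALID_HANDLE || h == v->tail)
          return MT_INVALID_HANDLE;  /* caller must grow the table */
      v->head = table[h].next;
      return h;
  }

  /* Return a handle to its original VCPU's list.  This may run on a
   * different VCPU, so the destination tail pointer is the only point
   * of cross-VCPU contention. */
  static void mt_free_handle(struct vcpu_maptrack *vcpus,
                             struct maptrack_entry *table, uint32_t h)
  {
      struct vcpu_maptrack *owner = &vcpus[table[h].vcpu];

      table[h].next = MT_INVALID_HANDLE;
      pthread_mutex_lock(&owner->tail_lock);
      table[owner->tail].next = h;   /* link after current tail */
      owner->tail = h;
      pthread_mutex_unlock(&owner->tail_lock);
  }

Keeping at least one entry on each list means the tail pointer always
addresses a live entry, so a remote free never races with a list
emptying out; the cost is one permanently unusable handle per VCPU.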