Re: [Xen-devel] VM memory allocation speed with cs 26056
On 13/11/2012 16:13, "Dan Magenheimer" <dan.magenheimer@xxxxxxxxxx> wrote:

>> I did it a second time. It seems the result (attached) is promising.
>>
>> I think the strange result is due to the order of testing:
>> start_physical_machine -> test_hvm -> test_pvm.
>>
>> This time, I did: start_physical_machine -> test_pvm -> test_hvm.
>>
>> You can see the pvm memory allocation speed is not affected by your
>> patch this time.
>>
>> So I believe this patch is excellent now.
>
> I don't know about Keir's opinion, but to me the performance dependency
> on ordering is even more bizarre and still calls into question the
> acceptability of the patch. Customers aren't going to like being told
> that, for best results, they should launch all PV domains before their
> HVM domains.

Yes, it's very odd. I would want to see more runs to be convinced that
nothing else weird is going on and affecting the results.

If it is something to do with the movement of the TLB-flush filtering
logic, it may just be a tiny difference being amplified by the vast
number of times alloc_heap_pages() gets called. We should be able to
actually *improve* PV build performance by pulling the flush filtering
out of alloc_heap_pages() and into populate_physmap() and
increase_reservation() (see the sketch below).

> Is scrubbing still/ever done lazily? That might explain some of the
> weirdness.

No lazy scrubbing any more.

 -- Keir
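To make the batching idea concrete: the suggestion is that the hot
allocator path should stop filtering and flushing per page, and the
callers should instead accumulate one CPU mask over the whole
allocation and issue a single flush at the end. The following is a
minimal standalone C sketch of that pattern only; the types and helpers
here (page_t, tlbflush_filter, flush_tlb_mask, cpu_flush_time, the
timestamp values) are simplified stand-ins for illustration, not the
real Xen interfaces.

/*
 * Sketch: caller-side TLB-flush batching. A CPU may hold stale TLB
 * entries for a page if it has not flushed since the page was freed,
 * i.e. if its last flush time predates the page's free timestamp.
 */
#include <stdint.h>
#include <stdio.h>

#define NR_CPUS 8

typedef uint64_t cpumask_t;              /* one bit per CPU */

typedef struct {
    uint32_t tlbflush_timestamp;         /* clock value when page was freed */
} page_t;

/* Hypothetical per-CPU record of the last TLB-flush time. */
static uint32_t cpu_flush_time[NR_CPUS] = {
    95, 99, 97, 100, 96, 98, 100, 94
};

/* Keep only CPUs whose last flush predates the page's free timestamp. */
static cpumask_t tlbflush_filter(cpumask_t mask, uint32_t page_timestamp)
{
    for (int cpu = 0; cpu < NR_CPUS; cpu++)
        if (cpu_flush_time[cpu] >= page_timestamp)
            mask &= ~(1ULL << cpu);
    return mask;
}

static void flush_tlb_mask(cpumask_t mask)
{
    printf("flushing TLBs on CPU mask %#llx\n", (unsigned long long)mask);
}

/* Allocator stripped of flush logic: it only reports the timestamp. */
static uint32_t alloc_heap_page(page_t *pg)
{
    return pg->tlbflush_timestamp;
}

/* Caller-side batching, roughly what populate_physmap might do. */
static void populate_physmap(page_t *pages, int nr)
{
    cpumask_t need_flush = 0;

    for (int i = 0; i < nr; i++) {
        uint32_t t = alloc_heap_page(&pages[i]);
        /* Accumulate CPUs that may still map this page stale. */
        need_flush |= tlbflush_filter((1ULL << NR_CPUS) - 1, t);
    }

    if (need_flush)
        flush_tlb_mask(need_flush);      /* one flush, not one per page */
}

int main(void)
{
    page_t pages[4] = { {96}, {98}, {100}, {97} };
    populate_physmap(pages, 4);
    return 0;
}

The point of the restructuring is visible in populate_physmap(): the
per-page work shrinks to a mask update, and the expensive flush (an IPI
burst on real hardware) happens once per hypercall rather than once per
page, which is why it could improve rather than just preserve PV domain
build performance.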