Re: [Xen-devel] Re: Poor performance on HVM (kernbench)
So, the problem appears to be a ton of brute-force searches to remove
writable mappings, both during resync and promotion.

My analysis tool is reporting that of the 30 seconds or so in the
trace from xen-unstable, the guest spent a whopping 67% in the
hypervisor:
 * 26% doing resyncs as a result of marking another page out-of-sync
 * 9% promoting pages
 * 27% resyncing as a result of cr3 switches

And almost the entirety of all of those can be attributed to
brute-force searches to remove writable mappings.

(Caveat emptor: My tool was designed to analyze XenServer product
traces, which have a different trace file format than xen-unstable.
I've just taught it to read the xen-unstable trace formats, so the
exact percentages may still be off. But the preponderance of
brute-force searches is unmistakable.)

The good news is that if we can finger the cause of the brute-force
searches, we should be able to reduce all those numbers to
respectable levels; my guess is a total of not more than 5%.

 -George

On Thu, Sep 11, 2008 at 6:25 PM, Gianluca Guida
<gianluca.guida@xxxxxxxxxxxxx> wrote:
> Todd Deshane wrote:
>>
>> Can you also run kernbench on native for comparison?
>
> Here they are, with a two-CPU dom0.
>
> Thu Sep 11 13:03:57 EDT 2008
> 2.6.18.8-xen
> Average Optimal load -j 8 Run (std deviation):
> Elapsed Time 181.51 (0.550318)
> User Time 300.494 (0.965572)
> System Time 54.198 (0.784391)
> Percent CPU 195 (0.707107)
> Context Switches 26611.8 (205.029)
> Sleeps 29778.8 (330.637)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
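
[Editor's note: for readers who want a rough feel for how a breakdown like
the one George quotes can be produced, below is a minimal sketch. It is not
George's actual tool (whose input format isn't shown in this thread); it
assumes the xentrace dump has already been parsed into hypothetical
(event_class, cycles) pairs, and the numbers passed in the example are just
the percentages quoted above.]

    # Minimal sketch, assuming pre-parsed (event_class, cycles) records.
    from collections import defaultdict

    def summarize(events, total_cycles):
        """Total cycles per event class and print each as a share of the trace."""
        buckets = defaultdict(int)
        for name, cycles in events:
            buckets[name] += cycles
        for name, cycles in sorted(buckets.items(), key=lambda kv: -kv[1]):
            print("%-35s %5.1f%%" % (name, 100.0 * cycles / total_cycles))

    # Example with the figures quoted above, treated as cycles out of 100:
    summarize([("resync on out-of-sync mark", 26),
               ("promotion", 9),
               ("resync on cr3 switch", 27)], 100)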