
Re: [Xen-devel] [Question] PARSEC benchmark has smaller execution time in VM than in native?



On Mon, Feb 29, 2016 at 12:59 PM, Konrad Rzeszutek Wilk
<konrad.wilk@xxxxxxxxxx> wrote:
>> > Hey!
>> >
>> > CC-ing Elena.
>>
>> I think you forgot to CC her..
>> Anyway, let's CC her now... :-)
>>
>> >
>> >> We are measuring the execution time between native machine environment
>> >> and xen virtualization environment using PARSEC Benchmark [1].
>> >>
>> >> In the virtualization environment, we run a domU with three VCPUs,
>> >> each of them pinned to a core; we pin dom0 to another core that is
>> >> not used by the domU.
>> >>
>> >> Inside the Linux guest in domU and in the native environment, we
>> >> used cpusets to isolate a core (or VCPU) for the system processes
>> >> and another core for the benchmark processes. We also configured the
>> >> Linux boot command line with the isolcpus= option to shield the
>> >> benchmark core from other unnecessary processes.
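(For reference, the isolation setup amounts to roughly the following; the core number and the benchmark script name are just examples:)

```shell
# Kernel command line (native and domU): reserve core 2 for the benchmark.
#   isolcpus=2

# Use cpusets (cset from the cpuset package) to move everything else,
# including kernel threads, off the shielded core:
cset shield --cpu=2 --kthread=on

# Run the benchmark inside the shielded cpuset
# (run_benchmark.sh is a placeholder for the PARSEC invocation):
cset shield --exec -- ./run_benchmark.sh
```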
>> >
>> > You may want to just offline them and also boot the machine with NUMA
>> > disabled.
>>
>> Right, the machine is booted up with NUMA disabled.
>> We will offline the unnecessary cores then.
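Concretely, we plan to do roughly the following (the domain name and core numbers are just examples):

```shell
# On the Xen host: pin each domU vCPU to its own physical core.
xl vcpu-pin domU 0 1
xl vcpu-pin domU 1 2
xl vcpu-pin domU 2 3
# Pin dom0's vCPU to a core not used by the domU.
xl vcpu-pin Domain-0 0 0

# In the native environment: take the unused cores offline entirely,
# so they cannot interfere with the benchmark core.
echo 0 > /sys/devices/system/cpu/cpu4/online
echo 0 > /sys/devices/system/cpu/cpu5/online
```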
>>
>> >
>> >>
>> >> We expected the execution time of the benchmarks in the Xen
>> >> virtualization environment to be larger than in the native machine
>> >> environment. However, the evaluation gave us the opposite result.
>> >>
>> >> Below is the evaluation data for the canneal and streamcluster benchmarks:
>> >>
>> >> Benchmark: canneal, input=simlarge, conf=gcc-serial
>> >> Native: 6.387s
>> >> Virtualization: 5.890s
>> >>
>> >> Benchmark: streamcluster, input=simlarge, conf=gcc-serial
>> >> Native: 5.276s
>> >> Virtualization: 5.240s
>> >>
>> >> Is there anything wrong with our evaluation that leads to these
>> >> abnormal performance results?
>> >
>> > Nothing is wrong. Virtualization is naturally faster than baremetal!
>> >
>> > :-)
>> >
>> > No clue sadly.
>>
>> Ah-ha. This is really surprising to me.... Why would adding one more
>> layer speed the system up? Unless the virtualization environment
>> disables some services that run natively and interfere with the
>> benchmark.
>>
>> If virtualization were naturally faster than baremetal, why do some
>> experiments show that virtualization introduces overhead?
>
> Elena told me that there was a weird regression in Linux 4.1 - where
> CPU-burning workloads were _slower_ on baremetal than as guests.

Hi Elena,
Would you mind sharing some of your experience of how you found the
real reason? Did you use a particular tool or methodology to pin down
why CPU-burning workloads were _slower_ on baremetal than as guests?
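On our side, we were thinking of comparing hardware and software event counts for the same workload in both environments with perf, roughly like this (the event list and the benchmark invocation are just examples):

```shell
# Run the benchmark on the shielded core and collect event counts;
# repeat in both the native and the virtualized environment
# (perf must be available inside the guest as well).
perf stat -e cycles,instructions,cache-misses,context-switches \
    taskset -c 2 ./streamcluster ...
```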



>
> Updating to a later kernel fixed that - one could then see that
> baremetal was faster than (or on par with) the guest.

Thank you very much, Konrad! We are giving it a shot. :-D

Best Regards,

Meng

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

