
Re: [Xen-devel] Xen Performance Results




On 12/22/2017 01:18 PM, George Dunlap wrote:
> On Fri, Dec 22, 2017 at 10:41 AM, Sergej Proskurin
> <proskurin@xxxxxxxxxxxxx> wrote:
>> Hi George,
>>
>> Thank you for your reply.
>>
>> On 12/22/2017 11:26 AM, George Dunlap wrote:
>>> On Thu, Dec 21, 2017 at 2:42 PM, Sergej Proskurin
>>> <proskurin@xxxxxxxxxxxxx> wrote:
>>>> Hi all,
>>>>
>>>> For the sake of completeness: the solution to the issue stated in my
>>>> last email was deactivating Intel's Turbo Boost technology directly in
>>>> UEFI (deactivating Turbo Boost through xenpm was not enough). Apparently
>>>> Turbo Boost affects Linux and KVM differently than Xen, which led to
>>>> the phenomenon in which the benchmark execution on Xen appeared faster
>>>> than on bare metal.
>>>
>>> *Appeared* faster than on bare metal, or *was* faster than on bare metal?
>>>
>>> If the source of the change was the Turbo Boost, it's entirely
>>> possible that the difference is due to the placement of workers on
>>> cpus -- i.e., that Linux's bare metal scheduler makes a worse choice
>>> for this particular workload than Xen's scheduler does.
>>>
>>
>> Given that for this particular benchmark I configured both dom0 and
>> domU to use only one core (in fact, I pinned both domains to the same
>> physical core), I do not believe that the performance increase was due
>> to better placement across the available CPUs. However, I absolutely
>> agree that there might be a difference in how Linux and Xen handle
>> Turbo Boost.
> 
> In which case, *that* may actually be your problem. :-)
> 
> I've got a "scheduler microbenchmark" program I wrote in which
> unikernel workloads do simplistic "burn / sleep" cycles.  There's a
> certain point -- somewhere between 50% busy and 80% busy -- where
> adding more work actually *improves* the performance of existing
> workloads.  Presumably, when the CPU is busy more of the time, the
> microcode puts it into a higher performance state.
> 
> If you pinned them to different sockets, you might actually get a
> result more in line with your expectations.
> 

I see your point. Yet, I just ran the benchmark with domU pinned to a
different physical core, and unfortunately the results were the same as
before. Nevertheless, it is a good hint, thank you :)
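
On the burn/sleep cycles you describe: just to be sure I picture the
workload correctly, I imagine something roughly like the sketch below
(the 70ms/30ms split is only a number I made up for illustration, not
taken from your microbenchmark):

    /* Rough burn/sleep worker: spin on the clock for burn_s seconds,
     * then sleep for 30 ms, forever.  The split here is made up. */
    #include <time.h>

    static double now_sec(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void)
    {
        const double burn_s = 0.070;                 /* busy part   */
        const struct timespec slp = { 0, 30000000 }; /* 30 ms sleep */

        for (;;) {
            double start = now_sec();
            while (now_sec() - start < burn_s)
                ;                      /* burn: keep the core busy */
            nanosleep(&slp, NULL);     /* sleep: let the core idle */
        }
        return 0;
    }

Spinning on the clock (rather than counting iterations) keeps the busy
fraction fixed in wall-clock terms, regardless of the frequency the
core ends up running at.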

Besides, I also believe that if the additional CPU load from dom0
running on the same core had been the issue, we would have seen a
similar effect with KVM, since in my experiment the guest runs on the
same physical core as the underlying Linux host.
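
For completeness, regarding the turbo state on the Linux/KVM side: one
way to double-check it there, assuming the host uses the intel_pstate
driver (with acpi-cpufreq the corresponding knob lives elsewhere), is
to read its no_turbo sysfs file, roughly:

    /* Report whether intel_pstate has turbo disabled (1) or enabled (0). */
    #include <stdio.h>

    int main(void)
    {
        const char *path = "/sys/devices/system/cpu/intel_pstate/no_turbo";
        FILE *f = fopen(path, "r");
        int no_turbo;

        if (!f) {
            perror(path);
            return 1;
        }
        if (fscanf(f, "%d", &no_turbo) != 1) {
            fprintf(stderr, "unexpected format in %s\n", path);
            fclose(f);
            return 1;
        }
        fclose(f);
        printf("turbo %s\n", no_turbo ? "disabled" : "enabled");
        return 0;
    }

That way the turbo state on the bare-metal/KVM side can be recorded
alongside each run.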

Thanks,
~Sergej


