
Re: [PATCH 2/4] x86/svm: cleanup svm.h


  • To: Xenia Ragiadakou <burzalodowa@xxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Tue, 21 Feb 2023 17:24:38 +0000
  • Cc: Jan Beulich <jbeulich@xxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Tue, 21 Feb 2023 17:24:53 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 21/02/2023 7:58 am, Xenia Ragiadakou wrote:
>
> On 2/21/23 01:08, Andrew Cooper wrote:
>> On 17/02/2023 6:48 pm, Xenia Ragiadakou wrote:
>>> Remove the forward declaration of struct vcpu because it is not used.
>>
>> Huh, turns out that was my fault in c/s b158e72abe, shortly after I
>> introduced them in the first place.
>>
>> Also, looking into that, there's one legitimate use of svm.h from
>> outside, svm_load_segs*(), which means we can't make all the headers
>> local.
>>
>> But still, most of svm.h shouldn't be includable in the rest of Xen.
>> Perhaps we can make a separate dedicated header for just this.
>>
>> [edit]  And svm_{domain,vcpu}.  Perhaps we want a brand new
>> include/asm/hvm/svm.h with only the things needed elsewhere.
>
> This can be done as part of the series aiming to make svm/vmx support
> configurable.

Ok, that's fine.

Honestly, there's a lot of cleanup which can be done.  We probably want
to end up making a number of $foo-types.h headers so we can construct
struct vcpu/domain without xen/sched.h including the majority of Xen in
one fell swoop.
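
As a concrete illustration, a minimal sketch of what such a split could
look like (the paths and field names here are illustrative, not Xen's
actual layout):

    /* asm/hvm/svm/svm-types.h: bare type definitions only, so that
     * xen/sched.h can embed the structure without dragging in the
     * rest of the SVM implementation. */
    struct vmcb_struct;                 /* defined elsewhere */

    struct svm_vcpu {
        struct vmcb_struct *vmcb;       /* host mapping of the VMCB */
        uint64_t vmcb_pa;               /* machine address of the VMCB */
        /* ... */
    };

    /* asm/hvm/svm/svm.h: private to the SVM code.  Pulls in the types
     * header plus everything the implementation needs, and is never
     * included from common code. */
    #include <asm/hvm/svm/svm-types.h>

The handful of symbols genuinely needed elsewhere (svm_load_segs*() and
friends) would then get their own small public header.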

>
>>
>> That said, we do need to stea^W borrow adaptive PLE, and make the
>
> I don't understand what you mean by "stea^W borrow adaptive PLE".

Pause Loop Exiting is tricky to get right.

The common expectation is that PLE hits in a spinlock or other mutual
exclusion primitive.

It is generally good to let the vCPU spin for a bit, in the expectation
that the other vCPU holding the lock will release it in a timely
fashion.  Spinning for a few iterations (even a few hundred) is far
lower overhead than taking a vmexit.
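
To make that concrete, here is roughly the loop that PLE is watching
for, written as a plain C11 test-and-set lock (a sketch; a real guest's
lock primitive will be more sophisticated):

    #include <stdatomic.h>
    #include <immintrin.h>              /* _mm_pause() */

    typedef struct {
        atomic_flag locked;             /* init with ATOMIC_FLAG_INIT */
    } spinlock_t;

    static void spin_lock(spinlock_t *l)
    {
        /* Each _mm_pause() emits a PAUSE instruction.  Hardware PLE
         * counts a run of PAUSEs with no forward progress, and forces
         * a vmexit once the configured window is exceeded. */
        while ( atomic_flag_test_and_set_explicit(&l->locked,
                                                  memory_order_acquire) )
            _mm_pause();
    }

    static void spin_unlock(spinlock_t *l)
    {
        atomic_flag_clear_explicit(&l->locked, memory_order_release);
    }

If the holder drops the lock within a few hundred PAUSEs, the acquire
completes without the hypervisor ever getting involved.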

But if the other vCPU isn't executing, it can never release the lock,
and letting the current vCPU spin does double damage: it consumes the
domain's scheduler credit, which in turn pushes out the reschedule of
the vCPU that's actually holding the lock.  (This is why paravirt
spinlocks are so useful in virt.  If you get them right, only the vCPUs
that can usefully make progress towards releasing the lock actually
execute.)
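
The paravirt scheme is roughly the following (a hand-wavy sketch;
try_lock(), unlock(), cpu_relax() and the hypervisor_*() calls are
stand-ins for the real primitives and hypercalls, not actual
interfaces):

    #define SPIN_THRESHOLD 1024         /* spin budget before blocking */

    static void pv_spin_lock(spinlock_t *l)
    {
        for ( unsigned int i = 0; i < SPIN_THRESHOLD; i++ )
        {
            if ( try_lock(l) )
                return;
            cpu_relax();                /* PAUSE */
        }

        /* Out of budget: record ourselves as a waiter and ask the
         * hypervisor to deschedule this vCPU until we are kicked. */
        hypervisor_block_on(l);
    }

    static void pv_spin_unlock(spinlock_t *l)
    {
        unlock(l);

        /* Wake exactly the vCPU next in line, so only a vCPU that can
         * make forward progress gets scheduled. */
        hypervisor_kick_waiter(l);
    }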


For overall system performance, there is a tradeoff between how long
you let a vCPU spin and the point at which it's better to force that
vCPU to yield.  The crossover depends on the typical spinlock
contention inside the guest and on overall system busyness, both of
which vary over time.

Picking fixed numbers for PLE is better than not having PLE in the
first place, but only just.  There is an algorithm called adaptive-PLE
which re-tunes the PLE window at runtime, tracking guest contention and
host load, to optimise overall system performance.
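
The shape of it is roughly this (a sketch of the general idea, loosely
modelled on what KVM does; the constants, hook points and factor of two
are illustrative):

    #define PLE_WINDOW_MIN 4096
    #define PLE_WINDOW_MAX (512u * 1024)

    static unsigned int ple_window = PLE_WINDOW_MIN;

    /* A PLE vmexit fired: the guest spun past the window.  Assume the
     * window is too tight for the guest's current contention pattern
     * and widen it, so lightly-contended guests aren't penalised. */
    static void ple_window_grow(void)
    {
        ple_window = (ple_window * 2 > PLE_WINDOW_MAX)
                     ? PLE_WINDOW_MAX : ple_window * 2;
    }

    /* The host is overcommitted (this vCPU just got preempted), so
     * yielding early is the better trade: narrow the window. */
    static void ple_window_shrink(void)
    {
        ple_window = (ple_window / 2 < PLE_WINDOW_MIN)
                     ? PLE_WINDOW_MIN : ple_window / 2;
    }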

~Andrew



 

