
Re: [PATCH v20210701 15/40] tools: prepare to allocate saverestore arrays once


  • To: Olaf Hering <olaf@xxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Mon, 5 Jul 2021 14:01:07 +0100
  • Cc: <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Juergen Gross <jgross@xxxxxxxx>
  • Delivery-date: Mon, 05 Jul 2021 13:01:34 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 05/07/2021 12:27, Olaf Hering wrote:
> On Mon, 5 Jul 2021 11:44:30 +0100,
> Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
>
>>> This patch is just preparation; subsequent changes will populate the arrays.
>>>
>>> Once all changes are applied, migration of a busy HVM domU changes as follows:
>>>
>>> Without this series, from sr650 to sr950 (xen-4.15.20201027T173911.16a20963b3 xen_testing):
>>> 2020-10-29 10:23:10.711+0000: xc: show_transfer_rate: 23663128 bytes + 2879563 pages in 55.324905335 sec, 203 MiB/sec: Internal error
>>> 2020-10-29 10:23:35.115+0000: xc: show_transfer_rate: 16829632 bytes + 2097552 pages in 24.401179720 sec, 335 MiB/sec: Internal error
>>> 2020-10-29 10:23:59.436+0000: xc: show_transfer_rate: 16829032 bytes + 2097478 pages in 24.319025928 sec, 336 MiB/sec: Internal error
>>> 2020-10-29 10:24:23.844+0000: xc: show_transfer_rate: 16829024 bytes + 2097477 pages in 24.406992500 sec, 335 MiB/sec: Internal error
>>> 2020-10-29 10:24:48.292+0000: xc: show_transfer_rate: 16828912 bytes + 2097463 pages in 24.446489027 sec, 335 MiB/sec: Internal error
>>> 2020-10-29 10:25:01.816+0000: xc: show_transfer_rate: 16836080 bytes + 2098356 pages in 13.447091818 sec, 609 MiB/sec: Internal error
>>>
>>> With this series, from sr650 to sr950 (xen-4.15.20201027T173911.16a20963b3 xen_unstable):
>>> 2020-10-28 21:26:05.074+0000: xc: show_transfer_rate: 23663128 bytes + 2879563 pages in 52.564054368 sec, 213 MiB/sec: Internal error
>>> 2020-10-28 21:26:23.527+0000: xc: show_transfer_rate: 16830040 bytes + 2097603 pages in 18.450592015 sec, 444 MiB/sec: Internal error
>>> 2020-10-28 21:26:41.926+0000: xc: show_transfer_rate: 16830944 bytes + 2097717 pages in 18.397862306 sec, 445 MiB/sec: Internal error
>>> 2020-10-28 21:27:00.339+0000: xc: show_transfer_rate: 16829176 bytes + 2097498 pages in 18.411973339 sec, 445 MiB/sec: Internal error
>>> 2020-10-28 21:27:18.643+0000: xc: show_transfer_rate: 16828592 bytes + 2097425 pages in 18.303326695 sec, 447 MiB/sec: Internal error
>>> 2020-10-28 21:27:26.289+0000: xc: show_transfer_rate: 16835952 bytes + 2098342 pages in 7.579846749 sec, 1081 MiB/sec: Internal error  
>> These are good numbers, and clearly show that there is some value here,
>> but shouldn't they be in the series header?  They're not terribly
>> relevant to this patch specifically.
> The cover letter is unfortunately not under version control.
> Perhaps there is a way to do it with git notes, but I have never used them.

In the end, we'll want some kind of note in the changelog, but that
wants to be a single line.  It's probably fine to say "Improve migration
performance.  25% better bandwidth when NIC link speed is the
bottleneck, due to optimising the data handling logic."
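
(As a sanity check on the figures above: assuming show_transfer_rate
simply divides the payload, bytes plus 4 KiB per page, by the elapsed
time, the first sample works out as

    23663128 + 2879563 * 4096 = 11818353176 bytes
    11818353176 / 55.324905 s ~= 213.6 MB/s ~= 203 MiB/s

which matches the logged 203 MiB/sec, so the quoted numbers are
internally consistent.)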

>> Also, while I can believe that the first sample is slower than the later
>> ones (in particular, during the first round, we've got to deal with the
>> non-RAM regions too and therefore spend more time making hypercalls),
>> I'm not sure I believe the final sample.  Given the byte/page count, the
>> substantially smaller elapsed time looks suspicious.
> The first one is slower because it has to wait for the receiver to allocate pages.
> But maybe, as you said, there are other aspects as well.
> The last one is always way faster because apparently map/unmap is less costly with a stopped guest.

That's suspicious.  If true, we've got some very wonky behaviour in the
hypervisor...

> Right now the code may reach up to 15 Gbit/s. The next step is to map the domU just once, to reach wire speed.

We can in principle do that in 64-bit toolstacks, for HVM guests.  But
not usefully until we've fixed the fact that Xen has no idea what the
guest physmap is supposed to look like.
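
For illustration, a minimal sketch of a map-once scheme using the
existing libxenforeignmemory API.  The helper name, the gfn list, its
stability, and the batching are all assumptions (and guest ballooning
would invalidate the gfn list); error handling is elided:

    #include <stdlib.h>
    #include <sys/mman.h>
    #include <xenforeignmemory.h>

    /* Hypothetical helper: map a batch of guest frames once, so later
     * copy iterations reuse the mapping instead of map/unmap per batch.
     * The caller owns 'fmem' and eventually releases the mapping with
     * xenforeignmemory_unmap(). */
    static void *map_domain_once(xenforeignmemory_handle *fmem,
                                 uint32_t domid, const xen_pfn_t *gfns,
                                 size_t nr_frames)
    {
        int *errs = calloc(nr_frames, sizeof(*errs));
        void *mapping = NULL;

        if ( errs )
        {
            /* One mapping call covering the whole batch. */
            mapping = xenforeignmemory_map(fmem, domid, PROT_READ,
                                           nr_frames, gfns, errs);
            free(errs);
        }

        return mapping;
    }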

At the moment, the current scheme is a little more resilient to bugs
caused by the guest attempting to balloon during the live phase.

Another area to improve, which can be started now, is to avoid bounce
buffering hypercall data.  Now that we have /dev/xen/hypercall, from
which you can mmap() regular kernel pages, what we want is a simple
memory allocator handing out permanent hypercall buffers, rather than
the internals of every xc_*() hypercall wrapper bouncing the data
(potentially) in both directions.
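
To make that concrete, a minimal sketch of the kind of allocator I
mean, assuming the mmap() semantics of the privcmd-buf device; a real
version would want freeing, locking and growth:

    #include <fcntl.h>
    #include <stddef.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define POOL_PAGES 64
    #define PG_SIZE    4096UL

    static unsigned char *pool;
    static size_t pool_used; /* in pages */

    /* mmap() a permanent pool of hypercall-safe pages from
     * /dev/xen/hypercall, once, at start of day. */
    static int hcall_pool_init(void)
    {
        int fd = open("/dev/xen/hypercall", O_RDWR | O_CLOEXEC);

        if ( fd < 0 )
            return -1;

        pool = mmap(NULL, POOL_PAGES * PG_SIZE, PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, 0);
        close(fd); /* The mapping stays valid after close(). */

        return pool == MAP_FAILED ? -1 : 0;
    }

    /* Trivial bump allocator: hand out page-aligned buffers which can
     * be passed to the hypervisor directly, with no bouncing. */
    static void *hcall_alloc(size_t bytes)
    {
        size_t pages = (bytes + PG_SIZE - 1) / PG_SIZE;
        void *p;

        if ( !pool || pool_used + pages > POOL_PAGES )
            return NULL;

        p = pool + pool_used * PG_SIZE;
        pool_used += pages;

        return p;
    }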

>
>> Are these observations with an otherwise idle dom0?
> Yes. Idle dom0 and a domU busy with touching its memory.
>
> Unfortunately, I'm not able to prove the reported gain with the systems I have today.
> I'm waiting for preparation of different hardware; right now I have only a pair of CoyotePass and WilsonCity.
>
> I'm sure there were NUMA effects involved. Last year's libvirt was unable to properly pin vcpus. If I pin all the involved memory to node#0 there is some jitter in the logged numbers, but no obvious improvement. The first iteration is slightly faster, but that is it.

Oh - so the speedup might not be from reduced data handling?

Avoiding unnecessary data copies is clearly going to improve things,
even if it isn't 25%.

~Andrew


