
Re: [PATCH V5 00/22] IOREQ feature (+ virtio-mmio) on Arm

  • To: Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Thu, 28 Jan 2021 18:10:35 +0000
  • Cc: Oleksandr Tyshchenko <olekstysh@xxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>, Paul Durrant <paul@xxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Julien Grall <julien.grall@xxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Jun Nakajima <jun.nakajima@xxxxxxxxx>, Kevin Tian <kevin.tian@xxxxxxxxx>, Tim Deegan <tim@xxxxxxx>, Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Wei Chen <Wei.Chen@xxxxxxx>, Kaly Xin <Kaly.Xin@xxxxxxx>, Artem Mygaiev <joculator@xxxxxxxxx>, Alex Bennée <alex.bennee@xxxxxxxxxx>
  • Delivery-date: Thu, 28 Jan 2021 18:10:58 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 28/01/2021 17:44, Julien Grall wrote:
> On 28/01/2021 17:24, Stefano Stabellini wrote:
>> On Thu, 28 Jan 2021, Julien Grall wrote:
>>> On 25/01/2021 19:08, Oleksandr Tyshchenko wrote:
>>>> Patch series [8] was rebased on recent "staging branch"
>>>> (5e31789 tools/ocaml/libs/xb: Do not crash after xenbus is unmapped)
>>>> and tested on a Renesas Salvator-X board + H3 ES3.0 SoC (Arm64) with
>>>> a virtio-mmio disk backend [9] running in the driver domain and an
>>>> unmodified Linux guest running on the existing virtio-blk driver
>>>> (frontend). No issues were observed. Guest domain 'reboot/destroy'
>>>> use-cases work properly. The patch series was only build-tested on
>>>> x86.
>>> I have done basic testing with a Linux guest on x86 and I didn't
>>> spot any regression.
>>> Unfortunately, I don't have a Windows VM at hand, so I can't confirm
>>> whether there is a regression there. Can anyone give it a try?
>>> On a separate topic, Andrew expressed some concern on IRC about
>>> exposing the bufioreq interface to arches other than x86. I will let
>>> him explain his view here.
>>> Given that IOREQ will be a tech preview on Arm (which implies the
>>> interface is not stable), I think we can defer the discussion past
>>> the freeze.
>> Yes, I agree that we can defer the discussion.
>>> For now, I would suggest checking whether a Device Model is trying
>>> to create an IOREQ server with bufioreq enabled (see
>>> ioreq_server_create()). This would not alleviate Andrew's concern,
>>> but it would at least allow us to judge whether the buffering
>>> concept is going to be used on Arm (I can see some use for the
>>> virtio doorbell).
>>> What do others think?
>> Difficult to say. We don't have any uses today but who knows in the
>> future.
> I have some ideas, but Andrew so far objects to using the existing
> bufioreq interface for good reasons. It uses guest_cmpxchg(), which
> is really expensive.

Worse.  Patch 5 needs to switch cmpxchg() to guest_cmpxchg() to avoid
reintroducing XSA-295, but doesn't.
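
For background on why guest_cmpxchg() is the expensive-but-safe option:
XSA-295 was about a guest livelocking the hypervisor by continually
dirtying memory that Xen operates on atomically. The sketch below is an
illustration only, not Xen's actual implementation: the function and
constant names are hypothetical, and the real Arm fallback path pauses
the domain rather than just copying. It shows the shape of the
mitigation, i.e. bound the atomic retries, then fall back to a
heavyweight path the guest cannot perturb.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for Xen's bounded guest atomics.  A plain
 * cmpxchg loop can spin forever if the guest keeps rewriting the
 * location; bounding the retries and falling back to a path the guest
 * cannot interfere with is what makes guest_cmpxchg() safe - and
 * expensive. */
#define GUEST_ATOMIC_MAX_TRIES  16

static bool fell_back_to_slow_path;

static uint32_t bounded_guest_cmpxchg(uint32_t *ptr, uint32_t old,
                                      uint32_t new)
{
    for ( unsigned int i = 0; i < GUEST_ATOMIC_MAX_TRIES; i++ )
    {
        uint32_t expected = old;

        if ( __atomic_compare_exchange_n(ptr, &expected, new, false,
                                         __ATOMIC_SEQ_CST,
                                         __ATOMIC_SEQ_CST) )
            return old;          /* atomic fast path succeeded */
        if ( expected != old )
            return expected;     /* genuine mismatch: report it */
        /* otherwise: retry (on Arm, an exclusive-monitor failure) */
    }

    /* Slow path: in Xen this is roughly "pause the domain, perform the
     * access, unpause" - guaranteed to terminate regardless of what
     * the guest does. */
    fell_back_to_slow_path = true;
    uint32_t prev = *ptr;
    if ( prev == old )
        *ptr = new;
    return prev;
}
```

A plain cmpxchg() on guest-shared memory has no such bound, which is
why patch 5 reintroduces the XSA-295 pattern.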

>> Maybe for now the important thing is to clarify that the bufioreq
>> interface won't be maintained for backward compatibility on ARM, and
>> could be removed without warning, at least as long as the whole IOREQ
>> feature is a tech preview.
>> In other words, it is not like we are forced to keep bufioreq around
>> forever on ARM.
> That's correct, we are not forced to keep it. But if you only document
> it, then there is a fair chance this will slip past the "Tech Preview".
> Today, there is no caller in the Xen Arm hypervisor which requires
> sending buffered I/O. So a Device Model should not need to say "I want
> to allocate a page and event channel for the buffered I/O".
> The check I suggested serves two purposes:
>   1) Catch any future upstream user of bufioreq
>   2) Avoid the interface being marked supported by mistake

"use bufioreq" is an input to create_ioreq_server.  The easiest fix in
the short term is "if ( !IS_ENABLED(CONFIG_X86) && bufioreq ) return
-EOPNOTSUPP;".

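A minimal standalone sketch of such a guard follows. This is stubbed so
it compiles on its own: the function name is hypothetical, the macro
stands in for IS_ENABLED(CONFIG_X86), and the real check would live in
ioreq_server_create() using Xen's own types.

```c
#include <errno.h>
#include <stdbool.h>

/* Stand-in for IS_ENABLED(CONFIG_X86); pretend we are building the
 * common code for Arm. */
#define BUILDING_FOR_X86  0

/* Hypothetical extract of the bufioreq check in ioreq_server_create():
 * refuse buffered-ioreq requests on non-x86 arches, so any future user
 * of bufioreq on Arm fails loudly instead of silently depending on an
 * interface that may be removed. */
static int check_bufioreq_handling(bool bufioreq)
{
    if ( !BUILDING_FOR_X86 && bufioreq )
        return -EOPNOTSUPP;

    return 0;
}
```

Failing the creation hypercall serves both purposes Julien lists: it
catches future users and prevents the interface being treated as
supported by accident.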
The key problem with the bufioreq interface is that it is a ring with a
non-power-of-2 number of entries.  See c/s b7007bc6f9a45 if the
implications of a non-power-of-2 number of entries aren't immediately clear.
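
To make the problem concrete: the buffered-ioreq ring has
IOREQ_BUFFER_SLOT_NUM == 511 slots in the public ABI. With a power-of-2
size, the read/write indexes can be left free-running and reduced with a
mask, and the 32-bit counter's own overflow is seamless; with 511 slots
they must be reduced with '%', and the counter overflow then tears the
slot sequence. A standalone illustration (helper names are mine, not
Xen's):

```c
#include <assert.h>
#include <stdint.h>

#define POW2_SLOTS  512u   /* hypothetical power-of-2 ring size */
#define ABI_SLOTS   511u   /* IOREQ_BUFFER_SLOT_NUM in the public ABI */

/* Reduce a free-running 32-bit index to a ring slot. */
static uint32_t slot_mask(uint32_t idx) { return idx & (POW2_SLOTS - 1); }
static uint32_t slot_mod(uint32_t idx)  { return idx % ABI_SLOTS; }
```

With 512 slots the slot just before counter overflow is 511 and the
slot just after is 0, i.e. consecutive. With 511 slots the counter
overflow jumps from slot 31 (2^32 - 1 ≡ 31 mod 511) straight to slot 0,
so the producer/consumer arithmetic must special-case the wrap.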



