
Re: [PATCH 2/5] x86/PV: fold redundant calls to adjust_guest_l<N>e()


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Mon, 11 Jan 2021 11:36:48 +0100
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>
  • Delivery-date: Mon, 11 Jan 2021 10:37:05 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Tue, Nov 03, 2020 at 11:56:44AM +0100, Jan Beulich wrote:
> At least from an abstract perspective it is quite odd for us to compare
> adjusted old and unadjusted new page table entries when determining
> whether the fast path can be used. This is largely benign because
> FASTPATH_FLAG_WHITELIST covers most of the flags which the adjustments
> may set, and the flags getting set don't affect the outcome of
> get_page_from_l<N>e(). There's one exception: 32-bit L3 entries get
> _PAGE_RW set, but get_page_from_l3e() doesn't allow linear page tables
> to be created at this level for such guests. Apart from this _PAGE_RW
> is unused by get_page_from_l<N>e() (for N > 1), and hence forcing the
> bit on early has no functional effect.
> 
> The main reason for the change, however, is that adjust_guest_l<N>e()
> aren't exactly cheap - both in terms of pure code size and because each
> one has at least one evaluate_nospec() by way of containing
> is_pv_32bit_domain() conditionals.
> 
> Call the functions once ahead of the fast path checks, instead of twice
> after.
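
To make the reordering concrete, here is a stand-alone model of the
two shapes. This is a simplified sketch, not the actual Xen code:
adjust_entry(), fastpath_ok(), mod_entry_old()/mod_entry_new() and the
flag values are made-up stand-ins for adjust_guest_l<N>e(), the
whitelist comparison and the mod_l<N>_entry() structure.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Made-up stand-ins for the real page table flag values. */
#define _PAGE_RW        0x002u
#define _PAGE_ACCESSED  0x020u
#define _PAGE_DIRTY     0x040u

/* Flag differences the fast path is allowed to ignore. */
#define FASTPATH_FLAG_WHITELIST  (_PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_RW)

typedef struct { uint64_t e; } pte_t;

/* Model of adjust_guest_l<N>e(): force certain flags on for PV guests. */
static void adjust_entry(pte_t *pte, bool pv_guest)
{
    if ( pv_guest )             /* stands in for is_pv_32bit_domain() etc. */
        pte->e |= _PAGE_ACCESSED | _PAGE_RW;
}

static bool fastpath_ok(pte_t old, pte_t new)
{
    return !((old.e ^ new.e) & ~(uint64_t)FASTPATH_FLAG_WHITELIST);
}

/* Pre-patch shape: two static call sites; the fast-path check compares
 * the (previously adjusted) old entry against the still-unadjusted new
 * one. */
static void mod_entry_old(pte_t *slot, pte_t nl1e, bool pv_guest)
{
    pte_t ol1e = *slot;

    if ( fastpath_ok(ol1e, nl1e) )
    {
        adjust_entry(&nl1e, pv_guest);
        *slot = nl1e;
        return;
    }
    /* ... get_page_from_l1e() validation would go here; if it fails,
     * the adjustment below is skipped entirely ... */
    adjust_entry(&nl1e, pv_guest);
    *slot = nl1e;
}

/* Patched shape: adjust once up front, so old and new entries are
 * compared on equal terms and neither path needs its own call. */
static void mod_entry_new(pte_t *slot, pte_t nl1e, bool pv_guest)
{
    pte_t ol1e = *slot;

    adjust_entry(&nl1e, pv_guest);

    if ( fastpath_ok(ol1e, nl1e) )
    {
        *slot = nl1e;
        return;
    }
    /* ... validation, then the same write ... */
    *slot = nl1e;
}

int main(void)
{
    pte_t slot = { 0x1000 | _PAGE_ACCESSED | _PAGE_RW };
    pte_t new1 = { 0x1000 }, new2 = new1;

    mod_entry_old(&slot, new1, true);   /* flags-only change: fast path */
    mod_entry_new(&slot, new2, true);
    printf("entry now %#llx\n", (unsigned long long)slot.e);
    return 0;
}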

I guess part of the reasoning for doing it that way was that it avoids
the adjust_guest_l1e call in the slow path when get_page_from_l1e
fails?

In any case, adjust_guest_l1e was only ever executed once per
invocation, from either the fast or the slow path.

> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

Acked-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>

Thanks, Roger.