
Re: [PATCH 4/8] xen/common: Split tlb-clock.h out of mm.h


  • To: Nicola Vetrini <nicola.vetrini@xxxxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Thu, 13 Mar 2025 19:55:03 +0000
  • Cc: Jan Beulich <jbeulich@xxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Oleksii Kurochko <oleksii.kurochko@xxxxxxxxx>, Shawn Anastasio <sanastasio@xxxxxxxxxxxxxxxxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Thu, 13 Mar 2025 19:55:19 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 13/03/2025 7:43 pm, Nicola Vetrini wrote:
> On 2025-03-13 14:35, Andrew Cooper wrote:
>> On 13/03/2025 12:59 pm, Jan Beulich wrote:
>>> On 12.03.2025 18:45, Andrew Cooper wrote:
>>>> xen/mm.h includes asm/tlbflush.h almost at the end, which creates a
>>>> horrible tangle.  This is in order to provide two common files with
>>>> an abstraction over the x86-specific TLB clock logic.
>>>>
>>>> First, introduce CONFIG_HAS_TLB_CLOCK, selected by x86 only.  Next,
>>>> introduce xen/tlb-clock.h, providing empty stubs, and include this
>>>> into memory.c and page_alloc.c
>>>>
>>>> No functional change.
>>>>
>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>>>> ---
>>>> CC: Anthony PERARD <anthony.perard@xxxxxxxxxx>
>>>> CC: Michal Orzel <michal.orzel@xxxxxxx>
>>>> CC: Jan Beulich <jbeulich@xxxxxxxx>
>>>> CC: Julien Grall <julien@xxxxxxx>
>>>> CC: Roger Pau Monné <roger.pau@xxxxxxxxxx>
>>>> CC: Stefano Stabellini <sstabellini@xxxxxxxxxx>
>>>> CC: Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
>>>> CC: Bertrand Marquis <bertrand.marquis@xxxxxxx>
>>>> CC: Oleksii Kurochko <oleksii.kurochko@xxxxxxxxx>
>>>> CC: Shawn Anastasio <sanastasio@xxxxxxxxxxxxxxxxxxxxx>
>>>>
>>>> There is still a mess here with the common vs x86 split, but it's
>>>> better contained than before.
>>>> ---
>>>>  xen/arch/x86/Kconfig        |  1 +
>>>>  xen/common/Kconfig          |  3 +++
>>>>  xen/common/memory.c         |  1 +
>>>>  xen/common/page_alloc.c     |  1 +
>>>>  xen/include/xen/mm.h        | 27 --------------------
>>>>  xen/include/xen/tlb-clock.h | 49 +++++++++++++++++++++++++++++++++++++
>>>>  6 files changed, 55 insertions(+), 27 deletions(-)
>>>>  create mode 100644 xen/include/xen/tlb-clock.h
>>>>
>
>
>>> However, see below.
>>>
>>>> +        arch_flush_tlb_mask(&mask);
>>>> +    }
>>>> +}
>>>> +
>>>> +#else /* !CONFIG_HAS_TLB_CLOCK */
>>>> +
>>>> +struct page_info;
>>>> +static inline void accumulate_tlbflush(
>>>> +    bool *need_tlbflush, const struct page_info *page,
>>>> +    uint32_t *tlbflush_timestamp) {}
>>>> +static inline void filtered_flush_tlb_mask(uint32_t tlbflush_timestamp) {}
>>> Is doing nothing here correct?
>>
>> Yeah, it's not, but this only occurred to me after sending the series.
>>
>> Interestingly, CI is green across the board for ARM, which suggests to
>> me that this logic isn't getting a workout.
>>
>>>  mark_page_free() can set a page's
>>> ->u.free.need_tlbflush. And with that flag set the full
>>>
>>> static inline void accumulate_tlbflush(
>>>     bool *need_tlbflush, const struct page_info *page,
>>>     uint32_t *tlbflush_timestamp)
>>> {
>>>     if ( page->u.free.need_tlbflush &&
>>>          page->tlbflush_timestamp <= tlbflush_current_time() &&
>>>          (!*need_tlbflush ||
>>>           page->tlbflush_timestamp > *tlbflush_timestamp) )
>>>     {
>>>         *need_tlbflush = true;
>>>         *tlbflush_timestamp = page->tlbflush_timestamp;
>>>     }
>>> }
>>>
>>> reduces to (considering that tlbflush_current_time() resolves to
>>> constant 0, which also implies every page's ->tlbflush_timestamp is
>>> only ever 0)
>>>
>>> static inline void accumulate_tlbflush(
>>>     bool *need_tlbflush, const struct page_info *page,
>>>     uint32_t *tlbflush_timestamp)
>>> {
>>>     if ( !*need_tlbflush )
>>>         *need_tlbflush = true;
>>> }
>>>
>>> which means a not-stubbed-out filtered_flush_tlb_mask(), with
>>> tlbflush_filter() doing nothing, would actually invoke
>>> arch_flush_tlb_mask() (with all online CPUs set in the mask) when
>>> called.  And arch_flush_tlb_mask() isn't a no-op on Arm.
>>
>> Yes.  Sadly, fixing this (without Eclair complaining in the middle of
>> the series) isn't as easy as I'd hoped.
>>
>
> Hi Andrew,
>
> I didn't quite follow the whole thread (been busy the last couple of
> days), but could you explain briefly what's the issue here? Just a
> link to a failing pipeline should be fine as well.

There isn't one.
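
The actual issue: with the stubs as posted, filtered_flush_tlb_mask()
becomes a no-op on !HAS_TLB_CLOCK architectures, so the flush which Arm
currently gets via the non-stubbed common path silently disappears.  A
behaviour-preserving stub would need to look roughly like this (untested
sketch only; it assumes arch_flush_tlb_mask() is visible to common code,
which is exactly the snag below):

static inline void accumulate_tlbflush(
    bool *need_tlbflush, const struct page_info *page,
    uint32_t *tlbflush_timestamp)
{
    /* No TLB clock: any freed page still needing a flush forces one. */
    if ( page->u.free.need_tlbflush )
        *need_tlbflush = true;
}

static inline void filtered_flush_tlb_mask(uint32_t tlbflush_timestamp)
{
    /* No clock to filter against, so flush every online CPU. */
    cpumask_t mask;

    cpumask_copy(&mask, &cpu_online_map);
    arch_flush_tlb_mask(&mask);
}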

But to untangle this the easy way, I'd need to have a duplicate
declaration for arch_flush_tlb_mask() for a patch or two.

Which I know Eclair would complain about, and therefore I need to find a
different way to untangle it.
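
To illustrate (sketch only; header name and signature approximated from
the call site above), the transitional state would be something like:

/* arch header, e.g. asm/flushtlb.h -- existing declaration */
void arch_flush_tlb_mask(const cpumask_t *mask);

/* xen/tlb-clock.h -- would need the same declaration so the common stub
 * can call it, duplicating the one in the arch header */
void arch_flush_tlb_mask(const cpumask_t *mask);

i.e. the same external function declared in two different files, which is
exactly the kind of duplicate declaration Eclair flags.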

~Andrew
