
Re: [PATCH for-4.21 01/10] x86/HPET: limit channel changes


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Thu, 16 Oct 2025 17:25:15 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Oleksii Kurochko <oleksii.kurochko@xxxxxxxxx>
  • Delivery-date: Thu, 16 Oct 2025 15:25:32 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Thu, Oct 16, 2025 at 05:16:07PM +0200, Jan Beulich wrote:
> On 16.10.2025 17:07, Roger Pau Monné wrote:
> > On Thu, Oct 16, 2025 at 01:47:38PM +0200, Jan Beulich wrote:
> >> On 16.10.2025 12:24, Roger Pau Monné wrote:
> >>> On Thu, Oct 16, 2025 at 09:31:21AM +0200, Jan Beulich wrote:
> >>>> @@ -454,9 +456,21 @@ static struct hpet_event_channel *hpet_g
> >>>>      if ( num_hpets_used >= nr_cpu_ids )
> >>>>          return &hpet_events[cpu];
> >>>>  
> >>>> +    /*
> >>>> +     * Try the least recently used channel first.  It may still have its IRQ's
> >>>> +     * affinity set to the desired CPU.  This way we also limit having multiple
> >>>> +     * of our IRQs raised on the same CPU, in possibly a nested manner.
> >>>> +     */
> >>>> +    ch = per_cpu(lru_channel, cpu);
> >>>> +    if ( ch && !test_and_set_bit(HPET_EVT_USED_BIT, &ch->flags) )
> >>>> +    {
> >>>> +        ch->cpu = cpu;
> >>>> +        return ch;
> >>>> +    }
> >>>> +
> >>>> +    /* Then look for an unused channel. */
> >>>>      next = arch_fetch_and_add(&next_channel, 1) % num_hpets_used;
> >>>>  
> >>>> -    /* try unused channel first */
> >>>>      for ( i = next; i < next + num_hpets_used; i++ )
> >>>>      {
> >>>>          ch = &hpet_events[i % num_hpets_used];
> >>>> @@ -479,6 +493,8 @@ static void set_channel_irq_affinity(str
> >>>>  {
> >>>>      struct irq_desc *desc = irq_to_desc(ch->msi.irq);
> >>>>  
> >>>> +    per_cpu(lru_channel, ch->cpu) = ch;
> >>>> +
> >>>>      ASSERT(!local_irq_is_enabled());
> >>>>      spin_lock(&desc->lock);
> >>>>      hpet_msi_mask(desc);
> >>>
> >>> Maybe I'm missing the point here, but you reset the MSI affinity here
> >>> anyway, so is there much point in attempting to re-use the same channel
> >>> when Xen still unconditionally goes through the process of setting the
> >>> affinity?
> >>
> >> While we're still using normal IRQs, there's a real benefit: we can re-use
> >> the same vector (since we stay on the same CPU), and hence we save an IRQ
> >> migration (the main source of nested IRQs, according to my observations).
> > 
> > Hm, I see.  You short-circuit all the logic in _assign_irq_vector().
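Concretely, the saving comes from the early path in vector assignment: if the
IRQ already has a vector whose CPU set covers the requested destination, that
vector is simply kept and no migration (and no move-cleanup work) is queued.
A loose sketch of that shape, with approximated field names rather than the
actual _assign_irq_vector() body:

    /* Sketch only: the early exit that makes staying on the same CPU cheap. */
    static int assign_irq_vector_sketch(struct irq_desc *desc,
                                        const cpumask_t *mask)
    {
        if ( desc->arch.vector > 0 &&
             cpumask_intersects(mask, desc->arch.cpu_mask) )
            return 0;    /* current vector still fits: no migration needed */

        /*
         * Otherwise a fresh vector on a CPU in @mask would be allocated and
         * the old one scheduled for cleanup - the costly path being avoided.
         */
        return -EAGAIN;  /* placeholder for the slow path */
    }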
> > 
> >> We could actually do even better by avoiding the mask/unmask pair there,
> >> which would prevent triggering the "immediate" IRQ that I (for now) see as
> >> the only explanation for the large number of "early" IRQs that I observe
> >> on (at least) Intel hardware. That would require doing the msg.dest32
> >> check earlier, but otherwise looks feasible. (Actually, the unmask would
> >> still be necessary, in case we're called with the channel already masked.)
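To illustrate what "doing the msg.dest32 check earlier" could look like: bail
out of the mask/reprogram step in set_channel_irq_affinity() when the MSI
already targets the channel's CPU, while keeping the unmask unconditional for
the reason given above.  A sketch under those assumptions (cpu_physical_id()
standing in for "APIC ID of ch->cpu"; exact field names approximate, not a
tested patch):

    static void set_channel_irq_affinity(struct hpet_event_channel *ch)
    {
        struct irq_desc *desc = irq_to_desc(ch->msi.irq);
        bool same_dest = ch->msi.msg.dest32 == cpu_physical_id(ch->cpu);

        per_cpu(lru_channel, ch->cpu) = ch;

        ASSERT(!local_irq_is_enabled());
        spin_lock(&desc->lock);

        if ( !same_dest )
        {
            hpet_msi_mask(desc);
            /* ... reprogram the MSI destination/vector as before ... */
        }

        /* Unmask unconditionally: the channel may already be masked on entry. */
        hpet_msi_unmask(desc);

        spin_unlock(&desc->lock);
        /* ... remainder as in the current function ... */
    }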
> > 
> > Checking against .dest32 seems a bit crude; I would perhaps prefer to
> > slightly modify hpet_attach_channel() to notice when ch->cpu == cpu and
> > avoid the call to set_channel_irq_affinity()?
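Sketched very loosely, that suggestion amounts to an early-out of this shape
inside hpet_attach_channel() (hypothetical only; as the reply below explains,
the condition could never be true there):

    /* Hypothetical early-out in hpet_attach_channel() - sketch only. */
    if ( ch->cpu == cpu )
        return;                 /* affinity already correct, skip the reset */

    ch->cpu = cpu;
    set_channel_irq_affinity(ch);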
> 
> That would be an always-false condition, wouldn't it? "attach" and "detach"
> are used strictly in pairs, and after "detach" ch->cpu != cpu.

I see: we set ch->cpu = -1 when the channel is really detached, as
opposed to merely migrated to a different CPU.  I haven't looked, but I
assume leaving the previous CPU in ch->cpu would cause issues
elsewhere.
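For reference, the detach-side behaviour being described, paraphrased from
memory rather than quoted from the tree (locking omitted, field names
approximate): when the last CPU leaves a channel, the channel is released and
ch->cpu is invalidated; otherwise ownership and the MSI affinity move to one
of the remaining CPUs.

    /* Paraphrased sketch of the detach path discussed above. */
    static void hpet_detach_channel_sketch(unsigned int cpu,
                                           struct hpet_event_channel *ch)
    {
        unsigned int next;

        if ( ch->cpu != cpu )
            return;                        /* not the owner - nothing to do */

        next = cpumask_first(ch->cpumask); /* any CPU still using the channel? */
        if ( next >= nr_cpu_ids )
        {
            ch->cpu = -1;                  /* really detached ... */
            clear_bit(HPET_EVT_USED_BIT, &ch->flags);  /* ... and released */
        }
        else
        {
            ch->cpu = next;                /* migrate to a remaining user */
            set_channel_irq_affinity(ch);
        }
    }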

Thanks, Roger.
