
Re: [PATCH for-4.21 02/10] x86/HPET: disable unused channels


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Thu, 16 Oct 2025 18:28:19 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Oleksii Kurochko <oleksii.kurochko@xxxxxxxxx>
  • Delivery-date: Thu, 16 Oct 2025 16:28:30 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Thu, Oct 16, 2025 at 05:55:30PM +0200, Jan Beulich wrote:
> On 16.10.2025 17:34, Roger Pau Monné wrote:
> > On Thu, Oct 16, 2025 at 01:57:41PM +0200, Jan Beulich wrote:
> >> On 16.10.2025 13:42, Roger Pau Monné wrote:
> >>> On Thu, Oct 16, 2025 at 09:31:42AM +0200, Jan Beulich wrote:
> >>>> Keeping channels enabled when they're unused is only causing problems:
> >>>> Extra interrupts harm performance, and extra nested interrupts could even
> >>>> have caused worse problems.
> >>>>
> >>>> Note that no explicit "enable" is necessary - that's implicitly done by
> >>>> set_channel_irq_affinity() once the channel goes into use again.
> >>>>
> >>>> Along with disabling the counter, also "clear" the channel's "next event",
> >>>> for it to be properly written by whatever the next user is going to want
> >>>> (possibly avoiding too early an IRQ).
> >>>>
> >>>> Further, along the same lines, don't enable channels early when starting
> >>>> up an IRQ. This similarly should happen no earlier than from
> >>>> set_channel_irq_affinity() (here: once a channel goes into use the very
> >>>> first time). This eliminates a single instance of
> >>>>
> >>>> (XEN) [VT-D]INTR-REMAP: Request device [0000:00:1f.0] fault index 0
> >>>> (XEN) [VT-D]INTR-REMAP: reason 25 - Blocked a compatibility format interrupt request
> >>>>
> >>>> during boot. (Why exactly there's only one instance, when we use multiple
> >>>> counters and hence multiple IRQs, I can't tell. My understanding would be
> >>>> that this was due to __hpet_setup_msi_irq() being called only after
> >>>> request_irq() [and hence the .startup handler], yet that should have
> >>>> affected all channels.)
> >>>>
> >>>> Fixes: 3ba523ff957c ("CPUIDLE: enable MSI capable HPET for timer broadcast")
> >>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> >>>> ---
> >>>> A window still remains for IRQs to be caused by stale comparator values:
> >>>> hpet_attach_channel() is called ahead of reprogram_hpet_evt_channel().
> >>>> Should we also write the comparator to "far into the future"?
> >>>
> >>> It might be helpful to reprogram the comparator as far into the future
> >>> as possible in hpet_attach_channel(), before enabling the channel, or
> >>> alternatively to do so in hpet_detach_channel().
> >>
> >> The downside is yet another (slow) MMIO access, which is why I didn't
> >> make such a change right away. Plus I wasn't quite sure about the locking
> >> there: imo, if we did so, it would be better if the lock wasn't dropped
> >> in between.
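
To make the trade-off concrete, the change being weighed above would amount
to roughly the following (an illustrative sketch only, not part of the patch;
hpet_park_comparator() is a made-up name, and the hpet_read32()/hpet_write32()
accessors plus the HPET_COUNTER/HPET_Tn_CMP() offsets are assumed to be the
ones used elsewhere in xen/arch/x86/hpet.c):

/*
 * Push the comparator "as far into the future as possible" before a channel
 * is (re-)enabled, so that a stale value cannot trigger an early IRQ.  The
 * cost is the extra MMIO read/write pair discussed above.
 */
static void hpet_park_comparator(struct hpet_event_channel *ch)
{
    /*
     * With a free-running 32-bit main counter, the farthest-out deadline is
     * just behind the current counter value, i.e. almost a full wrap away.
     */
    uint32_t cnt = hpet_read32(HPET_COUNTER);

    hpet_write32(cnt - 1, HPET_Tn_CMP(ch->idx));
    ch->next_event = STIME_MAX;
}
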
> >>
> >>>> @@ -542,6 +540,8 @@ static void hpet_detach_channel(unsigned
> >>>>          spin_unlock_irq(&ch->lock);
> >>>>      else if ( (next = cpumask_first(ch->cpumask)) >= nr_cpu_ids )
> >>>>      {
> >>>> +        hpet_disable_channel(ch);
> >>>> +        ch->next_event = STIME_MAX;
> >>>>          ch->cpu = -1;
> >>>>          clear_bit(HPET_EVT_USED_BIT, &ch->flags);
> >>>>          spin_unlock_irq(&ch->lock);
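
Presumably the new helper is something along these lines (a guess, since its
body isn't part of the quoted hunk; the accessor and bit names are the
HPET_Tn_CFG()/HPET_TN_ENABLE ones I'd expect from asm/hpet.h):

static void hpet_disable_channel(struct hpet_event_channel *ch)
{
    /* Mask interrupt generation for this comparator. */
    uint32_t cfg = hpet_read32(HPET_Tn_CFG(ch->idx));

    cfg &= ~HPET_TN_ENABLE;
    hpet_write32(cfg, HPET_Tn_CFG(ch->idx));
}
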
> >>>
> >>> I'm a bit confused by what the HPET code does here (I don't know
> >>> enough about it, and there are no comments).  Why is the timer rotated
> >>> to another CPU in ch->cpumask on detach, instead of just being plain
> >>> disabled?
> >>
> >> Because it will still be needed by the other CPUs that the channel is
> >> shared with.
> > 
> > Yeah, I missed that part: the channel is migrated to a different CPU.  I
> > wonder, however: since an active channel can be migrated between CPUs,
> > isn't there a risk of the timer firing right in the middle of the
> > migration (while interrupt generation is disabled), and hence of Xen
> > missing a deadline?
> > 
> > In hpet_broadcast_exit() we need to check whether the timer has
> > expired after the migration, and manually trigger a broadcast if
> > needed.  This risks doing two broadcasts back-to-back, but it's the
> > only option I see to avoid missing a deadline.
> > 
> > Maybe there's something I'm missing, this is all fairly complex.
> 
> set_channel_irq_affinity() invokes handle_hpet_broadcast() to cover that
> case.

Oh, indeed, sorry for the fuss then.
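
For reference, roughly how that's covered (a simplified sketch, not copied
from the tree; only the final handle_hpet_broadcast() call is confirmed
above, while the hpet_msi_* helpers and the ch->msi.irq field are named from
memory and may not match exactly):

static void set_channel_irq_affinity(struct hpet_event_channel *ch)
{
    struct irq_desc *desc = irq_to_desc(ch->msi.irq);

    spin_lock(&desc->lock);
    hpet_msi_mask(desc);                              /* quiesce the IRQ */
    hpet_msi_set_affinity(desc, cpumask_of(ch->cpu)); /* re-target it */
    hpet_msi_unmask(desc);                            /* re-enable it */
    spin_unlock(&desc->lock);

    /* A deadline may have passed while the IRQ was masked: handle it now. */
    if ( ch->next_event <= NOW() )
        handle_hpet_broadcast(ch);
}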

Thanks, Roger.



 

