
Re: [PATCH v2 6/6] x86/HVM: limit cache writeback overhead

On Thu, May 15, 2025 at 08:47:46AM +0200, Jan Beulich wrote:
> On 14.05.2025 17:12, Roger Pau Monné wrote:
> > On Wed, May 14, 2025 at 03:20:56PM +0200, Jan Beulich wrote:
> >> On 14.05.2025 15:00, Roger Pau Monné wrote:
> >>> On Wed, May 03, 2023 at 11:47:18AM +0200, Jan Beulich wrote:
> >>>> There's no need to write back caches on all CPUs upon seeing a WBINVD
> >>>> exit; ones that a vCPU hasn't run on since the last writeback (or since
> >>>> it was started) can't hold data which may need writing back.
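
For illustration only, a minimal sketch of the idea in Xen-style C;
the field name cache_dirty_mask, its placement in struct hvm_vcpu,
and the helper names are hypothetical and not taken from the patch:

    /* On being scheduled onto a pCPU, record that this vCPU may
     * dirty that pCPU's cache: */
    static void hvm_mark_cache_dirty(struct vcpu *v)
    {
        cpumask_set_cpu(smp_processor_id(),
                        v->arch.hvm.cache_dirty_mask);
    }

    /* WBINVD intercept: write back only the pCPUs the vCPU has
     * run on since the last writeback, instead of flushing all
     * of them, then restart tracking from the current pCPU: */
    static void hvm_wbinvd(struct vcpu *v)
    {
        flush_mask(v->arch.hvm.cache_dirty_mask, FLUSH_CACHE);
        cpumask_copy(v->arch.hvm.cache_dirty_mask,
                     cpumask_of(smp_processor_id()));
    }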
> >>>
> >>> Couldn't you do the same with PV mode, and hence put the cpumask in
> >>> arch_vcpu rather than hvm_vcpu?
> >>
> >> We could in principle, but there's no use of flush_all() there right now
> >> (i.e. nothing to "win").
> > 
> > Yes, that will get "fixed" if we take patch 2 from my series.  That
> > fixes the lack of broadcasting of wb{,no}invd when emulating it for
> > PV domains.
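
Roughly what the "broadcasting" amounts to (currd standing for the
current domain; the exact shape of that fix may differ):

    /* Emulated WBINVD for PV: with no tracking of where the
     * guest's dirty cache lines may live, write back caches on
     * all online pCPUs rather than just the local one: */
    if ( cache_flush_permitted(currd) )
        flush_all(FLUSH_CACHE);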
> > 
> > I think this patch would be better after my fix to cache_op(),
> 
> Right, this matches what I said ...
> 
> > and
> > then the uncertainty around patch 2 makes it unclear whether we want
> > this.
> > 
> >>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> >>>> ---
> >>>> With us not running AMD IOMMUs in non-coherent ways, I wonder whether
> >>>> svm_wbinvd_intercept() really needs to do anything (or whether it
> >>>> couldn't check iommu_snoop just like VMX does, knowing that as of
> >>>> c609108b2190 ["x86/shadow: make iommu_snoop usage consistent with
> >>>> HAP's"] that's always set; this would largely serve as grep fodder then,
> >>>> to make sure this code is updated once / when we do away with this
> >>>> global variable, and it would be the penultimate step to being able to
> >>>> fold SVM's and VT-x'es functions).
> >>>
> >>> On my series I expand cache_flush_permitted() to also account for
> >>> iommu_snoop, I think it's easier to have a single check that signals
> >>> whether cache control is allowed for a domain, rather than having to
> >>> check multiple different conditions.
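
A sketch of the kind of consolidation meant, under the assumption
that the existing iomem/ioport-based definition is simply extended;
the actual change in that series may be structured differently:

    /* Cache control is only meaningful for a domain with direct
     * I/O resources, and only needed when the IOMMU doesn't
     * already enforce coherency via snooping: */
    #define cache_flush_permitted(d) \
        ((!rangeset_is_empty((d)->iomem_caps) || \
          !rangeset_is_empty((d)->arch.ioport_caps)) && \
         (!is_iommu_enabled(d) || !iommu_snoop))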
> >>
> >> Right, adjustments here would want making (in whichever series goes in
> >> later).
> >>
> >> For both of the responses: I think with patch 1 of this series having
> >> gone in and with in particular Andrew's concern over patch 2 (which
> >> may extend to patch 3), it may make sense for your series to go next.
> >> I shall then re-base, while considering what to do with patches 2 and 3
> >> (they may need dropping in the end).
> 
> ... here.

By the time I saw your paragraph, I had already written mine, so I
just left it.  We reached the same conclusion, which is good IMO :).

Thanks, Roger.
