Re: [PATCH v2 4/4] x86/vmx: Rewrite vmx_sync_pir_to_irr() to be more efficient
On 28.08.2024 20:08, Andrew Cooper wrote:
> On 28/08/2024 10:19 am, Jan Beulich wrote:
>> On 27.08.2024 15:57, Andrew Cooper wrote:
>>> There are two issues.  First, pi_test_and_clear_on() pulls the cache-line to
>>> the CPU and dirties it even if there's nothing outstanding, but the final
>>> for_each_set_bit() is O(256) when O(8) would do,
>> Nit: That's bitmap_for_each() now, I think.  And again ...
>>
>>> and would avoid multiple
>>> atomic updates to the same IRR word.
>>>
>>> Rewrite it from scratch, explaining what's going on at each step.
>>>
>>> Bloat-o-meter reports 177 -> 145 (net -32), but the better aspect is the
>>> removal of calls to __find_{first,next}_bit() hidden behind for_each_set_bit().
>> ... here, and no underscore prefixes on the two find functions.
>
> Yes, and fixed.
>
>>
>>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>>> @@ -2317,18 +2317,72 @@ static void cf_check vmx_deliver_posted_intr(struct vcpu *v, u8 vector)
>>>  
>>>  static void cf_check vmx_sync_pir_to_irr(struct vcpu *v)
>>>  {
>>> -    struct vlapic *vlapic = vcpu_vlapic(v);
>>> -    unsigned int group, i;
>>> -    DECLARE_BITMAP(pending_intr, X86_NR_VECTORS);
>>> +    struct pi_desc *desc = &v->arch.hvm.vmx.pi_desc;
>>> +    union {
>>> +        uint64_t _64[X86_NR_VECTORS / (sizeof(uint64_t) * 8)];
>> Using unsigned long here would imo be better, as that's what matches
>> struct pi_desc's DECLARE_BITMAP().
>
> Why?  It was also the primary contribution to particularly-bad code
> generation in this function.

I answered the "why" already: Because of you copying from something ...

>>> +        uint32_t _32[X86_NR_VECTORS / (sizeof(uint32_t) * 8)];
>>> +    } vec;
>>> +    uint32_t *irr;
>>> +    bool on;
>>>  
>>> -    if ( !pi_test_and_clear_on(&v->arch.hvm.vmx.pi_desc) )
>>> +    /*
>>> +     * The PIR is a contended cacheline which bounces between the CPU(s) and
>>> +     * IOMMU(s).  An IOMMU updates the entire PIR atomically, but we can't
>>> +     * express the same on the CPU side, so care has to be taken.
>>> +     *
>>> +     * First, do a plain read of ON.  If the PIR hasn't been modified, this
>>> +     * will keep the cacheline Shared and not pull it Exclusive on the current
>>> +     * CPU.
>>> +     */
>>> +    if ( !pi_test_on(desc) )
>>>          return;
>>>  
>>> -    for ( group = 0; group < ARRAY_SIZE(pending_intr); group++ )
>>> -        pending_intr[group] = pi_get_pir(&v->arch.hvm.vmx.pi_desc, group);
>>> +    /*
>>> +     * Second, if the plain read said that ON was set, we must clear it with
>>> +     * an atomic action.  This will bring the cachline to Exclusive on the
>> Nit (from my spell checker): cacheline.
>>
>>> +     * current CPU.
>>> +     *
>>> +     * This should always succeed because no one else should be playing with
>>> +     * the PIR behind our back, but assert so just in case.
>>> +     */
>>> +    on = pi_test_and_clear_on(desc);
>>> +    ASSERT(on);
>>> +
>>> +    /*
>>> +     * The cacheline is now Exclusive on the current CPU, and because ON was
>> "is" is pretty ambitious.  We can only hope it (still) is.
>
> I can't think of a clearer way of saying this.  "will have become
> Exclusive" perhaps, but this is getting into some subtle tense gymnastics.
>
>>> +     * get it back again.
>>> +     */
>>> +    for ( unsigned int i = 0; i < ARRAY_SIZE(vec._64); ++i )
>>> +        vec._64[i] = xchg(&desc->pir[i], 0);

... that is the result of DECLARE_BITMAP(), i.e. an array of unsigned
longs.  If you make that part of the new union unsigned long[] too,
you'll have code which is bitness-independent (i.e. would also have
worked correctly in 32-bit Xen, and would work correctly in hypothetical
128-bit Xen).

I don't think the array _type_ was "the primary contribution to
particularly-bad code generation in this function"; it was how that
bitmap was used.

Jan
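
As a minimal sketch of what the suggested unsigned long variant might
look like, assuming the rest of the quoted function stays unchanged
(the _ul field name is invented here; desc->pir is the DECLARE_BITMAP()
array in struct pi_desc):

    union {
        /* Match DECLARE_BITMAP()'s unsigned long storage in struct pi_desc. */
        unsigned long _ul[X86_NR_VECTORS / (sizeof(unsigned long) * 8)];
        uint32_t      _32[X86_NR_VECTORS / (sizeof(uint32_t) * 8)];
    } vec;

    /* Harvest the PIR word-for-word, whatever BITS_PER_LONG happens to be. */
    for ( unsigned int i = 0; i < ARRAY_SIZE(vec._ul); ++i )
        vec._ul[i] = xchg(&desc->pir[i], 0);

With this, xchg()'s operand width follows the bitmap's element type
automatically, which is what makes the construct bitness-independent.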
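
Similarly, the O(256)-vs-O(8) remark in the commit message can be
pictured as merging the harvested vectors into the IRR one 32-bit word
at a time, skipping empty words.  This is only a sketch: it assumes irr
points at the first of the IRR's eight 32-bit words in the virtual APIC
page (whose registers sit at 16-byte strides), and it uses a compiler
builtin where the real code may well use explicit locked assembly:

    for ( unsigned int i = 0; i < ARRAY_SIZE(vec._32); ++i )
    {
        /* Skip words with nothing pending, leaving their IRR words untouched. */
        if ( !vec._32[i] )
            continue;

        /*
         * One atomic OR per 32-bit word: at most 8 locked operations,
         * rather than one per set bit.  i * 4 accounts for the 16-byte
         * register stride when indexing through a uint32_t pointer.
         */
        __atomic_fetch_or(&irr[i * 4], vec._32[i], __ATOMIC_SEQ_CST);
    }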