Re: [Xen-devel] [PATCH v2 1/2] xen/link: Introduce .bss.percpu.page_aligned
On 29/07/2019 14:17, Jan Beulich wrote:
> On 26.07.2019 22:32, Andrew Cooper wrote:
>> Future changes are going to need to page align some percpu data.
>>
>> This means that the percpu area needs suitably aligning in the BSS so
>> CPU0 has correctly aligned data. Shuffle the exact link order of items
>> within the BSS to give .bss.percpu.page_aligned appropriate alignment.
>>
>> In addition, we need to be able to specify an alignment attribute to
>> __DEFINE_PER_CPU(). Rework it so the caller passes in all attributes,
>> and adjust DEFINE_PER_CPU{,_READ_MOSTLY}() to match. This has the added
>> bonus that it is now possible to grep for .bss.percpu and find all the
>> users.
> And it has the meaningful downside of now every use site needing to get
> things right.

You say this as if the current way of doing things is anything more than
an illusion of protection.

> This is not really a problem solely because
> __DEFINE_PER_CPU() is a helper for all the real DEFINE_PER_CPU*(). The
> grep-ing argument is not a really meaningful one imo anyway - you could
> as well grep for DEFINE_PER_CPU.

And as usual, our points of view differ substantially here.

Looking for DEFINE_PER_CPU() doesn't help you identify the sections in
use. That requires a further level of indirection.

> Anyway - this is not an objection to the chosen approach, just a remark.
> I'd like to note though that you explicitly undo something I had done
> (iirc), which I may find odd when running into it again down the road,
> potentially resulting in an "undo-the-undo" patch. I think we really
> need to find a way to avoid re-doing things that were done intentionally
> in certain ways, when the difference between variants is merely personal
> taste.

Keir introduced percpu in ea608cc36d, when DEFINE_PER_CPU() was private to
x86 and had the __section() implicit in it.

You changed DEFINE_PER_CPU() to include a suffix for the purpose of
introducing DEFINE_PER_CPU_READ_MOSTLY() in cfbf17ffbb0, but nowhere is
there any hint of a suggestion that the end result was anything more than
just "how it happened to turn out".

As to "this being intentional to remove mistakes": there are plenty of
ways to screw this up, including ways which don't involve using
__DEFINE_PER_CPU(,, "foo"), or manually inserting something into
.bss.per_cpu outside of any of the percpu infrastructure, and no amount of
technical measures can catch this. The only recourse is sensible code
review, and any opencoded use of __DEFINE_PER_CPU() or
__section(".bss.per_cpu" ...) sticks out in an obvious manner.
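To make the grep point concrete, the rework amounts to something of this
shape (a sketch only, not the literal patch contents; the page-aligned
variant at the end is hypothetical here, as it belongs to the follow-up
work):

    /* Caller supplies all attributes, including the section. */
    #define __DEFINE_PER_CPU(attr, type, name) \
        attr __typeof__(type) per_cpu_##name

    #define DEFINE_PER_CPU(type, name) \
        __DEFINE_PER_CPU(__section(".bss.percpu"), type, name)

    #define DEFINE_PER_CPU_READ_MOSTLY(type, name) \
        __DEFINE_PER_CPU(__section(".bss.percpu.read_mostly"), type, name)

    /* Hypothetical page-aligned variant, for illustration only. */
    #define DEFINE_PER_CPU_PAGE_ALIGNED(type, name) \
        __DEFINE_PER_CPU(__section(".bss.percpu.page_aligned") \
                         __aligned(PAGE_SIZE), type, name)

so grepping for ".bss.percpu" turns up both the wrappers and any opencoded
user of the section.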
>> --- a/xen/arch/x86/xen.lds.S
>> +++ b/xen/arch/x86/xen.lds.S
>> @@ -293,14 +293,15 @@ SECTIONS
>>          __bss_start = .;
>>          *(.bss.stack_aligned)
>>          *(.bss.page_aligned*)
>> -        *(.bss)
>> -        . = ALIGN(SMP_CACHE_BYTES);
>>          __per_cpu_start = .;
>> +        *(.bss.percpu.page_aligned)
> Now this is a case where I think an explicit ALIGN(PAGE_SIZE) would be
> desirable: If the last item in .bss.page_aligned was not a multiple of
> PAGE_SIZE in size, then __per_cpu_start would live needlessly early,
> possibly increasing our memory overhead by a page per CPU for no gain
> at all.

Hmm, yes. We should do our best to defend against bugs like this.
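i.e. putting an explicit ALIGN(PAGE_SIZE) ahead of __per_cpu_start, along
these lines (sketch only, not the final wording of the patch):

        *(.bss.page_aligned*)
        . = ALIGN(PAGE_SIZE);
        __per_cpu_start = .;
        *(.bss.percpu.page_aligned)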
>>          *(.bss.percpu)
>>          . = ALIGN(SMP_CACHE_BYTES);
>>          *(.bss.percpu.read_mostly)
>>          . = ALIGN(SMP_CACHE_BYTES);
>>          __per_cpu_data_end = .;
>> +        *(.bss)
>> +        . = ALIGN(SMP_CACHE_BYTES);
>>          __bss_end = .;
> Why is this last ALIGN() needed?

Try taking it out and the linker will make its feelings known.

Technically, it only needs 8 byte alignment (it's so we can use rep stosq
to clear), which is more relaxed than SMP_CACHE_BYTES.
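For reference, the clearing loop is of roughly this shape (paraphrased for
illustration, not a verbatim quote of the boot code):

        lea     __bss_start(%rip), %rdi
        lea     __bss_end(%rip), %rcx
        sub     %rdi, %rcx
        shr     $3, %rcx        /* count in 8-byte words */
        xor     %eax, %eax
        rep stosq               /* clears 8 bytes per iteration */

so if __bss_end - __bss_start weren't a multiple of 8, the trailing bytes
would silently be left uncleared.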
~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel