
Re: [PATCH 3/4] xen/arm: domain: Fix MISRA C 2012 Rule 8.7 violation



On Mon, 25 Jul 2022, Xenia Ragiadakou wrote:
> On 7/25/22 09:32, Jan Beulich wrote:
> > On 24.07.2022 19:20, Xenia Ragiadakou wrote:
> > > On 7/7/22 10:55, Jan Beulich wrote:
> > > > On 07.07.2022 09:27, Xenia Ragiadakou wrote:
> > > > > On 7/6/22 11:51, Jan Beulich wrote:
> > > > > > On 06.07.2022 10:43, Xenia Ragiadakou wrote:
> > > > > > > On 7/6/22 10:10, Jan Beulich wrote:
> > > > > > > > On 05.07.2022 23:02, Xenia Ragiadakou wrote:
> > > > > > > > > The function idle_loop() is referenced only in domain.c.
> > > > > > > > > Change its linkage from external to internal by adding the
> > > > > > > > > storage-class specifier static to its definition.
> > > > > > > > > 
> > > > > > > > > Since idle_loop() is referenced only in inline assembly,
> > > > > > > > > add the 'used' attribute to suppress the unused-function
> > > > > > > > > compiler warning.
> > > > > > > > 
> > > > > > > > While I see that Julien has already acked the patch, I'd
> > > > > > > > like to point out that using __used here is somewhat bogus.
> > > > > > > > Imo the better approach is to properly make visible to the
> > > > > > > > compiler that the symbol is used by the asm(), by adding a
> > > > > > > > fake(?) input.
> > > > > > > 
> > > > > > > I'm afraid I do not understand what you mean by "adding a
> > > > > > > fake(?) input". Can you please elaborate a little on your
> > > > > > > suggestion?
> > > > > > 
> > > > > > Once the asm() in question takes the function as an input, the
> > > > > > compiler will know that the function has a user (unless, of
> > > > > > course, it finds a way to elide the asm() itself). The "fake(?)"
> > > > > > was because I'm not deeply enough into Arm inline assembly to
> > > > > > know whether the input could then also be used as an instruction
> > > > > > operand (which imo would be preferable) - if it can't (e.g.
> > > > > > because there's no suitable constraint or operand modifier), it
> > > > > > still can be an input just to inform the compiler.
> > > > > 
> > > > > According to the following statement, your approach is the
> > > > > recommended one:
> > > > > "To prevent the compiler from removing global data or functions which
> > > > > are referenced from inline assembly statements, you can:
> > > > > -use __attribute__((used)) with the global data or functions.
> > > > > -pass the reference to global data or functions as operands to inline
> > > > > assembly statements.
> > > > > Arm recommends passing the reference to global data or functions as
> > > > > operands to inline assembly statements so that if the final image does
> > > > > not require the inline assembly statements and the referenced global
> > > > > data or function, then they can be removed."
> > > > > 
> > > > > IIUC, you are suggesting to change
> > > > > asm volatile ("mov sp,%0; b " STR(fn) : : "r" (stack) : "memory" )
> > > > > into
> > > > > asm volatile ("mov sp,%0; b %1" : : "r" (stack), "S" (fn) : "memory"
> > > > > );
> > > > 
> > > > Yes, except that I can't judge about the "S" constraint.
> > > > 
> > > 
> > > This constraint does not work for arm32. Any other thoughts?
> > > 
> > > Another way, as Jan suggested, is to pass the function as a 'fake'
> > > (unused) input.
> > > I'm suspecting something like the following could work:
> > > asm volatile ("mov sp,%0; b " STR(fn) : : "r" (stack), "X" (fn) : "memory")
> > > What do you think?
> > 
> > Well, yes, X should always be a fallback option. But I said already when
> > you suggested S that I can't judge about its use, so I guess I'm the
> > wrong one to ask. Even more so that you only say "does not work", without
> > any details ...
> > 
> 
> The question is addressed to anyone familiar with arm inline assembly.
> I added the arm maintainers as primary recipients of this email to make
> this perfectly clear.
> 
> When cross-compiling Xen on x86 for arm32 with
> asm volatile ("mov sp,%0; b %1" : : "r" (stack), "S" (fn) : "memory" );
> compilation fails with the error: impossible constraint in ‘asm’

Unfortunately, looking at the GCC manual pages [1], it doesn't seem to be
possible. The problem is that the definition of the "S" constraint differs
between aarch64 and arm (the 32-bit version).

For aarch64:

S   An absolute symbolic address or a label reference

This is what we want. For arm instead:

S   A symbol in the text segment of the current file

This is not useful for what we need to do here. As far as I can tell,
there is no other way to do this in GCC inline assembly for arm.
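
(For reference, the "X" fallback Xenia suggested above would look roughly
like the following. This is only an untested sketch, and the "X" input
cannot be used as an instruction operand; it merely tells GCC that fn is
referenced, so it is the last-resort variant Jan mentioned rather than a
real solution:)

    #define switch_stack_and_jump(stack, fn) do {                           \
        /* STR(fn) still names the symbol inside the template string, which \
         * GCC cannot parse; the extra "X" input only makes the reference   \
         * visible, so a static fn is kept alive without needing __used. */ \
        asm volatile ("mov sp,%0; b " STR(fn)                               \
                      : : "r" (stack), "X" (fn) : "memory" );               \
        unreachable();                                                      \
    } while ( false )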

Setting that fallback aside, we have two choices: either we use the __used
attribute as Xenia did in this patch, or we move the implementation of
switch_stack_and_jump to assembly. I attempted a prototype of the latter to
see how it would come out; see below.

I don't like it very much. I prefer the version with __used that Xenia
had in this patch. But I am OK either way.



diff --git a/xen/arch/arm/arm32/entry.S b/xen/arch/arm/arm32/entry.S
index 38826142ad..4d28f7e9f7 100644
--- a/xen/arch/arm/arm32/entry.S
+++ b/xen/arch/arm/arm32/entry.S
@@ -442,6 +442,10 @@ ENTRY(__context_switch)
         add     r4, r1, #VCPU_arch_saved_context
         ldmia   r4, {r4 - sl, fp, sp, pc}       /* Load registers and return */
 
+ENTRY(__switch_stack_and_jump)
+        mov sp, r0
+        bx r1
+
 /*
  * Local variables:
  * mode: ASM
diff --git a/xen/arch/arm/arm64/entry.S b/xen/arch/arm/arm64/entry.S
index 95f1a92684..5d5d713f80 100644
--- a/xen/arch/arm/arm64/entry.S
+++ b/xen/arch/arm/arm64/entry.S
@@ -618,6 +618,10 @@ ENTRY(__context_switch)
         mov     sp, x9
         ret
 
+ENTRY(__switch_stack_and_jump)
+        mov sp, x0
+        br x1
+
 /*
  * Local variables:
  * mode: ASM
diff --git a/xen/arch/arm/include/asm/current.h 
b/xen/arch/arm/include/asm/current.h
index 73e81458e5..7696440a57 100644
--- a/xen/arch/arm/include/asm/current.h
+++ b/xen/arch/arm/include/asm/current.h
@@ -44,8 +44,12 @@ static inline struct cpu_info *get_cpu_info(void)
 
 #define guest_cpu_user_regs() (&get_cpu_info()->guest_cpu_user_regs)
 
+void return_to_new_vcpu32(void);
+void return_to_new_vcpu64(void);
+void __switch_stack_and_jump(void *p, void *f);
+
 #define switch_stack_and_jump(stack, fn) do {                           \
-    asm volatile ("mov sp,%0; b " STR(fn) : : "r" (stack) : "memory" ); \
+    __switch_stack_and_jump(stack, fn);                                 \
     unreachable();                                                      \
 } while ( false )
 

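One observation on the prototype: since fn is now passed to
__switch_stack_and_jump() as an ordinary C argument, the compiler can see
the reference, so the static change from Xenia's patch should no longer
need __used or any constraint trick. A call site would stay the same, e.g.
(hypothetical example, just to illustrate the unchanged macro interface):

    switch_stack_and_jump(stack, idle_loop);
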

[1] https://gcc.gnu.org/onlinedocs/gcc/Machine-Constraints.html

 

