Re: [Xen-devel] [PATCH] x86: fix wait code asm() constraints
On 03/08/2012 09:40, "Jan Beulich" <JBeulich@xxxxxxxx> wrote:

> In __prepare_to_wait(), properly mark early-clobbered registers. By
> doing so, we at once eliminate the need to save/restore rCX and rDI.
>
> In check_wakeup_from_wait(), make the current constraints match by
> removing the code that actually alters registers. By adjusting the
> resume address in __prepare_to_wait(), we can simply re-use the copying
> operation there (rather than doing a second pointless copy in the
> opposite direction after branching to the resume point), which at once
> eliminates the need for re-loading rCX and rDI inside the asm().

First of all, this is a code improvement rather than a bug fix, right?
The asm constraints are correct for the code as it is, I believe.

It also seems the patch splits into two independent parts:

A. I'm not sure the trade-off of dropping the rCX/rDI save/restore in
exchange for more complex asm constraints makes sense.

B. Separately, the adjustment of the resume address, which avoids the
need to reload rCX/rDI after label 1 as well as the copy in
check_wakeup_from_wait(), is very nice.

I'm inclined to take the second part only, and make it clearer in the
changeset comment that it is not a bug fix. What do you think?

 -- Keir

> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>
> --- a/xen/common/wait.c
> +++ b/xen/common/wait.c
> @@ -126,6 +126,7 @@ static void __prepare_to_wait(struct wai
>  {
>      char *cpu_info = (char *)get_cpu_info();
>      struct vcpu *curr = current;
> +    unsigned long dummy;
>
>      ASSERT(wqv->esp == 0);
>
> @@ -140,27 +141,27 @@ static void __prepare_to_wait(struct wai
>
>      asm volatile (
>  #ifdef CONFIG_X86_64
> -        "push %%rax; push %%rbx; push %%rcx; push %%rdx; push %%rdi; "
> +        "push %%rax; push %%rbx; push %%rdx; "
>          "push %%rbp; push %%r8; push %%r9; push %%r10; push %%r11; "
>          "push %%r12; push %%r13; push %%r14; push %%r15; call 1f; "
> -        "1: mov 80(%%rsp),%%rdi; mov 96(%%rsp),%%rcx; mov %%rsp,%%rsi; "
> +        "1: mov %%rsp,%%rsi; addq $2f-1b,(%%rsp); "
>          "sub %%rsi,%%rcx; cmp %3,%%rcx; jbe 2f; "
>          "xor %%esi,%%esi; jmp 3f; "
>          "2: rep movsb; mov %%rsp,%%rsi; 3: pop %%rax; "
>          "pop %%r15; pop %%r14; pop %%r13; pop %%r12; "
>          "pop %%r11; pop %%r10; pop %%r9; pop %%r8; "
> -        "pop %%rbp; pop %%rdi; pop %%rdx; pop %%rcx; pop %%rbx; pop %%rax"
> +        "pop %%rbp; pop %%rdx; pop %%rbx; pop %%rax"
>  #else
> -        "push %%eax; push %%ebx; push %%ecx; push %%edx; push %%edi; "
> +        "push %%eax; push %%ebx; push %%edx; "
>          "push %%ebp; call 1f; "
> -        "1: mov 8(%%esp),%%edi; mov 16(%%esp),%%ecx; mov %%esp,%%esi; "
> +        "1: mov %%esp,%%esi; addl $2f-1b,(%%esp); "
>          "sub %%esi,%%ecx; cmp %3,%%ecx; jbe 2f; "
>          "xor %%esi,%%esi; jmp 3f; "
>          "2: rep movsb; mov %%esp,%%esi; 3: pop %%eax; "
> -        "pop %%ebp; pop %%edi; pop %%edx; pop %%ecx; pop %%ebx; pop %%eax"
> +        "pop %%ebp; pop %%edx; pop %%ebx; pop %%eax"
>  #endif
> -        : "=S" (wqv->esp)
> -        : "c" (cpu_info), "D" (wqv->stack), "i" (PAGE_SIZE)
> +        : "=&S" (wqv->esp), "=&c" (dummy), "=&D" (dummy)
> +        : "i" (PAGE_SIZE), "1" (cpu_info), "2" (wqv->stack)
>          : "memory" );
>
>      if ( unlikely(wqv->esp == 0) )
> @@ -200,7 +201,7 @@ void check_wakeup_from_wait(void)
>      }
>
>      asm volatile (
> -        "mov %1,%%"__OP"sp; rep movsb; jmp *(%%"__OP"sp)"
> +        "mov %1,%%"__OP"sp; jmp *(%0)"
>          : : "S" (wqv->stack), "D" (wqv->esp),
>            "c" ((char *)get_cpu_info() - (char *)wqv->esp)
>          : "memory" );
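[Editor's note: for readers following the constraint discussion, below is
a minimal standalone sketch of the two GNU extended-asm features at
issue; the function and its rep movsb payload are illustrative
assumptions, not code from the patch. "=&c"/"=&D"/"=&S" mark outputs as
early-clobbered, i.e. written before all inputs have been consumed, so
the compiler must not place any other input in those registers. The
matching constraints "0"/"1"/"2" tie each input to the register chosen
for the same-numbered output, which is how registers that the asm
destroys can be declared without an explicit save/restore.]

#include <stdio.h>

static void copy_bytes(char *dst, const char *src, unsigned long n)
{
    /* rep movsb both consumes and clobbers rCX/rDI/rSI, so each is an
     * early-clobber output tied to its input by a matching constraint. */
    asm volatile ( "rep movsb"
                   : "=&c" (n), "=&D" (dst), "=&S" (src)
                   : "0" (n), "1" (dst), "2" (src)
                   : "memory" );
}

int main(void)
{
    char src[] = "hello", dst[sizeof(src)];
    copy_bytes(dst, src, sizeof(src));
    printf("%s\n", dst);    /* prints "hello" */
    return 0;
}

In this single-instruction sketch the "&" is redundant (every input is
already tied to an output); in the patch's multi-instruction asm it
matters, because the registers are modified while other operands may
still need to be read.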
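[Editor's note: part B's trick may be easier to see in isolation. "call
1f" pushes the address of the instruction after the call (label 1 itself
here); adding the assemble-time constant 2f-1b to that saved slot turns
it into the address of label 2, so a later return through the saved
value lands at the resume point. A toy x86-64 sketch of just that trick,
not the patch's code:]

#include <stdio.h>

int main(void)
{
    int path = 0;

    asm volatile ( "call 1f\n"
                   "1: addq $2f-1b, (%%rsp)\n" /* saved 1f becomes 2f */
                   "ret\n"                     /* "resume" at label 2 */
                   "movl $1, %0\n"             /* skipped */
                   "2: movl $2, %0"            /* the resume point */
                   : "=r" (path) : : "cc", "memory" );

    printf("path = %d\n", path);    /* prints 2 */
    return 0;
}

In the patch, this retargeting is what lets a woken vCPU jump straight
to label 2, so the "rep movsb" there is re-used for the copy back
instead of being duplicated in check_wakeup_from_wait().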