Re: [Xen-devel] [PATCH v2] xen: fix alignment for bitops
Hi Vladimir,

On 25 April 2014 11:16, Pranavkumar Sawargaonkar <pranavkumar@xxxxxxxxxx> wrote:
> Hi Vladimir,
>
> On 24 April 2014 13:08, Vladimir Murzin <murzin.v@xxxxxxxxx> wrote:
>> On Mon, Apr 21, 2014 at 5:27 PM, Vladimir Murzin <murzin.v@xxxxxxxxx> wrote:
>>> On Mon, Apr 21, 2014 at 11:28 AM, Pranavkumar Sawargaonkar
>>> <pranavkumar@xxxxxxxxxx> wrote:
>>>> Hi Vladimir/David,
>>>>
>>>> On 17 April 2014 15:52, David Vrabel <david.vrabel@xxxxxxxxxx> wrote:
>>>>> On 16/04/14 08:55, Vladimir Murzin wrote:
>>>>>> Bitops operations like set/clear/change mandate a word-aligned pointer,
>>>>>> mainly because of architecture-specific implementations.
>>>>>>
>>>>>> It looks like DEFINE_PER_CPU does the required alignment for
>>>>>> cpu_control_block; however, the local copy used for bitops might not be
>>>>>> word aligned.
>>>>>>
>>>>>> For arm64 it ends up with an unaligned access trap...
>>>>>
>>>>> Thanks. Does this version work for you?
>>>>>
>>>>> David
>>>>>
>>>>> 8<----------------------
>>>>> xen/events/fifo: fix alignment for bitops on local ready word
>>>>>
>>>>> Bitops operations like set_bit(), clear_bit(), and test_bit() require
>>>>> word-aligned pointers because of architecture-specific
>>>>> implementations.
>>>>>
>>>>> It looks like DEFINE_PER_CPU does the required alignment for
>>>>> cpu_control_block; however, the local copy of the ready word might not
>>>>> be word aligned.
>>>>>
>>>>> For arm64 it ends up with an unaligned access trap at:
>>>>>
>>>>>     if (head == 0)
>>>>>         clear_bit(priority, BM(ready));
>>>>>
>>>>> Use unsigned long for "ready" to make sure it is word aligned.
>>>>>
>>>>> Signed-off-by: Vladimir Murzin <murzin.v@xxxxxxxxx>
>>>>> Signed-off-by: David Vrabel <david.vrabel@xxxxxxxxxx>
>>>>> ---
>>>>>  drivers/xen/events/events_fifo.c | 6 +++---
>>>>>  1 files changed, 3 insertions(+), 3 deletions(-)
>>>>>
>>>>> diff --git a/drivers/xen/events/events_fifo.c b/drivers/xen/events/events_fifo.c
>>>>> index 96109a9..475d967 100644
>>>>> --- a/drivers/xen/events/events_fifo.c
>>>>> +++ b/drivers/xen/events/events_fifo.c
>>>>> @@ -243,7 +243,7 @@ static void handle_irq_for_port(unsigned port)
>>>>>
>>>>>  static void consume_one_event(unsigned cpu,
>>>>>                                struct evtchn_fifo_control_block *control_block,
>>>>> -                              unsigned priority, uint32_t *ready)
>>>>> +                              unsigned priority, unsigned long *ready)
>>>>>  {
>>>>>         struct evtchn_fifo_queue *q = &per_cpu(cpu_queue, cpu);
>>>>>         uint32_t head;
>>>>> @@ -273,7 +273,7 @@ static void consume_one_event(unsigned cpu,
>>>>>          * copy of the ready word.
>>>>>          */
>>>>>         if (head == 0)
>>>>> -               clear_bit(priority, BM(ready));
>>>>> +               clear_bit(priority, ready);
>>>>>
>>>>>         if (sync_test_bit(EVTCHN_FIFO_PENDING, BM(word))
>>>>>             && !sync_test_bit(EVTCHN_FIFO_MASKED, BM(word)))
>>>>> @@ -285,7 +285,7 @@ static void consume_one_event(unsigned cpu,
>>>>>  static void evtchn_fifo_handle_events(unsigned cpu)
>>>>>  {
>>>>>         struct evtchn_fifo_control_block *control_block;
>>>>> -       uint32_t ready;
>>>>> +       unsigned long ready;
>>>>>         unsigned q;
>>>>>
>>>>>         control_block = per_cpu(cpu_control_block, cpu);
>>>>>
>>>>> _______________________________________________
>>>>> Xen-devel mailing list
>>>>> Xen-devel@xxxxxxxxxxxxx
>>>>> http://lists.xen.org/xen-devel
>>>>
>>>> I have tried this fix on APM X-Gene and it solves the crash (the one you
>>>> reported) during booting of DOM0.
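For reference, the alignment rule the fix above depends on can be shown with a small, self-contained C sketch. The helper name and layout below are hypothetical and only mirror the word-size arithmetic; the real arm64 clear_bit() (see the bitops.S patch later in this thread) uses exclusive ldxr/stxr accesses of unsigned long width, which is why a merely 4-byte-aligned uint32_t can fault.

    #include <stdint.h>
    #include <stdio.h>

    /*
     * Hypothetical helper, not kernel code: bit operations act on whole
     * unsigned long words, so the pointer passed in must be aligned to
     * sizeof(unsigned long).
     */
    static void sketch_clear_bit(unsigned int nr, unsigned long *addr)
    {
            unsigned long mask = 1UL << (nr % (8 * sizeof(unsigned long)));

            addr += nr / (8 * sizeof(unsigned long));
            *addr &= ~mask;         /* non-atomic stand-in for ldxr/stxr */
    }

    int main(void)
    {
            /* unsigned long local: naturally word aligned, as in the patch.
             * A uint32_t local is only guaranteed 4-byte alignment, which is
             * not enough for an 8-byte exclusive access on arm64. */
            unsigned long ready = 0x8UL;

            sketch_clear_bit(3, &ready);
            printf("ready = %#lx\n", ready);        /* prints ready = 0 */
            return 0;
    }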
>>>>
>>>> But while trying out domU, when I tried to execute
>>>> "/etc/init.d/xencommons start || true"
>>>> I observed a similar "alignment fault" crash while performing
>>>> "clear_bit" in "evtchn_fifo_clear_pending".
>>>> Pasting a crash log:
>>>>
>>>> root@arm64:~/xen-images# /etc/init.d/xencommons start || true
>>>> Starting C xenstored....
>>>> Unhandled fault: alignment fault (0x96000021) at 0xffffffc0fae7700c
>>>> Internal error: : 96000021 [#1] PREEMPT SMP
>>>> Modules linked in:
>>>> CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.15.0-rc1-next-20140417+ #5
>>>> task: ffffffc0007188b0 ti: ffffffc00070c000 task.ti: ffffffc00070c000
>>>> PC is at clear_bit+0x14/0x30
>>>> LR is at evtchn_fifo_clear_pending+0x28/0x30
>>>> pc : [<ffffffc000289304>] lr : [<ffffffc0002d9f60>] pstate: 600001c5
>>>> sp : ffffffc00070fbb0
>>>> x29: ffffffc00070fbb0 x28: ffffffc0fae77000
>>>> x27: 000000000000000c x26: 0000000000000003
>>>> x25: 00000000dffe0000 x24: 0000000000000000
>>>> x23: 0000000000000007 x22: ffffffc000768000
>>>> x21: ffffffc0f94a0884 x20: ffffffc000645b80
>>>> x19: ffffffc0f94a0800 x18: 0000007f863063e0
>>>> x17: 0000007f875d0280 x16: ffffffc0000f93ec
>>>> x15: 001b92d7f76c3f7d x14: 0000000000000000
>>>> x13: 00000003e8000000 x12: 0000000000000018
>>>> x11: 0000000000000000 x10: 0000000000000400
>>>> x9 : ffffffc00070fd20 x8 : 00000000fe871000
>>>> x7 : ffffffc0fa800038 x6 : ffffffc0fa800000
>>>> x5 : 00000000fffffffa x4 : 0000000000000000
>>>> x3 : 0000000080000000 x2 : 0000000000000001
>>>> x1 : ffffffc0fae7700c x0 : 0000000000000000
>>>>
>>>> Process swapper/0 (pid: 0, stack limit = 0xffffffc00070c058)
>>>> Stack: (0xffffffc00070fbb0 to 0xffffffc000710000)
>>>> fba0: 0070fbc0 ffffffc0 002d7f84 ffffffc0
>>>> fbc0: 0070fbd0 ffffffc0 000e44f0 ffffffc0 0070fc00 ffffffc0 000e13e8 ffffffc0
>>>> fbe0: 00000002 00000000 fef7a900 ffffffc0 00709900 ffffffc0 000e13d4 ffffffc0
>>>> fc00: 0070fc20 ffffffc0 002da0ec ffffffc0 fac39000 ffffffc0 000d1e80 ffffffc0
>>>> fc20: 0070fca0 ffffffc0 002d74c4 ffffffc0 007096ec ffffffc0 fef7ef10 ffffffc0
>>>> fc40: 0064d878 ffffffc0 00775000 ffffffc0 00000000 00000000 00642500 ffffffc0
>>>> fc60: 006424f8 ffffffc0 00648318 ffffffc0 0070c000 ffffffc0 00000000 00000081
>>>> fc80: 00000000 00000000 a0000000 ffffffc0 00000000 00000000 002d7494 ffffffc0
>>>> fca0: 0070fcf0 ffffffc0 002d7574 ffffffc0 fac0c800 ffffffc0 0072cae0 ffffffc0
>>>> fcc0: fac26500 ffffffc0 00645b80 ffffffc0 00000018 00000000 fef79410 ffffffc0
>>>> fce0: 00000001 00000000 00648318 ffffffc0 0070fd00 ffffffc0 000a06a8 ffffffc0
>>>> fd00: 0070fd10 ffffffc0 000e50b0 ffffffc0 0070fd50 ffffffc0 000e13e8 ffffffc0
>>>> fd20: 00000018 00000000 00709000 ffffffc0 00642000 ffffffc0 00642000 ffffffc0
>>>> fd40: 00000018 00000000 00000000 00000000 0070fd70 ffffffc0 000848bc ffffffc0
>>>> fd60: 007095b0 ffffffc0 000848a4 ffffffc0 0070fdc0 ffffffc0 0008128c ffffffc0
>>>> fd80: 0076f000 ffffffc0 0000400c ffffff80 0070fdf0 ffffffc0 00004010 ffffff80
>>>> fda0: 60000145 00000000 00759cac ffffffc0 0070fdf0 ffffffc0 0070fdf0 ffffffc0
>>>> fdc0: 0070ff10 ffffffc0 00083dac ffffffc0 0070c000 ffffffc0 0070c000 ffffffc0
>>>> fde0: 0070ff10 ffffffc0 00084ef0 ffffffc0 00000000 00000000 00653708 ffffffc0
>>>> fe00: 00653708 ffffffc0 00000001 00000000 00000018 00000000 00000000 00000000
>>>> fe20: 00000001 00000000 00000400 00000000 00718db0 ffffffc0 0070fd20 ffffffc0
>>>> fe40: 00000400 00000000 00000000 00000000 00000018 00000000 e8000000 00000003
>>>> fe60: 00000000 00000000 f76c3f7d 001b92d7 000f93ec ffffffc0 875d0280 0000007f
>>>> fe80: 863063e0 0000007f 0070c000 ffffffc0 0070c000 ffffffc0 00761200 ffffffc0
>>>> fea0: 0051e000 ffffffc0 006424f8 ffffffc0 00759cac ffffffc0 00000001 00000000
>>>> fec0: 00648318 ffffffc0 0070c000 ffffffc0 00000000 00000081 0070ff10 ffffffc0
>>>> fee0: 00084eec ffffffc0 0070ff10 ffffffc0 00084ef0 ffffffc0 60000145 00000000
>>>> ff00: 0070c000 ffffffc0 0070c000 ffffffc0 0070ff20 ffffffc0 000d944c ffffffc0
>>>> ff20: 0070ff80 ffffffc0 00508944 ffffffc0 00768000 ffffffc0 0075b000 ffffffc0
>>>> ff40: 006fb9a0 ffffffc0 0075b000 ffffffc0 feffde40 ffffffc0 00000000 00000041
>>>> ff60: 0007b000 00000041 0007d000 00000041 00080450 ffffffc0 0050893c ffffffc0
>>>> ff80: 0070ffa0 ffffffc0 006d68a0 ffffffc0 00000002 00000000 0075b000 ffffffc0
>>>> ffa0: 00000000 00000000 00080210 00000041 00000000 00000000 00000e11 00000000
>>>> ffc0: 08000000 00000041 500f0000 00000000 0071a000 00000041 00000000 00000000
>>>> ffe0: 006fb9a0 ffffffc0 00000000 00000000 00000000 00000000 00000000 00000000
>>>> Call trace:
>>>> [<ffffffc000289304>] clear_bit+0x14/0x30
>>>> [<ffffffc0002d7f80>] ack_dynirq+0x20/0x2c
>>>> [<ffffffc0000e44ec>] handle_edge_irq+0x94/0x18c
>>>> [<ffffffc0000e13e4>] generic_handle_irq+0x24/0x40
>>>> [<ffffffc0002da0e8>] evtchn_fifo_handle_events+0x180/0x188
>>>> [<ffffffc0002d74c0>] __xen_evtchn_do_upcall+0x9c/0x144
>>>> [<ffffffc0002d7570>] xen_hvm_evtchn_do_upcall+0x8/0x14
>>>> [<ffffffc0000a06a4>] xen_arm_callback+0x8/0x18
>>>> [<ffffffc0000e50ac>] handle_percpu_devid_irq+0x90/0xb8
>>>> [<ffffffc0000e13e4>] generic_handle_irq+0x24/0x40
>>>> [<ffffffc0000848b8>] handle_IRQ+0x68/0xe0
>>>> [<ffffffc000081288>] gic_handle_irq+0x38/0x80
>>>> Exception stack(0xffffffc00070fdd0 to 0xffffffc00070fef0)
>>>> fdc0: 0070c000 ffffffc0 0070c000 ffffffc0
>>>> fde0: 0070ff10 ffffffc0 00084ef0 ffffffc0 00000000 00000000 00653708 ffffffc0
>>>> fe00: 00653708 ffffffc0 00000001 00000000 00000018 00000000 00000000 00000000
>>>> fe20: 00000001 00000000 00000400 00000000 00718db0 ffffffc0 0070fd20 ffffffc0
>>>> fe40: 00000400 00000000 00000000 00000000 00000018 00000000 e8000000 00000003
>>>> fe60: 00000000 00000000 f76c3f7d 001b92d7 000f93ec ffffffc0 875d0280 0000007f
>>>> fe80: 863063e0 0000007f 0070c000 ffffffc0 0070c000 ffffffc0 00761200 ffffffc0
>>>> fea0: 0051e000 ffffffc0 006424f8 ffffffc0 00759cac ffffffc0 00000001 00000000
>>>> fec0: 00648318 ffffffc0 0070c000 ffffffc0 00000000 00000081 0070ff10 ffffffc0
>>>> fee0: 00084eec ffffffc0 0070ff10 ffffffc0
>>>> [<ffffffc000083da8>] el1_irq+0x68/0xd8
>>>> [<ffffffc0000d9448>] cpu_startup_entry+0x114/0x174
>>>> [<ffffffc000508940>] rest_init+0x7c/0x88
>>>> [<ffffffc0006d689c>] start_kernel+0x350/0x368
>>>> Code: 4a030000 d2800022 8b400c21 9ac32043 (c85f7c22)
>>>>
>>>>
>>>> Thanks,
>>>> Pranav
>>>
>>> Hi Pranav,
>>>
>>> I've just come back from holidays. The last time I tried to run DomU
>>> there was a Xen crash with syndrome 0x21 - looks like an alignment trap
>>> again ;) Unfortunately, I've had no chance to play with it yet :(
>>>
>>> I could probably post an *untested* patch which makes arm64 bitops
>>> "friendly" to 32-bit... still awaiting a response from the arm64 side
>>> about the best way to deal with it ;)
>>>
>>> Vladimir
>>
>> Hi Pranav,
>>
>> Could you try [1]? Does it fix your case?
>>
>> [1] http://www.spinics.net/lists/arm-kernel/msg325071.html
>
> I will try it today in some time.

I have tried your patch, along with the suggestion of changing

    add x1, x1, x0, lsr #2      // Get word offset

to

    mov r0, r0, lsr #5
    add r1, r1, r0, lsl #2      @ Get word offset

This gets both DOM0 and DOMU working on X-Gene with the latest Linux kernel
(3.15.0-rc1).

Thanks,
Pranav

---------------------- Modified Patch -------

diff --git a/arch/arm64/lib/bitops.S b/arch/arm64/lib/bitops.S
index 7dac371..95550f4 100644
--- a/arch/arm64/lib/bitops.S
+++ b/arch/arm64/lib/bitops.S
@@ -26,14 +26,15 @@
  */
 .macro bitop, name, instr
 ENTRY( \name )
-       and     w3, w0, #63             // Get bit offset
+       and     w3, w0, #31             // Get bit offset
        eor     w0, w0, w3              // Clear low bits
        mov     x2, #1
-       add     x1, x1, x0, lsr #3      // Get word offset
+       lsr     x0, x0, #5
+       add     x1, x1, x0, lsl #2      // Get word offset
        lsl     x3, x2, x3              // Create mask
-1:     ldxr    x2, [x1]
-       \instr  x2, x2, x3
-       stxr    w0, x2, [x1]
+1:     ldxr    w2, [x1]
+       \instr  w2, w2, w3
+       stxr    w0, w2, [x1]
        cbnz    w0, 1b
        ret
 ENDPROC(\name )
@@ -41,15 +42,16 @@ ENDPROC(\name )

 .macro testop, name, instr
 ENTRY( \name )
-       and     w3, w0, #63             // Get bit offset
+       and     w3, w0, #31             // Get bit offset
        eor     w0, w0, w3              // Clear low bits
        mov     x2, #1
-       add     x1, x1, x0, lsr #3      // Get word offset
+       lsr     x0, x0, #5
+       add     x1, x1, x0, lsl #2      // Get word offset
        lsl     x4, x2, x3              // Create mask
-1:     ldxr    x2, [x1]
+1:     ldxr    w2, [x1]
        lsr     x0, x2, x3              // Save old value of bit
-       \instr  x2, x2, x4              // toggle bit
-       stlxr   w5, x2, [x1]
+       \instr  w2, w2, w4              // toggle bit
+       stlxr   w5, w2, [x1]
        cbnz    w5, 1b
        dmb     ish
        and     x0, x0, #1

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
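To make the index arithmetic in the modified bitops.S above easier to follow, here is a small C sketch of the same calculation; it is an illustration only, not the kernel implementation. With 32-bit word accesses, the bit offset within a word is nr & 31 and the byte offset of the containing word is (nr >> 5) * 4, which is what the lsr #5 / lsl #2 pair computes.

    #include <stdio.h>

    int main(void)
    {
            unsigned int nr = 37;                   /* example bit number */

            unsigned int bit  = nr & 31;            /* "and w3, w0, #31"               */
            unsigned int byte = (nr >> 5) << 2;     /* "lsr x0, x0, #5" then "lsl #2"  */
            unsigned int mask = 1u << bit;          /* "lsl x3, x2, x3"                */

            printf("bit %u -> byte offset %u, bit offset %u, mask 0x%08x\n",
                   nr, byte, bit, mask);
            /* prints: bit 37 -> byte offset 4, bit offset 5, mask 0x00000020 */
            return 0;
    }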