
Re: [Xen-devel] xen/arm: kernel BUG at /local/home/julien/works/linux/drivers/xen/swiotlb-xen.c:102



Hello Julien,
I didn't manage to reproduce the issue you are seeing, but I think you
are right: non-LPAE kernels can run into trouble when calling
xen_bus_to_phys. The problem is not ARM specific per se, but it doesn't
occur on x86 because config XEN depends on X86_32 && X86_PAE.

The following patch should fix the issue:


diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index ac5d41b..489620d 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -96,8 +96,6 @@ static inline phys_addr_t xen_bus_to_phys(dma_addr_t baddr)
        dma_addr_t dma = (dma_addr_t)pfn << PAGE_SHIFT;
        phys_addr_t paddr = dma;
 
-       BUG_ON(paddr != dma); /* truncation has occurred, should never happen */
-
        paddr |= baddr & ~PAGE_MASK;
 
        return paddr;
@@ -449,7 +447,7 @@ static void xen_unmap_single(struct device *hwdev, dma_addr_t dev_addr,
 
        BUG_ON(dir == DMA_NONE);
 
-       xen_dma_unmap_page(hwdev, paddr, size, dir, attrs);
+       xen_dma_unmap_page(hwdev, dev_addr, size, dir, attrs);
 
        /* NOTE: We use dev_addr here, not paddr! */
        if (is_xen_swiotlb_buffer(dev_addr)) {


On Wed, 5 Nov 2014, Julien Grall wrote:
> Hi Stefano,
> 
> I've tried your patch series [1] "introduce GNTTABOP_cache_flush"
> on midway with non-LPAE (i.e short descriptor) kernel.
> 
> While your series should fix the cache flush on this kind of kernel,
> I'm still hitting a BUG_ON when I boot a guest (see stack trace [2]).
> 
> From a quick look, this is because the DMA and physical addresses don't have
> the same size (64 and 32 bits respectively). Technically xen_bus_to_phys
> doesn't really have a meaning on Xen ARM: the address is still a DMA address,
> but we cast it to a physical address (not sure why?).
> 
> It occurs when I use the multi_v7 config (+ XEN) as DOM0. Disabling
> CONFIG_LPAE in the config should also do the job.
> 
> Regards,
> 
> [1] https://lkml.org/lkml/2014/10/27/441
> 
> [2] Stack trace:
> 
> [   98.092290] dma 0x2b6769000 paddr 0xb6769000
> [   98.096475] ------------[ cut here ]------------
> [   98.101147] kernel BUG at /local/home/julien/works/linux/drivers/xen/swiotlb-xen.c:102!
> [   98.109219] Internal error: Oops - BUG: 0 [#1] SMP ARM
> [   98.114427] Modules linked in:
> [   98.117554] CPU: 3 PID: 48 Comm: irq/115-highban Not tainted 3.18.0-rc3-00051-g93cf079-dirty #231
> [   98.126493] task: db281680 ti: db40e000 task.ti: db40e000
> [   98.131965] PC is at xen_unmap_single+0xc4/0xc8
> [   98.136563] LR is at xen_unmap_single+0xc4/0xc8
> [   98.141162] pc : [<c04d1640>]    lr : [<c04d1640>]    psr: 60000013
> [   98.141162] sp : db40fdf0  ip : 00000000  fp : b6769000
> [   98.152793] r10: 00000200  r9 : 00000002  r8 : 00000002
> [   98.158088] r7 : 00000000  r6 : 002b6769  r5 : 00000002  r4 : b6769000
> [   98.164693] r3 : 00000753  r2 : 1af49000  r1 : dbbc73bc  r0 : 00000020
> [   98.171282] Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment kernel
> [   98.178660] Control: 10c5387d  Table: 39e3c06a  DAC: 00000015
> [   98.184475] Process irq/115-highban (pid: 48, stack limit = 0xdb40e250)
> [   98.191159] Stack: (0xdb40fdf0 to 0xdb410000)
> [   98.195586] fde0:                                     b6769000 00000020 00000000 00000002
> [   98.203833] fe00: db40fe00 db0ea210 00000000 d98a1a80 00000001 00000001 00000002 db0ea210
> [   98.212079] fe20: 00000000 db3e3410 db3f1790 c04d1880 00000200 00000002 00000000 00000016
> [   98.220325] fe40: 0000091e db4100b8 d98a1a80 db410000 00000001 000000b0 00000001 c05cdddc
> [   98.228570] fe60: 00000000 c0262f9c db08c01c db281680 db40e010 db4100b8 db410000 db4116c8
> [   98.236817] fe80: 00000001 c05ce0b0 00000000 00000000 db410000 c05ce478 db410000 db4116c8
> [   98.245068] fea0: 00000000 db410000 e0880100 00000008 00000000 e0880000 00000001 c05e5288
> [   98.253309] fec0: c0c8a994 db40fee8 e0804000 c020897c c041c868 60000013 ffffffff 00000000
> [   98.261554] fee0: db3f1890 0000001f 00000073 00000001 db3f1890 c02836ac db00f7d4 c05e57d0
> [   98.269801] ff00: c05e5788 db3f6340 db00f780 00000000 00000001 db00f780 db3f6340 c02836c8
> [   98.278047] ff20: db3f6360 db40e000 00000000 c02839ec c02838b4 db40e030 00000000 c02837f8
> [   98.286293] ff40: 00000000 db3f6380 00000000 db3f6340 c02838b4 00000000 00000000 00000000
> [   98.294538] ff60: 00000000 c0262358 00000000 00000000 00000000 db3f6340 00000000 00000000
> [   98.302785] ff80: db40ff80 db40ff80 00000000 00000000 db40ff90 db40ff90 db40ffac db3f6380
> [   98.311031] ffa0: c0262280 00000000 00000000 c020f278 00000000 00000000 00000000 00000000
> [   98.319276] ffc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
> [   98.327522] ffe0: 00000000 00000000 00000000 00000000 00000013 00000000 00000000 00000000
> [   98.335780] [<c04d1640>] (xen_unmap_single) from [<c04d1880>] (xen_swiotlb_unmap_sg_attrs+0x48/0x68)
> [   98.344974] [<c04d1880>] (xen_swiotlb_unmap_sg_attrs) from [<c05cdddc>] (ata_sg_clean+0x8c/0x120)
> [   98.353912] [<c05cdddc>] (ata_sg_clean) from [<c05ce0b0>] (__ata_qc_complete+0x34/0x144)
> [   98.362071] [<c05ce0b0>] (__ata_qc_complete) from [<c05ce478>] (ata_qc_complete_multiple+0xa0/0xd4)
> [   98.371194] [<c05ce478>] (ata_qc_complete_multiple) from [<c05e5288>] (ahci_port_thread_fn+0x138/0x638)
> [   98.380649] [<c05e5288>] (ahci_port_thread_fn) from [<c05e57d0>] (ahci_thread_fn+0x48/0x90)
> [   98.389067] [<c05e57d0>] (ahci_thread_fn) from [<c02836c8>] (irq_thread_fn+0x1c/0x40)
> [   98.396970] [<c02836c8>] (irq_thread_fn) from [<c02839ec>] (irq_thread+0x138/0x174)
> [   98.404693] [<c02839ec>] (irq_thread) from [<c0262358>] (kthread+0xd8/0xf0)
> [   98.411723] [<c0262358>] (kthread) from [<c020f278>] (ret_from_fork+0x14/0x3c)
> [   98.419011] Code: e1a02004 e1a03005 e34c00ad eb0f9c9a (e7f001f2)
> 
> -- 
> Julien Grall
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

