Re: [Xen-devel] Xen ARM community call - meeting minutes and date for the next one
On 01/12/2017 07:50 PM, Stefano Stabellini wrote:
> On Thu, 12 Jan 2017, Pooya.Keshavarzi wrote:
>>
>> Firstly, sorry for the late reply on this.
>>
>> Regarding the problem with swiotlb-xen, here are some more details:
>>
>> If we limit Dom0's memory such that only low memory (up to 32-bit
>> addressable memory) is available to Dom0, then swiotlb-xen does not have to
>> use bounce buffers and the devices (e.g. USB, Ethernet) work.
>>
>> But when some high memory is also available to Dom0, the following
>> happens:
>> - If the device address happens to fall within the device's DMA window (see
>> xen_swiotlb_map_page()), then the device works.
>> - Otherwise, if a bounce buffer has to be allocated and mapped, then the
>> device does not work.
>
> From what you wrote it looks like the xen_swiotlb_map_page path:
>
> 	if (dma_capable(dev, dev_addr, size) &&
> 	    !range_straddles_page_boundary(phys, size) &&
> 	    !xen_arch_need_swiotlb(dev, phys, dev_addr) &&
> 	    !swiotlb_force) {
> 		/* we are not interested in the dma_addr returned by
> 		 * xen_dma_map_page, only in the potential cache flushes
> 		 * executed by the function. */
> 		xen_dma_map_page(dev, page, dev_addr, offset, size, dir, attrs);
> 		return dev_addr;
> 	}
>
> works, but the other does not. Does it match your understanding? Have
> you done any digging to find the reason why the bounce buffer code path
> is broken on your platform?
Yes, the above path works but the other one doesn't.
I did some digging but could not find out what the problem is. The address
returned by swiotlb_tbl_map_single() is within the memory range allocated
earlier for the Xen software IO TLB and is DMA-capable, so it seems to be OK.
What's your suggestion for further digging?
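For reference, the failing path is roughly the following. This is a paraphrased
sketch of the bounce-buffer branch of xen_swiotlb_map_page() in
drivers/xen/swiotlb-xen.c, not verbatim kernel code; the exact code differs
between kernel versions:

	/* Paraphrased sketch of the slow path, continuing the fast-path
	 * fragment quoted above. The fast path was not taken, so the
	 * buffer is bounced through the Xen software IO TLB. */
	map = swiotlb_tbl_map_single(dev, start_dma_addr, phys, size,
				     dir, attrs);
	if (map == SWIOTLB_MAP_ERROR)
		return DMA_ERROR_CODE;

	dev_addr = xen_phys_to_bus(map);

	/* xen_dma_map_page() is called again for its cache-maintenance
	 * side effects, this time on the bounce page rather than on the
	 * original page. */
	xen_dma_map_page(dev, pfn_to_page(map >> PAGE_SHIFT), dev_addr,
			 map & ~PAGE_MASK, size, dir, attrs);

	/* Ensure the bounce buffer itself is within the device's DMA mask. */
	if (dma_capable(dev, dev_addr, size))
		return dev_addr;

	swiotlb_tbl_unmap_single(dev, map, size, dir, attrs);
	return DMA_ERROR_CODE;

(swiotlb_tbl_map_single() returns the physical address of the bounce slot,
which xen_phys_to_bus() converts to the bus address handed back to the driver;
that dev_addr is the value described above as being within the IO TLB range
and DMA-capable.)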