Re: [Xen-devel] Upstream Dom0 DRM problems regarding swiotlb
>>> On 13.02.19 at 15:10, <michael.d.labriola@xxxxxxxxx> wrote:
> On Wed, Feb 13, 2019 at 5:34 AM Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>
>> >>> On 12.02.19 at 19:46, <michael.d.labriola@xxxxxxxxx> wrote:
>> > Konrad,
>> >
>> > Starting w/ v4.17, I cannot log in to GNOME w/out getting the
>> > following mess in dmesg and ending up back at the GDM login screen.
>> >
>> > [ 28.554259] radeon_dp_aux_transfer_native: 200 callbacks suppressed
>> > [ 31.219821] radeon 0000:01:00.0: swiotlb buffer is full (sz: 2097152
>> > bytes)
>> > [ 31.220030] [drm:radeon_gem_object_create [radeon]] *ERROR* Failed
>> > to allocate GEM object (16384000, 2, 4096, -14)
>> > [ 31.226109] radeon 0000:01:00.0: swiotlb buffer is full (sz: 2097152
>> > bytes)
>> > [ 31.226300] [drm:radeon_gem_object_create [radeon]] *ERROR* Failed
>> > to allocate GEM object (16384000, 2, 4096, -14)
>> > [ 31.300734] gnome-shell[1935]: segfault at 88 ip 00007f39151cd904
>> > sp 00007ffc97611ad8 error 4 in libmutter-cogl.so[7f3915178000+aa000]
>> > [ 31.300745] Code: 5f c3 0f 1f 40 00 48 8b 47 78 48 8b 40 40 ff e0
>> > 66 0f 1f 44 00 00 48 8b 47 78 48 8b 40 48 ff e0 66 0f 1f 44 00 00 48
>> > 8b 47 78 <48> 8b 80 88 00 00 00 ff e0 0f 1f 00 48 8b 47 78 48 8b 40 68
>> > ff e0
>> > [ 38.193302] radeon_dp_aux_transfer_native: 116 callbacks suppressed
>> > [ 40.009317] radeon 0000:01:00.0: swiotlb buffer is full (sz: 2097152
>> > bytes)
>> > [ 40.009488] [drm:radeon_gem_object_create [radeon]] *ERROR* Failed
>> > to allocate GEM object (16384000, 2, 4096, -14)
>> > [ 40.015114] radeon 0000:01:00.0: swiotlb buffer is full (sz: 2097152
>> > bytes)
>> > [ 40.015297] [drm:radeon_gem_object_create [radeon]] *ERROR* Failed
>> > to allocate GEM object (16384000, 2, 4096, -14)
>> > [ 40.028302] gnome-shell[2431]: segfault at 2dadf40 ip
>> > 0000000002dadf40 sp 00007ffcd24ea5f8 error 15
>> > [ 40.028306] Code: 20 6e 31 00 00 00 00 00 00 00 00 37 e3 3d 2d 7f
>> > 00 00 80 f4 e6 3d 2d 7f 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>> > 00 00 00 <00> 00 00 00 00 00 00 00 c1 00 00 00 00 00 00 00 80 e1 d2 03
>> > 00 00
>> >
>> >
>> > This happens w/ both radeon and amdgpu.
>> >
>> > I bisected down to the following range of commits, which basically add
>> > conditional code to radeon and amdgpu to NOT use swiotlb if dma_bits
>> > is smaller than the system's max iomem address... but that very much
>> > doesn't work on a Xen dom0.
>>
>> Well, not so much a Xen Dom0, but a Xen PV domain.
>>
>> > 82626363 drm: add func to get max iomem address v2
>> > fd5fd480 drm/amdgpu: only enable swiotlb alloc when need v2
>> > 1bc3d3cc drm/radeon: only enable swiotlb path when need v2
>> >
>> > Reverting the offending commits gives me a usable v4.20 dom0 kernel
>> > w/ working 3d support.  Not sure what the appropriate upstream fix
>> > for this would be, as I don't 100% understand this.  Could you
>> > enlighten me?  ;-)
>>
>> Well, this depends on how much abstraction we want, and how
>> much abstraction the maintainers of the DRM drivers demand.
>> It could be as simple as adding xen_swiotlb checks into the
>> conditionals setting ->need_swiotlb, but in an abstract sense
>> the issue of course exists for PV guests of any hypervisor.
>> (Altering drm_get_max_iomem() itself would seem wrong to me,
>> unless its name was also changed.)
>
> Ah, so this isn't necessarily Xen-specific but rather any paravirtual
> guest?  That hadn't crossed my mind.  Is there an easy way to find out
> if we're a pv guest in the need_swiotlb conditionals?
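For concreteness, the conditionals in question (introduced by the
bisected commits) look roughly like the following sketch, paraphrased
from the radeon variant; the amdgpu side is analogous:

    /* radeon_device.c (sketch): flag swiotlb as needed only when some
     * RAM/iomem lies above what the device's DMA mask can reach. */
    rdev->need_swiotlb = drm_get_max_iomem() > ((u64)1 << dma_bits);

    /* radeon_ttm.c (sketch): the coherent-DMA page pool is then used
     * only when that flag was set. */
    #ifdef CONFIG_SWIOTLB
    if (rdev->need_swiotlb && swiotlb_nr_tbl()) {
        return ttm_dma_populate(&gtt->ttm, rdev->dev, ctx);
    }
    #endif

On Xen PV, the kernel's pseudo-physical addresses say nothing about
the machine addresses the device actually DMAs to, so when the flag
stays unset the driver maps pages individually instead, each mapping
potentially bouncing through the (quickly exhausted) swiotlb pool,
hence the "swiotlb buffer is full" messages above.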
There's xen_pv_domain(), but I think xen_swiotlb would be more to the
point if the check is already to be Xen-specific. There's no generic
"is PV" predicate that I'm aware of.

> If not, we
> should at least add a module parameter to force swiotlb usage to both
> radeon and amdgpu.  I'd be more than happy to gin up a patch to do
> either and submit to upstream (dri-devel, I guess).

I don't think module parameters are a good way forward here. They may
do as a temporary workaround, but not as a solution.

Jan
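For reference, a minimal sketch of the Xen-specific check discussed
above, assuming it is folded straight into the radeon call site shown
earlier (amdgpu would mirror it); xen_pv_domain() comes from
<xen/xen.h>, and the xen_swiotlb flag Jan mentions would be even more
direct where Xen-specific code is acceptable:

    #include <xen/xen.h>    /* xen_pv_domain() */

    /* Sketch only: a PV domain always needs the coherent-DMA/swiotlb
     * path, because guest-physical and machine addresses differ there
     * and drm_get_max_iomem() cannot tell. */
    rdev->need_swiotlb = xen_pv_domain() ||
                         drm_get_max_iomem() > ((u64)1 << dma_bits);

A more abstract variant, per Jan's point about PV guests of other
hypervisors, would centralize such a check in a DRM helper instead of
patching each driver's conditional separately.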