
Re: [Xen-devel] Upstream Dom0 DRM problems regarding swiotlb



On Wed, Feb 13, 2019 at 11:57 AM Konrad Rzeszutek Wilk
<konrad.wilk@xxxxxxxxxx> wrote:
>
> On Wed, Feb 13, 2019 at 09:09:32AM -0700, Jan Beulich wrote:
> > >>> On 13.02.19 at 17:00, <michael.d.labriola@xxxxxxxxx> wrote:
> > > On Wed, Feb 13, 2019 at 9:28 AM Jan Beulich <JBeulich@xxxxxxxx> wrote:
> > >> >>> On 13.02.19 at 15:10, <michael.d.labriola@xxxxxxxxx> wrote:
> > >> > Ah, so this isn't necessarily Xen-specific but rather any paravirtual
> > >> > guest?  That hadn't crossed my mind.  Is there an easy way to find out
> > >> > if we're a PV guest in the need_swiotlb conditionals?
> > >>
> > >> There's xen_pv_domain(), but I think xen_swiotlb would be more to
> > >> the point if the check is already to be Xen-specific. There's no generic
> > >> "is PV" predicate that I'm aware of.
> > >
> > > Well, that makes doing conditional code right more difficult.  I
> > > assume since there isn't a generic predicate, and PV isn't new, that
> > > its absence is by design?  To rein in the temptation to sprinkle
> > > conditional code all over the kernel?  ;-)
> >
> > Well, with lguest gone, Xen is the only PV environment the kernel
> > can run in, afaik at least. I guess to decide between the suggested
> > options or the need for some abstracting macro (or yet something
> > else), you'll really need to ask the driver maintainers. Or simply
> > send a patch their way implementing one of them, and see what
> > their reaction is.
>
> Could you try this out and see if it works?  If so, I will send it out:
>
>
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
> index 9fc3296592fe..96bf1df0ed28 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
> @@ -23,6 +23,7 @@
>  #include <linux/firmware.h>
>  #include <drm/drmP.h>
>  #include <drm/drm_cache.h>
> +#include <xen/xen.h>
>  #include "amdgpu.h"
>  #include "gmc_v6_0.h"
>  #include "amdgpu_ucode.h"
> @@ -887,6 +888,8 @@ static int gmc_v6_0_sw_init(void *handle)
>                 dev_warn(adev->dev, "amdgpu: No coherent DMA available.\n");
>         }
>         adev->need_swiotlb = drm_get_max_iomem() > ((u64)1 << dma_bits);
> +       if (xen_pv_domain())
> +               adev->need_swiotlb = 1;
>
>         r = gmc_v6_0_init_microcode(adev);
>         if (r) {
> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
> index 761dcfb2fec0..710ac0ece1b0 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
> @@ -23,6 +23,7 @@
>  #include <linux/firmware.h>
>  #include <drm/drmP.h>
>  #include <drm/drm_cache.h>
> +#include <xen/xen.h>
>  #include "amdgpu.h"
>  #include "cikd.h"
>  #include "cik.h"
> @@ -1031,6 +1032,8 @@ static int gmc_v7_0_sw_init(void *handle)
>                 pr_warn("amdgpu: No coherent DMA available\n");
>         }
>         adev->need_swiotlb = drm_get_max_iomem() > ((u64)1 << dma_bits);
> +       if (xen_pv_domain())
> +               adev->need_swiotlb = 1;
>
>         r = gmc_v7_0_init_microcode(adev);
>         if (r) {
> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
> index 1ad7e6b8ed1d..c418a129bb32 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
> @@ -23,6 +23,7 @@
>  #include <linux/firmware.h>
>  #include <drm/drmP.h>
>  #include <drm/drm_cache.h>
> +#include <xen/xen.h>
>  #include "amdgpu.h"
>  #include "gmc_v8_0.h"
>  #include "amdgpu_ucode.h"
> @@ -1156,6 +1157,8 @@ static int gmc_v8_0_sw_init(void *handle)
>                 pr_warn("amdgpu: No coherent DMA available\n");
>         }
>         adev->need_swiotlb = drm_get_max_iomem() > ((u64)1 << dma_bits);
> +       if (xen_pv_domain())
> +               adev->need_swiotlb = 1;
>
>         r = gmc_v8_0_init_microcode(adev);
>         if (r) {
> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
> index bacdaef77b6c..85c0762c37ae 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
> @@ -22,6 +22,7 @@
>   */
>  #include <linux/firmware.h>
>  #include <drm/drm_cache.h>
> +#include <xen/xen.h>
>  #include "amdgpu.h"
>  #include "gmc_v9_0.h"
>  #include "amdgpu_atomfirmware.h"
> @@ -1004,6 +1005,8 @@ static int gmc_v9_0_sw_init(void *handle)
>                 printk(KERN_WARNING "amdgpu: No coherent DMA available.\n");
>         }
>         adev->need_swiotlb = drm_get_max_iomem() > ((u64)1 << dma_bits);
> +       if (xen_pv_domain())
> +               adev->need_swiotlb = 1;
>
>         if (adev->gmc.xgmi.supported) {
>                 r = gfxhub_v1_1_get_xgmi_info(adev);
> diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c
> index 59c8a6647ff2..02fba6829936 100644
> --- a/drivers/gpu/drm/radeon/radeon_device.c
> +++ b/drivers/gpu/drm/radeon/radeon_device.c
> @@ -35,6 +35,7 @@
>  #include <linux/vgaarb.h>
>  #include <linux/vga_switcheroo.h>
>  #include <linux/efi.h>
> +#include <xen/xen.h>
>  #include "radeon_reg.h"
>  #include "radeon.h"
>  #include "atom.h"
> @@ -1388,6 +1389,8 @@ int radeon_device_init(struct radeon_device *rdev,
>                 pr_warn("radeon: No coherent DMA available\n");
>         }
>         rdev->need_swiotlb = drm_get_max_iomem() > ((u64)1 << dma_bits);
> +       if (xen_pv_domain())
> +               rdev->need_swiotlb = 1;
>
>         /* Registers mapping */
>         /* TODO: block userspace mapping of io register */
>
> >
> > Jan
> >
> >

Yes, that works for me.  However, I feel like the conditional should
live in drm_get_max_iomem() (or a helper wrapping it) rather than
being duplicated after every call site...  and is just checking
xen_pv_domain() enough?  Jan made it sound like there could be other
PV cases that would also still need swiotlb.
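
For example, something like this completely untested sketch (the
helper name, its placement in drm_memory.c, and the comment are just
illustration on my part, not an existing DRM API):

    /* drivers/gpu/drm/drm_memory.c (hypothetical helper) */
    #include <linux/kernel.h>
    #include <xen/xen.h>
    #include <drm/drm_cache.h>

    /*
     * Decide whether a driver needs swiotlb for DMA at the given
     * address width.  Xen PV guests need bounce buffering regardless
     * of the max iomem address, since guest-contiguous buffers are
     * not necessarily machine-contiguous.
     */
    bool drm_need_swiotlb(int dma_bits)
    {
            if (xen_pv_domain())
                    return true;

            return drm_get_max_iomem() > ((u64)1 << dma_bits);
    }

Then every driver's

    adev->need_swiotlb = drm_get_max_iomem() > ((u64)1 << dma_bits);

would collapse to

    adev->need_swiotlb = drm_need_swiotlb(dma_bits);

and if xen_pv_domain() turns out not to be the right predicate, we
only have to fix it in one place.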

-Mike

-- 
Michael D Labriola
21 Rip Van Winkle Cir
Warwick, RI 02886
401-316-9844 (cell)
401-848-8871 (work)
401-234-1306 (home)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

