
Re: [PATCH v2 25/25] drm/xlnx: Compute dumb-buffer sizes with drm_mode_size_dumb()


  • To: Thomas Zimmermann <tzimmermann@xxxxxxx>, maarten.lankhorst@xxxxxxxxxxxxxxx, mripard@xxxxxxxxxx, airlied@xxxxxxxxx, simona@xxxxxxxx
  • From: Tomi Valkeinen <tomi.valkeinen@xxxxxxxxxxxxxxxx>
  • Date: Thu, 16 Jan 2025 12:03:24 +0200
  • Cc: dri-devel@xxxxxxxxxxxxxxxxxxxxx, linux-mediatek@xxxxxxxxxxxxxxxxxxx, freedreno@xxxxxxxxxxxxxxxxxxxxx, linux-arm-msm@xxxxxxxxxxxxxxx, imx@xxxxxxxxxxxxxxx, linux-samsung-soc@xxxxxxxxxxxxxxx, nouveau@xxxxxxxxxxxxxxxxxxxxx, virtualization@xxxxxxxxxxxxxxx, spice-devel@xxxxxxxxxxxxxxxxxxxxx, linux-renesas-soc@xxxxxxxxxxxxxxx, linux-rockchip@xxxxxxxxxxxxxxxxxxx, linux-tegra@xxxxxxxxxxxxxxx, intel-xe@xxxxxxxxxxxxxxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxxx, Laurent Pinchart <laurent.pinchart@xxxxxxxxxxxxxxxx>, Andy Yan <andyshrk@xxxxxxx>, Daniel Stone <daniel@xxxxxxxxxxxxx>
  • Delivery-date: Thu, 16 Jan 2025 10:03:39 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Hi,

On 16/01/2025 10:09, Thomas Zimmermann wrote:
> Hi
>
> On 15.01.25 at 15:20, Tomi Valkeinen wrote:
>> [...]

>> My point is that we have the current UAPI, and we have userspace using it, but we don't have clear rules about what the ioctl does with specific parameters, and we don't document how it has to be used.

>> Perhaps the situation is bad, and all we can really say is that CREATE_DUMB only works with simple RGB formats, and the behavior for all other formats is platform specific. But I think even that would be valuable in the UAPI docs.

> To be honest, I would not want to specify behavior for anything but the linear RGB formats. If anything, I'd take Daniel's reply mail as documentation as-is. Anyone stretching the UAPI beyond RGB is on their own.


>> Thinking about this, I wonder if this change is good for omapdrm or xilinx (and probably other platforms too that support non-simple, non-RGB formats via dumb buffers): without this patch, the pitch calculation in both drivers just takes the bpp as plain bits-per-pixel, aligns it up, and that's it.

>> With this patch we end up using drm_driver_color_mode_format() and aligning buffers according to RGB formats figured out via heuristics. It does happen to work for the formats I tested, but it sounds like something that might easily not work, as it's making adjustments based on the wrong format.

>> Should we have another version of drm_mode_size_dumb() which just calculates using the bpp, without the drm_driver_color_mode_format() path? Or does the drm_driver_color_mode_format() path provide some value for the drivers that do not currently do anything similar?
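(For concreteness, the bpp-only path I mean above looks roughly like this. This is a simplified sketch using generic kernel helpers, not the exact omapdrm/xilinx code, and the 8-byte pitch alignment is an assumption; each driver uses its own value:)

/*
 * Treat bpp as plain bits-per-pixel: derive the pitch, align it,
 * compute the total size. No attempt is made to guess a pixel format.
 */
static int sketch_size_dumb_bpp_only(struct drm_mode_create_dumb *args)
{
	u32 pitch = DIV_ROUND_UP(args->width * args->bpp, 8);

	args->pitch = ALIGN(pitch, 8);
	args->size = PAGE_ALIGN((u64)args->pitch * args->height);

	return 0;
}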

> With the RGB-only rule, using drm_driver_color_mode_format() makes sense. It aligns dumb buffers and video=, provides error checking, and overall harmonizes the code. The fallback is only required because of the existing odd cases that already bend the UAPI's rules.

I have to disagree here.

On the platforms I have been using (omap, tidss, xilinx, rcar), dumb buffers are the only buffers you can get from the DRM driver. Dumb buffers have been used to allocate linear and multiplanar YUV buffers on those platforms for a very long time.
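For example, a common userspace pattern for NV12 is to over-allocate a bpp=8 dumb buffer and let userspace place the planes itself. Roughly like this (a sketch of the usual approach, error handling omitted; not taken from any particular project):

#include <stdint.h>
#include <sys/ioctl.h>
#include <xf86drm.h>
#include <xf86drmMode.h>
#include <drm_fourcc.h>

/*
 * Allocate an NV12 buffer with CREATE_DUMB by requesting a bpp=8
 * buffer tall enough for both planes, then hand the two planes to
 * ADD_FB2. Error handling omitted for brevity.
 */
static uint32_t create_nv12_fb(int fd, uint32_t width, uint32_t height)
{
	struct drm_mode_create_dumb creq = {
		.width = width,
		.height = height * 3 / 2,	/* Y plane + half-height UV plane */
		.bpp = 8,			/* one byte per luma sample */
	};
	uint32_t handles[4] = {0}, pitches[4] = {0}, offsets[4] = {0};
	uint32_t fb_id = 0;

	ioctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &creq);

	handles[0] = creq.handle;		/* Y plane */
	pitches[0] = creq.pitch;
	handles[1] = creq.handle;		/* interleaved CbCr plane */
	pitches[1] = creq.pitch;
	offsets[1] = creq.pitch * height;	/* UV starts after Y */

	drmModeAddFB2(fd, width, height, DRM_FORMAT_NV12,
		      handles, pitches, offsets, &fb_id, 0);

	return fb_id;
}

The kernel never learns that this is an NV12 buffer; it only ever sees a width, a height and bpp=8.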

I tried to look around, but I did not find any mention that CREATE_DUMB should only be used for RGB buffers. Is anyone outside the core developers even aware of it?

If we don't use dumb buffers there, where do we get the buffers? Maybe from a V4L2 device or from a GPU device, but often you don't have those. DMA_HEAP is there, of course.

So we have the option of getting DMA_HEAP buffers, specifying just the size of the buffer. Since we only specify the size, userspace has to understand the requirements of the format and the platform.
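The allocation itself takes nothing but a byte count (a minimal sketch assuming the generic "system" heap, error handling omitted):

#include <fcntl.h>
#include <stddef.h>
#include <sys/ioctl.h>
#include <linux/dma-heap.h>

/*
 * Allocate 'len' bytes from the system dma-heap. The kernel knows
 * nothing about width, height or format here; all layout decisions
 * are left to userspace. Error handling omitted for brevity.
 */
static int alloc_from_heap(size_t len)
{
	struct dma_heap_allocation_data alloc = {
		.len = len,
		.fd_flags = O_RDWR | O_CLOEXEC,
	};
	int heap_fd = open("/dev/dma_heap/system", O_RDWR);

	ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &alloc);

	return alloc.fd;	/* the resulting dma-buf fd */
}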

Or we can use CREATE_DUMB, specifying the width, height and bits-per-pixel, and if we don't have any heuristics for figuring out the pixel format (as has been the case so far), the end result is exactly the same as with DMA_HEAP (i.e. we essentially just define the size of the buffer).

So, on these platforms (omap, tidss, xilinx, rcar), CREATE_DUMB has always meant just "give me X amount of memory that can be used for scanout". With this series, the meaning of the ioctl changes to "give me a memory buffer that works with an RGB format of this width, height and bpp".

In practice I believe that doesn't cause regressions, as aligning buffers according to RGB pixel format rules happens to be fine for YUV formats too, but I'm not sure (and it already almost caused a regression with bpp=64). And I'm having trouble seeing the upside.

Aligning video= and dumb buffers almost sounds like going backwards. The video= parameter is bad, so let's make dumb buffers bad too?

Harmonizing the code is fine, but I think that can be done with a function that only implements the fallback case, along the lines of the sketch below.
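Something like this, i.e. a shared helper that sizes the buffer purely from bpp. (The name, signature and overflow checks here are my assumption, not part of this series:)

/*
 * Hypothetical fallback-only helper: size a dumb buffer purely from
 * width/height/bpp, with no attempt to guess a pixel format.
 */
static int drm_mode_size_dumb_bpp(struct drm_mode_create_dumb *args,
				  unsigned long pitch_align)
{
	u64 pitch = DIV_ROUND_UP_ULL((u64)args->width * args->bpp, 8);

	if (pitch_align)
		pitch = ALIGN(pitch, pitch_align);

	if (pitch > U32_MAX)
		return -EINVAL;

	args->pitch = pitch;
	args->size = PAGE_ALIGN((u64)args->pitch * args->height);

	return 0;
}

Drivers would keep passing their own pitch alignment, and the format heuristics would stay out of the picture entirely.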

So... I can only speak for the platforms I'm using and maintaining, but I'd rather keep the old CREATE_DUMB behavior that we've had for ages.

 Tomi
