
Re: Design session notes: GPU acceleration in Xen


  • To: Demi Marie Obenour <demi@xxxxxxxxxxxxxxxxxxxxxx>, Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Christian König <christian.koenig@xxxxxxx>
  • Date: Tue, 18 Jun 2024 08:33:38 +0200
  • Cc: Jan Beulich <jbeulich@xxxxxxxx>, Xenia Ragiadakou <burzalodowa@xxxxxxxxx>, Ray Huang <ray.huang@xxxxxxx>, Xen developer discussion <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Direct Rendering Infrastructure development <dri-devel@xxxxxxxxxxxxxxxxxxxxx>, Qubes OS Development Mailing List <qubes-devel@xxxxxxxxxxxxxxxx>
  • Delivery-date: Tue, 18 Jun 2024 06:33:59 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 18.06.24 at 02:57, Demi Marie Obenour wrote:
On Mon, Jun 17, 2024 at 10:46:13PM +0200, Marek Marczykowski-Górecki wrote:
> On Mon, Jun 17, 2024 at 09:46:29AM +0200, Roger Pau Monné wrote:
>> On Sun, Jun 16, 2024 at 08:38:19PM -0400, Demi Marie Obenour wrote:
>>> In both cases, the device physical
>>> addresses are identical to dom0’s physical addresses.
>>
>> Yes, but a PV dom0 physical address space can be very scattered.
>>
>> IIRC there's a hypercall to request physically contiguous memory for
>> PV, but you don't want to be using that every time you allocate a
>> buffer (not sure it would support the sizes needed by the GPU
>> anyway).
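
For reference, the hypercall Roger means is presumably XENMEM_exchange; inside a Linux PV domain it is wrapped by xen_create_contiguous_region() from <xen/xen-ops.h> (the same helper swiotlb-xen uses for its bounce buffer, IIRC). A rough, illustrative kernel-module-style sketch of requesting a machine-contiguous 2 MiB region would be:

/*
 * Illustrative sketch only (not from any existing driver): allocate a
 * 2 MiB pseudo-physically contiguous buffer and ask Xen to make it
 * machine-contiguous as well.  The helpers are the real ones from
 * <xen/xen-ops.h>; everything else here is made up for the example.
 */
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/io.h>
#include <linux/module.h>
#include <xen/xen-ops.h>

#define CONTIG_ORDER 9				/* 2 MiB with 4 KiB pages */

static struct page *contig_pages;
static dma_addr_t contig_dma;

static int __init contig_demo_init(void)
{
	int rc;

	/* Pseudo-physically contiguous pages from the buddy allocator. */
	contig_pages = alloc_pages(GFP_KERNEL, CONTIG_ORDER);
	if (!contig_pages)
		return -ENOMEM;

	/*
	 * Ask the hypervisor (XENMEM_exchange under the hood) to swap the
	 * backing frames so the range is machine-contiguous too.  This is
	 * the expensive operation you don't want on every buffer.
	 */
	rc = xen_create_contiguous_region(page_to_phys(contig_pages),
					  CONTIG_ORDER, 32, &contig_dma);
	if (rc) {
		__free_pages(contig_pages, CONTIG_ORDER);
		return rc;
	}

	pr_info("contig demo: machine address %pad\n", &contig_dma);
	return 0;
}

static void __exit contig_demo_exit(void)
{
	xen_destroy_contiguous_region(page_to_phys(contig_pages), CONTIG_ORDER);
	__free_pages(contig_pages, CONTIG_ORDER);
}

module_init(contig_demo_init);
module_exit(contig_demo_exit);
MODULE_LICENSE("GPL");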

> Indeed, that isn't going to fly. In older Qubes versions we had PV
> sys-net with PCI passthrough for a network card. After some uptime it
> was basically impossible to restart it and still have enough contiguous
> memory for the network driver, and there it was about _much_ smaller
> buffers, like 2M or 4M. At least not without shutting down a lot of
> other things to free up more memory.

Ouch!  That makes me wonder whether all GPU drivers actually need
physically contiguous buffers, or whether it is (as I suspect)
driver-specific. CCing Christian König, who has mentioned issues in this
area.

Well, GPUs don't need physically contiguous memory to function, but if they only get 4k pages to work with, that means a quite large (up to 30%) performance penalty.
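
What the driver-side allocators therefore try to do is hand the GPU 2 MiB chunks whenever possible and only fall back to 4 KiB pages under memory pressure, so the GPU page tables can use huge entries. Very roughly, the idea looks like this (a simplified sketch, not the actual TTM/amdgpu pool code; the function name is made up):

/*
 * Simplified sketch (not the real TTM pool allocator): prefer 2 MiB
 * chunks for buffer backing and fall back to single 4 KiB pages only
 * when high-order allocations fail.
 */
#include <linux/gfp.h>
#include <linux/mm.h>

#define HUGE_ORDER 9	/* 2 MiB with 4 KiB base pages */

static struct page *alloc_buffer_chunk(unsigned long *npages)
{
	unsigned int order;

	for (order = HUGE_ORDER; ; order--) {
		/* Don't thrash reclaim for the big orders, just fall back. */
		gfp_t gfp = GFP_KERNEL |
			    (order ? __GFP_NORETRY | __GFP_NOWARN : 0);
		struct page *pages = alloc_pages(gfp, order);

		if (pages) {
			*npages = 1UL << order;
			return pages;
		}
		if (!order)
			return NULL;	/* even a single 4 KiB page failed */
	}
}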

So scattering memory like you described is probably a very bad idea if you want any halfway decent performance.

Regards,
Christian.


Given the recent progress on PVH dom0, is it reasonable to assume that
PVH dom0 will be ready in time for R4.3, and that therefore Qubes OS
doesn't need to worry about this problem on x86?




 

