
Re: Design session notes: GPU acceleration in Xen



On Fri, Jun 14, 2024 at 10:12:40AM +0200, Jan Beulich wrote:
> On 14.06.2024 09:21, Roger Pau Monné wrote:
> > On Fri, Jun 14, 2024 at 08:38:51AM +0200, Jan Beulich wrote:
> >> On 13.06.2024 20:43, Demi Marie Obenour wrote:
> >>> GPU acceleration requires that pageable host memory be able to be mapped
> >>> into a guest.
> >>
> >> I'm sure it was explained in the session, which sadly I couldn't attend.
> >> I've been asking Ray and Xenia the same before, but I'm afraid it still
> >> hasn't become clear to me why this is a _requirement_. After all that's
> >> against what we're doing elsewhere (i.e. so far it has always been
> >> guest memory that's mapped in the host). I can appreciate that it might
> >> be more difficult to implement, but avoiding violating this fundamental
> >> (kind of) rule might be worth the price (and would avoid other
> >> complexities, of which there may be lurking more than what you enumerate
> >> below).
> > 
> > My limited understanding (please someone correct me if wrong) is that
> > the GPU buffer (or context I think it's also called?) is always
> > allocated from dom0 (the owner of the GPU).  The underlying memory
> > addresses of such a buffer need to be mapped into the guest.  The
> > buffer backing memory might be GPU MMIO from the device BAR(s) or
> > system RAM, and such buffer can be paged by the dom0 kernel at any
> > time (iow: changing the backing memory from MMIO to RAM or vice
> > versa).  Also, the buffer must be contiguous in physical address
> > space.
> 
> This last one in particular would of course be a severe restriction.
> Yet: There's an IOMMU involved, isn't there?

Yup, IIRC that's why Ray said it was much easier for them to
support VirtIO GPUs from a PVH dom0 rather than a classic PV one.

It might be easier to implement from a classic PV dom0 if there's
pv-iommu support, so that dom0 can create its own contiguous memory
buffers from the device PoV.

> > I'm not sure it's possible to ensure that when using system RAM such
> > memory comes from the guest rather than the host, as it would likely
> > require some very intrusive hooks into the kernel logic, and
> > negotiation with the guest to allocate the requested amount of
> > memory and hand it over to dom0.  If the maximum size of the buffer is
> > known in advance maybe dom0 can negotiate with the guest to allocate
> > such a region and grant it access to dom0 at driver attachment time.
> 
> Besides the thought of transiently converting RAM to kind-of-MMIO, this

As a note here, changing the type to MMIO would likely involve
modifying the EPT/NPT tables to propagate the new type.  On a PVH dom0
this would also mean shattering superpages in order to set the
correct memory types.

Depending on how often and how randomly those system RAM type changes
occur, this could also create contention on the p2m lock.

> makes me think of another possible option: Could Dom0 transfer ownership
> of the RAM that wants mapping in the guest (remotely resembling
> grant-transfer)? Would require the guest to have ballooned down enough
> first, of course. (In both cases it would certainly need working out how
> the conversion / transfer back could be made work safely and reasonably
> cleanly.)

Maybe.  The fact that the guest needs to balloon down that amount of
memory seems weird to me, as from the guest's PoV that mapped memory is
MMIO-like and not system RAM.

Thanks, Roger.



 

