
Re: [XEN RFC PATCH v4 0/5] IOMMU subsystem redesign and PV-IOMMU interface



Thanks for your review.

> Hi,
> 
> I finally got time to try this revision (sorry it took so long!). My
> goal was to test it this time with some HVM domU too. I didn't get very
> far...
> 
> Issues I hit:
> 
> 1. AMD IOMMU driver is not converted (fails to build), so for now I
>     disabled CONFIG_AMD_IOMMU.

I haven't really worked on the AMD-Vi code yet. I have plans for it, but
there are some specific bits to deal with (especially regarding interrupt
remapping) that I plan to discuss during the Xen Project Winter Summit
2025.

> 2. PV shim build fails (linker fails to find p2m_add_identity_entry
>     symbol referenced from iommu.c)

I haven't considered the PV shim yet, so I am not really surprised that
there are some issues with it. We probably want to expose some PV-IOMMU
features to PV guests running under the shim, but that will likely need
some shim-specific code.
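
If the unresolved reference is the only blocker, one option could be to
compile that call out for now: assuming the p2m_add_identity_entry() use
in iommu.c only matters for HVM guests (the shim config builds with
CONFIG_HVM disabled), a crude guard along these lines might already let
it link again. This is only a sketch; "rc" and the error value are
placeholders:

    /* Hedged sketch: skip the HVM-only identity mapping when building
     * without CONFIG_HVM (e.g. the PV shim configuration). */
    #ifdef CONFIG_HVM
        /* ... p2m_add_identity_entry() call from the series ... */
    #else
        rc = -EOPNOTSUPP; /* placeholder error path for !HVM builds */
    #endif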

> 3. Xen complains on boot about missing endbr64 (surprisingly, it didn't
>     explode):
> 
>      (XEN) alt table ffff82d0404234d8 -> ffff82d040432d82
>      (XEN) altcall iommu_get_max_iova+0x11/0x30 dest iommu.c#intel_iommu_get_max_iova has no endbr64
>      (XEN) altcall context.c#iommu_reattach_phantom+0x30/0x50 dest iommu.c#intel_iommu_add_devfn has no endbr64
>      (XEN) altcall context.c#iommu_detach_phantom+0x25/0x40 dest iommu.c#intel_iommu_remove_devfn has no endbr64
>      (XEN) altcall iommu_context_init+0x27/0x40 dest iommu.c#intel_iommu_context_init has no endbr64
>      (XEN) altcall iommu_attach_context+0x3c/0xd0 dest iommu.c#intel_iommu_attach has no endbr64
>      (XEN) altcall context.c#iommu_attach_context.cold+0x1d/0x53 dest iommu.c#intel_iommu_detach has no endbr64
>      (XEN) altcall iommu_detach_context+0x37/0xa0 dest iommu.c#intel_iommu_detach has no endbr64
>      (XEN) altcall iommu_reattach_context+0x95/0x240 dest iommu.c#intel_iommu_reattach has no endbr64
>      (XEN) altcall context.c#iommu_reattach_context.cold+0x29/0x110 dest iommu.c#intel_iommu_reattach has no endbr64
>      (XEN) altcall iommu_context_teardown+0x3f/0xa0 dest iommu.c#intel_iommu_context_teardown has no endbr64
>      (XEN) altcall pci.c#deassign_device+0x99/0x270 dest iommu.c#intel_iommu_add_devfn has no endbr64
> 

I also see that, but I am not sure what I need to do to fix it.
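
Thinking about it a bit more, this is probably the CET-IBT annotation:
with CONFIG_XEN_IBT, every function reached through an altcall needs the
cf_check attribute so that the compiler emits an ENDBR64 for it. If that
is right, each of the intel_iommu_* hooks listed above would need
something like the following (the return type and parameters here are
just a guess, the relevant part is the added cf_check):

    -static uint64_t intel_iommu_get_max_iova(struct domain *d)
    +static uint64_t cf_check intel_iommu_get_max_iova(struct domain *d)

and the same treatment for intel_iommu_attach, intel_iommu_detach,
intel_iommu_reattach, intel_iommu_context_init,
intel_iommu_context_teardown, intel_iommu_add_devfn and
intel_iommu_remove_devfn.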

> 4. Starting a HVM domU with PCI device fails with:
> 
>      libxl: libxl_pci.c:1552:pci_add_dm_done: Domain 1:xc_assign_device failed: No space left on device
>      libxl: libxl_pci.c:1875:device_pci_add_done: Domain 1:libxl__device_pci_add failed for PCI device 0:aa:0.0 (rc -3)
>      libxl: libxl_create.c:2061:domcreate_attach_devices: Domain 1:unable to add pci devices
> 
> I didn't change anything in the toolstack - maybe default context needs
> to be initialized somehow? But the docs suggest the default context
> should work out of the box. On the other hand, changelog for v4 says
> some parts are moved to the toolstack, but I don't see any changes in
> tools/ in this series...
> 

So far I have only tried things inside Dom0, and I haven't really tried
passing a device through yet. I think I missed some step in the
quarantine domain initialization, which is probably why you get -ENOSPC
here. In the meantime you can try setting "iommu=quarantine=0" on the
Xen command line to disable quarantining and see whether it gets further.

I plan to do some testing of regular PCI passthrough to see if there are
issues there.

> FWIW The exact version I tried is this (this series, on top of staging +
> qubes patches):
> https://github.com/QubesOS/qubes-vmm-xen/pull/200
> At this stage, the dom0 kernel didn't have the PV-IOMMU driver included yet.
> 
> Full Xen log, with some debug info collected:
> https://gist.github.com/marmarek/e7ac2571df033c7181bf03f21aa5f9ab
> 

Thanks
Teddy



Teddy Astie | Vates XCP-ng Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech



 

