
Re: [XEN RFC PATCH v4 0/5] IOMMU subsystem redesign and PV-IOMMU interface



On Thu, Jan 09, 2025 at 11:39:04AM +0000, Teddy Astie wrote:
> Thanks for your review.
> 
> > Hi,
> > 
> > I finally got time to try this revision (sorry it took so long!). My
> > goal was to test it this time with some HVM domU too. I didn't get very
> > far...
> > 
> > Issues I hit:
> > 
> > 1. The AMD IOMMU driver is not converted (it fails to build); for now I
> >     disabled CONFIG_AMD_IOMMU.
> 
> I haven't really worked on the AMD-Vi code yet. I have plans for it, but
> there are some specific bits to deal with (especially regarding interrupt
> remapping) which I plan to discuss at the Xen Project Winter Summit 2025.

:)

> > 2. PV shim build fails (linker fails to find p2m_add_identity_entry
> >     symbol referenced from iommu.c)
> 
> I haven't considered the PV shim yet, so I am not really surprised that
> there are some issues with it. We probably want to expose some PV-IOMMU
> features to PV guests under the PV shim, but that will likely need some
> dedicated code.

I'm not sure whether passthrough is supported with the PV shim at all (I
never tried). In any case, the current issue hits much earlier ;)

> > 3. Xen complains on boot about missing endbr64 (surprisingly, it didn't
> >     explode):
> > 
> >      (XEN) alt table ffff82d0404234d8 -> ffff82d040432d82
> >      (XEN) altcall iommu_get_max_iova+0x11/0x30 dest iommu.c#intel_iommu_get_max_iova has no endbr64
> >      (XEN) altcall context.c#iommu_reattach_phantom+0x30/0x50 dest iommu.c#intel_iommu_add_devfn has no endbr64
> >      (XEN) altcall context.c#iommu_detach_phantom+0x25/0x40 dest iommu.c#intel_iommu_remove_devfn has no endbr64
> >      (XEN) altcall iommu_context_init+0x27/0x40 dest iommu.c#intel_iommu_context_init has no endbr64
> >      (XEN) altcall iommu_attach_context+0x3c/0xd0 dest iommu.c#intel_iommu_attach has no endbr64
> >      (XEN) altcall context.c#iommu_attach_context.cold+0x1d/0x53 dest iommu.c#intel_iommu_detach has no endbr64
> >      (XEN) altcall iommu_detach_context+0x37/0xa0 dest iommu.c#intel_iommu_detach has no endbr64
> >      (XEN) altcall iommu_reattach_context+0x95/0x240 dest iommu.c#intel_iommu_reattach has no endbr64
> >      (XEN) altcall context.c#iommu_reattach_context.cold+0x29/0x110 dest iommu.c#intel_iommu_reattach has no endbr64
> >      (XEN) altcall iommu_context_teardown+0x3f/0xa0 dest iommu.c#intel_iommu_context_teardown has no endbr64
> >      (XEN) altcall pci.c#deassign_device+0x99/0x270 dest iommu.c#intel_iommu_add_devfn has no endbr64
> > 
> 
> I also see that, but I am not sure what I need to do to fix it.

I guess you need to add the "cf_check" annotation to the functions that
are called indirectly.
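
Something along these lines for each hook listed above (just a sketch;
the hook name is taken from your log, but the exact signature is my
guess, not necessarily what the series uses):

    static uint64_t cf_check intel_iommu_get_max_iova(struct domain *d)
    {
        /* existing body unchanged; only the cf_check annotation is added */
        return 0;
    }

With CET-IBT enabled, every altcall/indirect-call destination needs
cf_check so the compiler emits an ENDBR64 landing pad at its entry
point, which is exactly what those warnings are about.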

> > 4. Starting an HVM domU with a PCI device fails with:
> > 
> >      libxl: libxl_pci.c:1552:pci_add_dm_done: Domain 1:xc_assign_device failed: No space left on device
> >      libxl: libxl_pci.c:1875:device_pci_add_done: Domain 1:libxl__device_pci_add failed for PCI device 0:aa:0.0 (rc -3)
> >      libxl: libxl_create.c:2061:domcreate_attach_devices: Domain 1:unable to add pci devices
> > 
> > I didn't change anything in the toolstack - maybe default context needs
> > to be initialized somehow? But the docs suggest the default context
> > should work out of the box. On the other hand, changelog for v4 says
> > some parts are moved to the toolstack, but I don't see any changes in
> > tools/ in this series...
> > 
> 
> I only tried things inside Dom0; I haven't really tried passing through
> a device. I think I missed some step regarding quarantine domain
> initialization, which is probably why you get "-ENOSPC" here. In the
> meantime you can try setting "quarantine=0" to disable that part and see
> whether it gets further.
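
FWIW, I took "quarantine=0" to mean the usual iommu sub-option on the
Xen command line (assuming the series keeps the existing option), i.e.
something like:

    iommu=no-quarantine

on the xen.gz line in grub.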

That helped a bit. Now the domU starts, but the device doesn't work - qemu
complains:

[2025-01-09 06:52:45] [00:08.0] xen_pt_realize: Real physical device 00:0d.3 registered successfully
...
[2025-01-09 06:52:45] [00:09.0] xen_pt_realize: Real physical device 00:0d.2 registered successfully
...
[2025-01-09 06:52:45] [00:0a.0] xen_pt_realize: Real physical device 00:0d.0 registered successfully
...
[2025-01-09 06:52:59] [00:0a.0] xen_pt_msgctrl_reg_write: setup MSI (register: 87).
[2025-01-09 06:52:59] [00:0a.0] msi_msix_setup: Error: Mapping of MSI (err: 19, vec: 0x25, entry 0x0)
[2025-01-09 06:52:59] [00:0a.0] xen_pt_msgctrl_reg_write: Warning: Can not map MSI (register: 86)!
[2025-01-09 06:54:21] [00:08.0] msix_set_enable: disabling MSI-X.
[2025-01-09 06:54:21] [00:08.0] xen_pt_msixctrl_reg_write: disable MSI-X
[2025-01-09 06:54:21] [00:09.0] xen_pt_msixctrl_reg_write: enable MSI-X
[2025-01-09 06:54:21] [00:09.0] msi_msix_setup: Error: Mapping of MSI-X (err: 19, vec: 0xef, entry 0x0)
[2025-01-09 06:54:21] [00:09.0] msi_msix_setup: Error: Mapping of MSI-X (err: 19, vec: 0xef, entry 0x1)
[2025-01-09 06:54:21] [00:09.0] msi_msix_setup: Error: Mapping of MSI-X (err: 19, vec: 0xef, entry 0x2)
[2025-01-09 06:54:21] [00:09.0] msi_msix_setup: Error: Mapping of MSI-X (err: 19, vec: 0xef, entry 0x3)
[2025-01-09 06:54:21] [00:09.0] msi_msix_setup: Error: Mapping of MSI-X (err: 19, vec: 0xef, entry 0x4)
[2025-01-09 06:54:21] [00:09.0] msi_msix_setup: Error: Mapping of MSI-X (err: 19, vec: 0xef, entry 0x5)
[2025-01-09 06:54:21] [00:09.0] msi_msix_setup: Error: Mapping of MSI-X (err: 19, vec: 0xef, entry 0x6)
[2025-01-09 06:54:21] [00:09.0] msi_msix_setup: Error: Mapping of MSI-X (err: 19, vec: 0xef, entry 0x7)
[2025-01-09 06:54:21] [00:09.0] msi_msix_setup: Error: Mapping of MSI-X (err: 19, vec: 0xef, entry 0x8)
[2025-01-09 06:54:21] [00:09.0] msi_msix_setup: Error: Mapping of MSI-X (err: 19, vec: 0xef, entry 0x9)
[2025-01-09 06:54:21] [00:09.0] msi_msix_setup: Error: Mapping of MSI-X (err: 19, vec: 0xef, entry 0xa)
[2025-01-09 06:54:21] [00:09.0] msi_msix_setup: Error: Mapping of MSI-X (err: 19, vec: 0xef, entry 0xb)

and, interestingly, Xen says all the devices are still in dom0 (which would
also explain the err 19, i.e. ENODEV, above):

[2025-01-09 06:53:39] (XEN) ==== PCI devices ====
[2025-01-09 06:53:39] (XEN) ==== segment 0000 ====
[2025-01-09 06:53:39] (XEN) 0000:aa:00.0 - d0 - node -1
[2025-01-09 06:53:39] (XEN) 0000:01:00.0 - d0 - node -1  - MSIs < 132 133 134 135 136 >
[2025-01-09 06:53:39] (XEN) 0000:00:1f.5 - d0 - node -1
[2025-01-09 06:53:39] (XEN) 0000:00:1f.4 - d0 - node -1
[2025-01-09 06:53:39] (XEN) 0000:00:1f.3 - d0 - node -1  - MSIs < 139 >
[2025-01-09 06:53:39] (XEN) 0000:00:1f.0 - d0 - node -1
[2025-01-09 06:53:39] (XEN) 0000:00:1d.0 - d0 - node -1  - MSIs < 131 >
[2025-01-09 06:53:39] (XEN) 0000:00:16.0 - d0 - node -1  - MSIs < 138 >
[2025-01-09 06:53:39] (XEN) 0000:00:15.3 - d0 - node -1
[2025-01-09 06:53:39] (XEN) 0000:00:15.1 - d0 - node -1
[2025-01-09 06:53:39] (XEN) 0000:00:15.0 - d0 - node -1
[2025-01-09 06:53:39] (XEN) 0000:00:14.2 - d0 - node -1
[2025-01-09 06:53:39] (XEN) 0000:00:14.0 - d0 - node -1
[2025-01-09 06:53:39] (XEN) 0000:00:12.0 - d0 - node -1
[2025-01-09 06:53:39] (XEN) 0000:00:0d.3 - d0 - node -1
[2025-01-09 06:53:39] (XEN) 0000:00:0d.2 - d0 - node -1
[2025-01-09 06:53:39] (XEN) 0000:00:0d.0 - d0 - node -1
[2025-01-09 06:53:39] (XEN) 0000:00:0a.0 - d0 - node -1
[2025-01-09 06:53:39] (XEN) 0000:00:08.0 - d0 - node -1
[2025-01-09 06:53:39] (XEN) 0000:00:07.3 - d0 - node -1  - MSIs < 130 >
[2025-01-09 06:53:39] (XEN) 0000:00:07.2 - d0 - node -1  - MSIs < 129 >
[2025-01-09 06:53:39] (XEN) 0000:00:07.1 - d0 - node -1  - MSIs < 128 >
[2025-01-09 06:53:39] (XEN) 0000:00:07.0 - d0 - node -1  - MSIs < 127 >
[2025-01-09 06:53:39] (XEN) 0000:00:06.0 - d0 - node -1  - MSIs < 126 >
[2025-01-09 06:53:39] (XEN) 0000:00:04.0 - d0 - node -1
[2025-01-09 06:53:39] (XEN) 0000:00:02.0 - d0 - node -1  - MSIs < 137 >
[2025-01-09 06:53:39] (XEN) 0000:00:00.0 - d0 - node -1

I don't see any errors from toolstack this time.

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
