Re: [Xen-devel] Broken PCI device passthrough, after XSA-302 fix?



On Mon, Jan 20, 2020 at 09:36:27AM +0100, Jan Beulich wrote:
> On 19.01.2020 11:39, Pasi Kärkkäinen wrote:
> > On Mon, Jan 06, 2020 at 02:06:14PM +0100, Jan Beulich wrote:
> >> On 04.01.2020 02:07, Marek Marczykowski-Górecki  wrote:
> >>> I have a multi-function PCI device, behind a PCI bridge, that normally
> >>> I assign to a single domain. But now it fails with:
> >>>
> >>> (XEN) [VT-D]d14: 0000:04:00.0 owned by d0!<G><0>assign 0000:05:00.0 to 
> >>> dom14 failed (-22)
> >>
> >> I've tried this out in as close a setup as I could arrange for, but
> >> not one matching your scenario. I didn't find a system with a
> >> suitably placed (in the topology) multi-function device (had to use
> >> a single-function one), and of course I did this on (close to)
> >> master. No anomalies. Hence I wonder whether either of the two
> >> differences mentioned matters, and - if, as I suspect, it's the
> >> multi-function aspect that is relevant here - how things would have
> >> worked at all before those recent changes. This is because I think
> >> you should have hit the same error path even before, and it would
> >> seem to me that the patch below might be (and have been) needed.
> >>
> > 
> > I think Marek confirmed in the other mail that this patch fixes the issue.
> > 
> > Are you planning to merge this patch?
> 
> Well, it is still pending a maintainer ack. Kevin has requested a
> (mechanical) change just over the weekend.
>

Ah, sorry, I missed the other thread. The patch in question seems to be:

"[PATCH 1/2] VT-d: don't pass bridge devices to domain_context_mapping_one()",
in thread: "[PATCH 0/2] VT-d: domain_context_mapping_one() adjustments".


Thanks.

-- Pasi

> 
> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel