Re: [Xen-devel] [PATCH 2/6] xl: Add commands for hiding and unhiding pcie passthrough devices
On Fri, Jul 07, 2017 at 11:56:43AM +0100, Wei Liu wrote:
> On Wed, Jul 05, 2017 at 02:52:41PM -0500, Venu Busireddy wrote:
> > > [...]
> > > > diff --git a/tools/xl/xl_vmcontrol.c b/tools/xl/xl_vmcontrol.c
> > > > index 89c2b25..10a48a9 100644
> > > > --- a/tools/xl/xl_vmcontrol.c
> > > > +++ b/tools/xl/xl_vmcontrol.c
> > > > @@ -966,6 +966,15 @@ start:
> > > > LOG("Waiting for domain %s (domid %u) to die [pid %ld]",
> > > > d_config.c_info.name, domid, (long)getpid());
> > > >
> > > > + ret = libxl_reg_aer_events_handler(ctx, domid);
> > > > + if (ret) {
> > > > + /*
> > > > + * This error may not be severe enough to fail the creation of
> > > > the VM.
> > > > + * Log the error, and continue with the creation.
> > > > + */
> > > > + LOG("libxl_reg_aer_events_handler() failed, ret = 0x%08x",
> > > > ret);
> > > > + }
> > > > +
> > >
> > > First, this suggests that the ordering of this patch series is wrong --
> > > you need to put the patch that implements the new function before this
> > > one.
> >
> > I will change the order in the next revision.
> >
> > > The other thing you need to be aware of is that if the user chooses not
> > > to use a daemonised xl, he / she doesn't get a chance to handle these
> > > events.
> > >
> > > This is potentially problematic for driver domains. You probably want to
> > > also modify the xl devd command. Also, on the subject, what are your
> > > thoughts on driver domains? I'm not sure whether a driver domain has
> > > the permission to kill the guest.
> >
> > I don't know if I understood your question correctly, but it is not the
> > driver domain that is killing another guest. It is Dom0 that is killing
> > the guest to which the device is assigned in passthrough mode. That guest
> > should still be killable by Dom0, even if it is a driver domain. Right?
>
> OK. I'm not sure my understanding of how PCI passthrough works is
> correct, so please correct me if I'm wrong.
>
> First, let's separate two concepts: the toolstack domain and the driver
> domain. They are usually the same domain (Dom0), but they don't have to be.
>
> A driver domain drives the underlying hardware and provides virtualised
> devices to a DomU.
>
> AIUI (again, I could be very wrong about this):
>
> 1. PV PCI passthrough is done via pciback, which means the physical
> device is assigned to the driver domain. All events to / from the
> guest / device are handled by the driver domain -- which includes
> the AER error you're trying to handle.
>
> 2. HVM PCI passthrough is done via QEMU, but you also need to pre-assign
> the device to the driver domain in which QEMU runs. All events are only
> visible to the driver domain.
>
> Yes, a guest is always going to be killable by Dom0 (the toolstack
> domain), even if some devices of the guest are handled by a driver
> domain.
>
> But Dom0 now can't see the AER event, so it won't be able to issue the
> "kill" or whatever action you want it to do. Is this not the case?

It can. That is how it works right now - the AER errors are delivered to
the PCIe bridge driver in domain0, and the kernel then hands them to
pciback (which owns the device) to deal with.

> Do you expect the event to always be delivered to Dom0?

Yes.
>
> >
> > However, I have been asked by Jan Beulich (and many others) about the
> > need to kill the guest, and why the device can't instead be unassigned
> > from that guest! My initial thinking (for the first revision) was that the
> > guest and the device together are party to evil things, and hence the
> > guest should be killed. But I agree that unassigning the device should
> > be sufficient. Once the device is removed, the guest can't do much that
> > any other guest can't. Therefore, I plan to change this patchset to
> > simply unassign the device from the guest. This aspect is also covered
> > in the thread:
> >
> > https://lists.xenproject.org/archives/html/xen-devel/2017-07/msg00552.html
> >
> > May I request you review that thread and post your thoughts?
> >
>
> Sure. But that's orthogonal to the problem we have here. I will reply to
> that thread.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel