
Re: [Xen-devel] [Qemu-devel] [PATCH 2/2] xen:i386:pc_piix: create isa bridge specific to IGD passthrough



On Wed, Sep 03, 2014 at 01:40:45AM +0000, Kay, Allen M wrote:
> 
> 
> > -----Original Message-----
> > From: Chen, Tiejun
> > Sent: Monday, September 01, 2014 12:50 AM
> > To: Michael S. Tsirkin
> > Cc: xen-devel@xxxxxxxxxxxxxxxxxxx; Kay, Allen M; qemu-devel@xxxxxxxxxx;
> > Konrad Rzeszutek Wilk
> > Subject: Re: [Qemu-devel] [Xen-devel] [PATCH 2/2] xen:i386:pc_piix: create
> > isa bridge specific to IGD passthrough
> > 
> > On 2014/9/1 14:05, Michael S. Tsirkin wrote:
> > > On Mon, Sep 01, 2014 at 10:50:37AM +0800, Chen, Tiejun wrote:
> > >> On 2014/8/31 16:58, Michael S. Tsirkin wrote:
> > >>> On Fri, Aug 29, 2014 at 09:28:50AM +0800, Chen, Tiejun wrote:
> > >>>>
> > >>>>
> > >>>> On 2014/8/28 8:56, Chen, Tiejun wrote:
> > >>>>>>>>>>> +     */
> > >>>>>>>>>>> +    dev = pci_create_simple(bus, PCI_DEVFN(0x1f, 0),
> > >>>>>>>>>>> +                            "xen-igd-passthrough-isa-bridge");
> > >>>>>>>>>>> +    if (dev) {
> > >>>>>>>>>>> +        r = xen_host_pci_device_get(&hdev, 0, 0, PCI_DEVFN(0x1f, 0), 0);
> > >>>>>>>>>>> +        if (!r) {
> > >>>>>>>>>>> +            pci_config_set_vendor_id(dev->config, hdev.vendor_id);
> > >>>>>>>>>>> +            pci_config_set_device_id(dev->config, hdev.device_id);
> > >>>>>>>>
> > >>>>>>>> Can you, instead, implement the reverse logic, probing the card
> > >>>>>>>> and supplying the correct device id for PCH?
> > >>>>>>>>
> > >>>>>>>
> > >>>>>>> What exactly is the "reverse logic" you mention, as I already
> > >>>>>>> asked previously? Do you mean I should list all GPU/PCH combos
> > >>>>>>> with their vendor/device ids in advance, and then look up
> > >>>>>>> whether we can find a
> > >>>>>>
> > >>>>>> Michael,
> > >>>>>>
> > >>>>>
> > >>>>> Ping.
> > >>>>>
> > >>>>> Thanks
> > >>>>> Tiejun
> > >>>>>
> > >>>>>> Could you explain this exactly? Then I can try to follow up on
> > >>>>>> your idea ASAP if this is necessary and possible.
> > >>>>
> > >>>> Michael,
> > >>>>
> > >>>> Could you give us some explanation of your "reverse logic" when
> > >>>> you're free?
> > >>>>
> > >>>> Thanks
> > >>>> Tiejun
> > >>>
> > >>> So future drivers will look at the device ID of the card and figure
> > >>> out how things should work from there.
> > >>> Old drivers still poke at the device id of the chipset for this, but
> > >>> maybe qemu can do what newer drivers do:
> > >>> look at the card, figure out what the guest should do, then present
> > >>> the appropriate chipset id.
> > >>>
> > >>> This is based on what Jesse said:
> > >>>         Practically speaking, we could probably assume specific GPU/PCH
> > >>>         combos, as I don't think they're generally mixed across
> > >>>         generations, though SNB and IVB did have compatible sockets, so
> > >>>         there is the possibility of mixing CPT and PPT PCHs, but those
> > >>>         are register identical as far as the graphics driver is
> > >>>         concerned, so even that should be safe.
> > >>>
> > >>
> > >> Michael,
> > >>
> > >> Thanks for your explanation.
> > >>
> > >>> so the idea is to have a reverse table mapping the GPU back to a PCH.
> > >>> Present to the guest the PCH ID that will let it drive the GPU correctly.
> > >>
> > >> I guess you mean we should create and maintain such a table:
> > >>
> > >> [GPU Card: device_id(s), PCH: device_id]
> > >>
> > >> Then each time, instead of exposing the real PCH device id directly,
> > >> qemu can first get the real GPU device id, look it up in this table,
> > >> and present an appropriate PCH device id to the guest.
> > >>
> > >> And it looks like that appropriate PCH device id is not always the
> > >> real PCH's device id. Right? If I'm wrong please correct me.
> > >
> > > Exactly: we don't really care what the PCH ID is - we only need it for
> > > the guest driver to do the right thing.
> > 
> > Agreed.
> > 
> > I need to ask Allen to check whether a given GPU card device id always
> > corresponds to one given PCH on both HSW and BDW currently. If yes, I can
> > do this quickly. Otherwise I need some time to establish that sort
> > of mapping.
> > 
> 
> If I understand this correctly, the only difference is that instead of
> reading the PCH DevID/RevID from the host hardware, QEMU inserts those
> values into the virtual PCH device by looking them up in the reverse
> mapping table it maintains.
> 
> I agree the downside of doing this is that the reverse mapping table may
> be hard to maintain.

The point is that it won't be, since future devices will not need this hack.
So we just need the table for the existing devices: build it
once and forget about it.
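
For concreteness, a minimal sketch (in QEMU-style C) of what such a
reverse-mapping table could look like. The GPU/PCH id pairs below are
illustrative placeholders only, not verified combos, and
igd_lookup_pch_id() is a name made up for this example (ARRAY_SIZE is
QEMU's usual array-length macro):

    /* Hypothetical reverse-mapping table: IGD (GPU) device id ->
     * PCH (ISA bridge) device id to present to the guest.  The id
     * pairs below are placeholders, not verified GPU/PCH combos. */
    typedef struct IGDPCHMapping {
        uint16_t gpu_device_id;
        uint16_t pch_device_id;
    } IGDPCHMapping;

    static const IGDPCHMapping igd_pch_table[] = {
        { 0x0152, 0x1e44 },    /* e.g. an IVB GT1 id -> a PPT id */
        { 0x0412, 0x8c4e },    /* e.g. an HSW GT2 id -> an LPT id */
    };

    /* Return the PCH device id to expose for a given GPU device id,
     * or 0xffff if the GPU is unknown so the caller can choose a
     * fallback. */
    static uint16_t igd_lookup_pch_id(uint16_t gpu_device_id)
    {
        size_t i;

        for (i = 0; i < ARRAY_SIZE(igd_pch_table); i++) {
            if (igd_pch_table[i].gpu_device_id == gpu_device_id) {
                return igd_pch_table[i].pch_device_id;
            }
        }
        return 0xffff;
    }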

> What is the advantage of doing this instead of having QEMU read it from
> the host?  Is it to test that the reverse mapping method works before it
> is adopted in the new drivers?

That's one reason.
But a more important one is security: we don't want QEMU
to poke at any new files on the host if we can help it;
otherwise all distros have to change their LSM policies
to allow this. In particular, when using KVM with VFIO, QEMU does not
poke at /sys/bus/pci/ at all anymore, and let's keep it that way.
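
Concretely, the hunk quoted at the top of this thread could then drop the
xen_host_pci_device_get() read entirely. A sketch only, reusing the
hypothetical igd_lookup_pch_id() from above and assuming the GPU's device
id was already captured (as gpu_device_id here) while setting up the
passthrough device itself:

    /* Derive the PCH id from the GPU being passed through instead
     * of reading the host ISA bridge at 00:1f.0. */
    dev = pci_create_simple(bus, PCI_DEVFN(0x1f, 0),
                            "xen-igd-passthrough-isa-bridge");
    if (dev) {
        uint16_t pch_id = igd_lookup_pch_id(gpu_device_id);
        if (pch_id != 0xffff) {
            pci_config_set_vendor_id(dev->config, PCI_VENDOR_ID_INTEL);
            pci_config_set_device_id(dev->config, pch_id);
        }
    }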

> > Thanks
> > Tiejun
> > 
> > >
> > >>>
> > >>> the problem with these tables is they are hard to keep up to date
> > >>
> > >> Yeah. But I think currently we can just start with some modern CPUs
> > >> such as HSW and BDW; then things could be easy.
> > >>
> > >> Allen,
> > >>
> > >> Any thoughts on this suggestion?
> > >>
> > >>> as new hardware comes out, but as future hardware won't need these
> > >>> hacks, we shall be fine.
> > >>
> > >> Yeah.
> > >>
> > >> Thanks
> > >> Tiejun
> > >>
> > >>>
> > >>>
> > >>>>>>
> > >>>>>> Thanks
> > >>>>>> Tiejun
> > >>>>>>
> > >>>>>>> matched PCH? If yes, what is the benefit you expect in the
> > >>>>>>> passthrough case? Shouldn't we pass this info to the VM directly
> > >>>>>>> in the passthrough case?
> > >>>>>>>
> > >>>>>>> Thanks
> > >>>>>>> Tiejun
> 
> Allen
