
Re: [Xen-devel] How to create PCH to support those existing driver

On 2014/8/18 16:21, Michael S. Tsirkin wrote:
On Mon, Aug 18, 2014 at 11:06:29AM +0800, Chen, Tiejun wrote:
On 2014/8/17 18:32, Michael S. Tsirkin wrote:
On Fri, Aug 15, 2014 at 09:58:40AM +0800, Chen, Tiejun wrote:
Michael and Paolo,

Please re-post discussion on list. These off list ones are just
wasting time since they invariably have to be repeated on list again.

Okay, I'm now reissuing this discussion to everyone involved. And do you think
we need to discuss this in public, on the qemu and xen mailing lists?


Now CCing qemu, xen and intel-gfx.

If I'm missing someone important please tell me as well.

After creating that new machine type specific to IGD passthrough, xenigd, my
next step is to register the PCH.

IIRC, our complete solution should be as follows:

#1 create a new machine based on piix, xenigd

This is done with Michael's help.

#2 register ISA bridge

1> It's still fixed at 1f.0.
2> The ISA bridge's vendor_id/device_id should be emulated, but with
        subsystem_vendor_id = PCI_VENDOR_ID_XEN;
        subsystem_device_id = the ISA bridge's real device id;

This means we need to change the driver to work this way. For example, in the
case of the native Linux driver:

        if (pch->subsystem_vendor == PCI_VENDOR_ID_XEN)
                id = pch->subsystem_device & INTEL_PCH_DEVICE_ID_MASK;

Then the driver can get the real device id from 'subsystem_device', right?

This is fine now, but how do we support those existing drivers which are just

Let me correct one point here: we don't need to care about supporting the
legacy driver, since the legacy driver should still work with qemu-traditional.
So we just need to make sure the existing driver, using this subsystem_id
approach, can support the existing and legacy platforms.

Now this is clear to me.

dependent on checking the real vendor_id/device_id directly:

        if (pch->vendor == PCI_VENDOR_ID_INTEL) {
                unsigned short id = pch->device & INTEL_PCH_DEVICE_ID_MASK;

Maybe I'm missing something; please give me a hint.


The subsystem id was just one idea.

But from that email thread, "RH / Intel Virtualization Engineering Meeting -
Minutes 7/10", I didn't see another idea we should prefer currently.

What was finally agreed for future drivers is that guests will get all
information they need from the video card, this ID hack was needed only
for very old legacy devices.
So for newer guests: they will work without any need
for hacks, like any other device.

Actually we had a meeting to discuss our future solution, but it seems you
were on vacation at that time :)

In that meeting we had an agreement between us and some upstream guys.

In the future we will have a PCI capability structure in this PCI device to
represent all the information. This makes sense to Intel as well.

Maybe Allen or Paolo know more details.

But obviously this is a long-term solution, so for now we will work with the
subsystem_id approach temporarily. And this approach was accepted by those
guys in that meeting.

I don't see the point really. If you are modifying the driver,

Yes, we need to modify something in the driver.

why not modify it to its final form.

What's your final form?

As I follow that email thread, it seems the following is just the approach on which you guys reached a better agreement.

> why not set the subsys vid to the RH/Qumranet/Virtio VID, so it's
> obvious for the use-match?

That's exactly the suggestion. Though upstream they might be using the XenSource id since the patches were for Xen.

Or I'm missing something?


For existing drivers: the Vendor ID is Intel anyway.


For the device ID, override it through a property
or something. But I think poking at the real host from
qemu is a mistake: the host is not
protected by the iommu.
Two possible suggestions were to reverse-detect
id of the device from the card that is assigned,

I guess you're saying pci_get_device(vendor/device_ids), right?

or just make it a user property, and move the smarts
to management.
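If I understand the property idea, the invocation might look something like the following, with management (libvirt/libxl) reading the host's real ISA bridge id and passing it down. The sysfs path is real (the bridge is fixed at 00:1f.0, as noted above), but the device and property names here are purely made up for illustration:

```shell
# Management reads the host PCH device id from the fixed 00:1f.0 slot ...
PCH_ID=$(cat /sys/bus/pci/devices/0000:00:1f.0/device)

# ... and hands it to qemu as a plain user-visible property
# (hypothetical device/property names):
qemu-system-x86_64 -machine xenigd \
    -device igd-pch-isa-bridge,pch-device-id="$PCH_ID" \
    ...
```

This keeps qemu itself from poking at the host: the "smarts" live in the management layer, and qemu just emulates whatever id it is told.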

Sorry, could you elaborate on this approach in detail?


Will do but let's do it on the mailing list.
