
Re: [Xen-users] AMD-Vi/Intel VT-d: Passthrough or virtualization?

On Tue, 4 Jun 2013 08:24:53 -0400, Nick Khamis <symack@xxxxxxxxx> wrote:
There was no way I was going to pass on this conversation, even with
my busy schedule.

This means that if you only have a single video card, you will potentially lose video output for everything but that VM. The best setup seems to be an integrated GPU (like the ones built into most current processors) handling the hypervisor and the other VMs via the standard emulated VGA drivers, plus a discrete video card for the gaming VM.

My understanding was that this relied on the number of Virtual
Functions a PCI device was equipped with in the firmware. This at
least is the case for network cards...

Is it? I cannot say I tested PCI passthrough with all of my network
cards, but if special support for this is required from the NIC
firmware, I'd have expected at least some to not work, especially the
low end ones. Yet this doesn't seem to have been the case.
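For what it's worth, plain PCI passthrough does not need SR-IOV support in the NIC firmware - the IOMMU does the heavy lifting - whereas carving a card into Virtual Functions does. On Linux you can tell the two cases apart by looking for the sriov_totalvfs attribute in sysfs. A small sketch (the function name and the example device address are mine, not from any tool):

```shell
# Sketch: check whether a PCI device exposes SR-IOV Virtual Functions.
# Takes the device's sysfs directory, e.g. /sys/bus/pci/devices/0000:01:00.0
# (the address is an example; the function name is made up for illustration).
check_sriov() {
    if [ -r "$1/sriov_totalvfs" ]; then
        echo "SR-IOV: up to $(cat "$1/sriov_totalvfs") VFs"
    else
        echo "no SR-IOV capability"
    fi
}

# Example against the real sysfs tree:
# check_sriov /sys/bus/pci/devices/0000:01:00.0
```

If the attribute is absent, the card can still be passed through whole; it just cannot be shared between domUs as VFs.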

As far as I can tell, driver quality, especially for Windows domUs,
seems to be a more significant issue.

Not to chase tails here, however - can we step back and figure out which of the chipset manufacturers (AMD vs. Intel) provides a stable platform that can be used in production? We are not necessarily interested in GPUs, but we are interested in passing through network cards (QLogic, Intel, etc.). I would imagine this would still be important to gamers, and to Justin.tv broadcasters as well...
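For reference, once the IOMMU side works, passing a NIC through to a Xen domU is just a matter of hiding it from dom0 and listing its bus/device/function address in the guest config. A sketch (the device address 01:00.0 and the domain name mydomu are placeholders):

```
# dom0: make the device assignable (binds it to pciback)
xl pci-assignable-add 01:00.0

# domU config file: assign it at boot
pci = [ '01:00.0' ]

# or hot-plug it into a running guest
xl pci-attach mydomu 01:00.0
```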

Secondly, are any of the manufacturers such as IBM and Dell openly pushing stable virtualized platforms (chipsets, PCI bus, CPUs, BIOS) that we can again run in production? And of course, there should be Xen compatibility on top of all this...

As far as I can tell, the issue is not all that clear cut. Hardware and firmware these days are almost as buggy as software, and two motherboards with near-identical specs from different manufacturers may well produce different success rates.

We can understand why the chipset, CPU, and even PCI hardware manufacturers would play this cat-and-mouse game with virtualization, since to them it equates to fewer sales...

I am not sure that that follows as clearly. Take Nvidia for example.
For bare metal use you can use a "cheap" GeForce card. If you want
to use it with VGA passthrough, you have to shell out 3-4x the
amount for a Quadro card that is by and large identical but for a
few resistors and half a byte of firmware. On the flipside, they
do seem to do a pretty decent job of making sure that those that
pay 4x the amount for their hardware do in fact end up with a
decent, stable solution.

Casey DeLorme
From my experience, if VT-d or IOMMU is not explicitly mentioned in the user manual (available for download off the net before you spend a dime on the board), then the board likely does not support it.
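Once you actually own the board, the manual stops being the authority - the running kernel is. On a Linux dom0 you can check whether the IOMMU actually came up by counting the entries under /sys/kernel/iommu_groups; an empty directory usually means VT-d/AMD-Vi is disabled in the BIOS or intel_iommu=on / amd_iommu=on is missing from the kernel command line. A sketch; the function name and its path parameter are mine:

```shell
# Sketch: count IOMMU groups under a sysfs-style directory.
# On a real box, call it as: check_iommu /sys/kernel/iommu_groups
# (the sysfs path is the standard Linux one; the function name is made up).
check_iommu() {
    n=$(find "$1" -mindepth 1 -maxdepth 1 -type d 2>/dev/null | wc -l | tr -d ' ')
    if [ "$n" -gt 0 ]; then
        echo "IOMMU active: $n groups"
    else
        echo "no IOMMU groups - check BIOS and kernel command line"
    fi
}
```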

Interesting... We do something similar when purchasing IBMs. We look
to see if there are BIOS firmware updates that involve virtualization
such as this:


I came in a little late in the game for this conversation; however, can we please iron out some issues here? At an abstract level (i.e., chipsets, CPUs, GPUs, network interfaces), without mentioning any motherboard manufacturers such as ASRock, Asus, Sapphire, etc., can we determine which combinations will work, on both the AMD and Intel platforms? The reason for this is that not too many people deploy white boxes in production; it's strictly SuperMicro, IBM, Dell, etc...

And now you know why you pay a huge premium for "certified" or
"approved" hardware - somebody has spent a lot of man-hours and money
testing the hardware/software combinations to ensure that what they end
up selling actually works. Unless your time is close to worthless or
your expected deployments are at least in the hundreds, it is infeasible
to invest the required amount in testing to come up with your own
white-box solution.

Don't get me wrong - I spend such an infeasible amount of time
and effort on various projects all the time, but I do this because
the involved technologies interest me, or because I want a solution
that nobody has come up with exactly in the way I want it to work,
not because it would ever be viable in the context of any commercial
systems integration. This is also why off the shelf solutions tend
to only focus on a small subset of the problem space - the simple
stuff is what most people want and any more than that starts to
get prohibitively expensive, especially when you get to the point
of effectively debugging your hardware manufacturer's firmware
without any access to its source code.

Once we determine this, it would be nice to discuss which of the manufacturers are able to support stable platforms without rendering 3 out of the 4 PCIe/x slots useless. That would hurt!

The only suggestion I can come up with is a wiki where those
interested in contributing can post success/failure stories
surrounding specific motherboards (I will be happy to contribute
any recipes I come up with if/when I manage to get my EVGA SR-2
to work with my hardware the way I want to use it).

We also need to keep it fairly modern. For example, PCIe slots would preferably be PCIe v2 x8/x16 or PCIe v3 x8/x16. I mentioned IBM's virtualization-enabled servers earlier; however, what I am really looking at are the guts. For example:

Intel Based (x3550 M4)
CPU: E5-2600
Chipset: Intel C604

All PCIe slots are PCIe 3.0
Slot 1: PCIe x16; low profile, half-length
Slot 2: PCIe x8, opt. PCI-X or PCIe x16; full-height/half-length (PCIe
x16 req. 2nd CPU)

I am actually not sure if this platform supports virtualization, since the little bit of googling I did yielded much.

AMD Based: x3755 M3
CPU: AMD Opteron 6200
Chipset: SR5690

All PCIe slots are PCIe 2.0

Slot 1: PCIe x16, full height, full length
Slot 2: PCIe x8, low profile, half length
Slot 3: PCIe x8 (x4 wired), low profile, half length
Slot 4: PCIe x8, low profile, half length (internal only, reserved
for RAID controller)

Oh - so you actually want manufacturers to come up with hardware
that they pre-certify for virtualization use? Nice idea if it happens.


Xen-users mailing list