
Re: [Xen-users] gaming on multiple OS of the same machine?



On Sat, May 12, 2012 at 12:54 AM, Andrew Bobulsky <rulerof@xxxxxxxxx> wrote:
Hello Peter,

I've done exactly this, and I can affirm that it kicks ass ;)

Wonderful, that's the best answer the existential question can have. :)


Make sure that you actually have the cores to give to those DomUs.
Specifically, if you plan on making each guest a dual core machine,
and have 4 guests, get an 8 core chip.

8 cores or 8 threads? I was planning to get one of those 4-core/8-thread CPUs with hyperthreading. Is that sufficient?
I read in the documentation that 2 threads are reserved for the Windows graphics anyway, so it would have to be 4 virtual cores either way.
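
For my own planning, I assume the per-guest allocation would then look something like this in the guest config file (the name and the pinning range are just my guesses):

    # hypothetical per-guest settings, e.g. /etc/xen/gamer1.cfg
    name   = "gamer1"
    memory = 4096
    vcpus  = 2
    # pin the guest to two host threads so the guests don't compete for cores
    cpus   = "2-3"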


> 2) Is it possible to split dual GPUs, e.g. drive 4 OSes with 2x Radeon HD 6990 (=4 GPUs in 2 PCI-e slots)?
Alas, no. Not because Xen or IOMMU won't allow it, but because of the
architecture of the 6990. While the individual GPUs /can/ be split up
from the standpoint of PCIe, all of the video outputs are hardwired to
the "primary" GPU.  So while it would work in theory, there's nowhere
to plug in the second monitor.

So, are there any other dual GPUs that do work here? They don't have to be high-end (low power is even preferred). But given that most graphics cards are two PCIe slots high, having 2 dual cards rather than 4 single cards makes a HUGE difference in motherboard options, case requirements, cooling solutions, and connectivity (PCIe wifi, PCIe USB controllers, ...), so anything that delivers 4 discrete GPUs in 2 PCIe slots would be far better than any other option.


I suggest picking up a Highpoint RocketU 1144A USB3
controller. It provides four USB controllers on one PCIe 4x card,
essentially giving you four different PCIe devices, one for each port,
that can be assigned to individual VMs.

If the 2x dual GPU option works, then that's certainly possible. Otherwise, I'll really need all of the PCIe slots for the GPUs (either used or covered by them). And that is a problem in itself, as I'd want to use wifi for networking, which needs a PCIe slot of its own.
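
Speaking of that RocketU card: if I understand it right, handing one of those per-port controllers to a guest would just be ordinary PCIe passthrough, i.e. something like this (the device address below is made up):

    # make the port's controller assignable in dom0
    xl pci-assignable-add 05:00.0

    # then list it in that guest's config file
    pci = [ '05:00.0' ]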


If you're still only in these
conceptual stages of your build, I may have some suggestions for you
if you like.

I am still in the conceptual stage, and I'm very much willing to listen.

Currently I'm mainly wondering how to get 4 GPUs cooled cheaply. Watercooling is overkill (I don't need high-end graphics like the HD 6990 anyway, I just want to play games at medium-low resolutions for the coming years), but with air cooling they will block each other's airflow and steal the slots for an extra wifi card or for the USB controller. So anything that gets 2x dual GPU working here would be great.


Now that I think of it, you'll have the least amount of hassle by
doing "secondary VGA passthrough," which is just assigning a video
card to a vm as you would any other PCIe device. I'll readily admit
that this is nowhere near as cool as primary passthrough, but it
involves the least amount of work.

Where can I find information on the difference between these? Google suggests that primary/secondary VGA passthrough is passing the primary/secondary GPU to the VM, but that doesn't seem to make sense here...
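
From what I can gather, the difference would mostly show up in the guest config, roughly like this (my interpretation, and the device address is made up):

    # secondary passthrough: the GPU is just another passed-through PCIe device;
    # the guest boots on the emulated VGA and the real card comes up afterwards
    pci = [ '06:00.0' ]

    # primary passthrough: the passed-through card is the guest's boot/primary display
    gfx_passthru = 1
    pci = [ '06:00.0' ]

Is that roughly the distinction, or am I off track?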


Best regards,
Peter
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users

 

