Re: [Xen-users] gaming on multiple OS of the same machine?
Hi Peter,

You are correct, I meant to type "without" the NF200 chip. I will now explain in detail: I checked every major manufacturer's high-end boards, looking for the best features for the price and aiming for as many PCIe slots as I could get. Pretty much every board with more than 3 PCIe (x16) slots came with some form of PCIe switch, and most of these switches break IOMMU in one way or another.
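Before blaming the board, it's worth confirming that the IOMMU is actually active at all. A quick sanity check from Dom0 might look like the following (the exact wording of the messages varies between Xen and kernel versions, so treat the grep patterns as a starting point, not gospel):

    # Xen's boot log reports whether I/O virtualisation is enabled
    xl dmesg | grep -i "virtualisation"    # older toolstacks: xm dmesg

    # The Dom0 kernel log is the other place to look (Intel VT-d / AMD-Vi)
    dmesg | grep -i -e DMAR -e IOMMU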
The ASRock Extreme7 Gen3 came with two such switches, the NF200 and the PLX PEX8606. The NF200 is completely incompatible with IOMMU; anything sitting behind it creates a layer of fail between your card and success.
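You can see what actually sits behind these switches by asking lspci for the bus topology. A sketch (the bus numbers and layout below are hypothetical, yours will differ):

    # -t prints the PCI hierarchy as a tree, -v adds device names
    lspci -tv

    # Hypothetical output: the second GPU hangs off the switch's bridge,
    # while the first one sits directly on the root complex
    # -[0000:00]-+-01.0-[01]----00.0  VGA: first GPU (fine to pass through)
    #            \-03.0-[02-05]----08.0-[03]----00.0  VGA: second GPU (behind the switch)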
The PLX was an entirely different kind of problem: it merges what should be separate devices into "functions" of a single device identifier. If you run "lspci" you get a list of your devices, identified as "bus:device.function".
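For example (hypothetical output, your IDs will differ), the part before the description is the bus:device.function triple, and two functions of the same device share the "bus:device" part:

    $ lspci
    00:1f.2 SATA controller: Intel Corporation ...    <- bus 00, device 1f, function 2
    01:00.0 VGA compatible controller: ...            <- bus 01, device 00, function 0
    01:00.1 Audio device: ... (the card's HDMI audio) <- same device, function 1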
The last two PCIe slots on the ASRock Extreme7 Gen3 shared functionality with onboard components; for example, the second-to-last slot was shared with my dual onboard LAN and the ASMedia SATA controller. When I used xen-pciback.hide to hide just the graphics card, it removed the other two components as well (treating them all as one "device"). As a result I lost internet access and four drive ports worth of storage.
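For reference, the hiding itself is done on the Dom0 kernel command line (or via the module parameter). A sketch with hypothetical BDFs; the point is that on that board, hiding the GPU's identifier dragged the LAN and SATA controller along with it:

    # Dom0 kernel command line, pciback built into the kernel:
    xen-pciback.hide=(02:00.0)(02:00.1)

    # Roughly equivalent if pciback is built as a module:
    # modprobe xen-pciback hide="(02:00.0)(02:00.1)"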
In conclusion, I've already tried to find a way to build a 3-4x gaming machine and failed due to hardware problems; I didn't even get a chance to run into software issues.
*************
In my opinion it would be cheaper, in both time and money, to buy two computers and reproduce one setup on the other than to try to get four machines running on a single physical system.
A 4-core i7 with hyperthreading is treated as 8 vCPUs, and while you "could" pass two to each Windows machine, you would end up with nothing left for the control OS (Dom0) or the hypervisor (Xen) itself. They would share CPUs of course, but in my opinion you're bound to run into resource conflicts at high load.
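To make the arithmetic concrete, here is a hedged sketch of how the allocation might look (the options are standard Xen/guest-config knobs, but the numbers and pinning layout are just an illustration):

    # Xen hypervisor command line: reserve two threads for Dom0 and pin them
    dom0_max_vcpus=2 dom0_vcpus_pin

    # In each Windows HVM config, give the guest its own pair of threads
    vcpus = 2
    cpus  = "4-5"    # hypothetical pinning; pick a different pair per guest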
*************
I didn't think the dual GPUs would work, for the same reason my PLX chip created trouble. While the card has two GPUs, it's probably treated as a single "device" with multiple "functions", and you can't share a single device between multiple machines.
So, you will need a motherboard with four distinct PCIe x16 slots that are not tied to a PCIe switch such as the PLX or NF200 chip. I can't say that such a board doesn't exist, but my understanding is that no board manufacturer is producing consumer hardware specifically for virtualization, and the NF200 and PLX chips are beneficial to anyone running a single-OS system with a multi-GPU configuration (SLI/Crossfire), which accounts for the majority of their target market.
*************
Armed with this knowledge, here is where you may run into problems:

- Finding a board with 4x PCIe x16 slots not tied to a PCIe switch
- Sparing enough USB ports for every machine's input devices
- Being limited to around 3GB of RAM per HVM unless you buy 8GB RAM chips (see the example guest config after this list)
- Needing a 6-core i7 to power all systems without potential resource conflicts
- Encountering bugs nobody else has when you reach that 3rd or 4th HVM

If I were in your shoes, I would do two systems. Others have already mentioned success with that, so you'll have an easier time getting it set up, and you won't buy all that hardware only to run into some limitation you hadn't planned on.
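If it helps to see the knobs in one place, here is a hedged sketch of what one per-player HVM config might look like (xm/xl style; every path, ID and number here is hypothetical):

    # /etc/xen/win-player1.cfg -- one of these per gaming guest
    builder   = 'hvm'
    memory    = 3072                       # the ~3GB ceiling mentioned above
    vcpus     = 2
    disk      = [ 'phy:/dev/vg0/win1,hda,w' ]
    vif       = [ 'bridge=xenbr0' ]
    pci       = [ '02:00.0', '02:00.1' ]   # the hidden GPU and its audio function
    usb       = 1
    usbdevice = 'host:046d:c52b'           # hand one USB receiver (vendor:product) to this guest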
*************
I started my system with stock air cooling and 2x 120mm fans in a cheap mid-tower case. The CPU never went over 60C, the GPU doesn't overheat either, and the ambient temperature is around 70F.
I did upgrade my stock CPU fan to a Corsair H70 Core self-contained liquid cooling system; it was inexpensive, my CPU stays around 40C on average, and it's even quieter than before. I have never run more than one GPU in my computers, so I don't know if some special magic happens when you have two or more that makes them suddenly run even hotter, but I imagine that's not the case unless you're doing some serious overclocking.
*************
The ASRock Extreme4 Gen3 does have enough PCIe slots that I could connect three GPUs and still have space for a single-slot PCIe device, but I only have a 650W power supply and have no need for more than one Windows instance.
*************
Secondary vs. primary: a secondary card becomes available after the system boots up; a primary card is used from the moment the system boots. Your primary card is where you see POST at boot time, and the Windows logo.
Secondary passthrough works great for gaming: the display comes up after the machine boots without any problems, and it takes literally no extra effort on your part to set up. Primary passthrough requires custom ATI patching, and what exists may not work for all cards.
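In config terms the difference is roughly one option, though the primary case also needs the patched pieces mentioned above (the BDFs are hypothetical and the option name may differ between Xen versions):

    # Secondary passthrough: the guest boots on the emulated VGA,
    # and the real card appears once its driver loads
    pci = [ '02:00.0', '02:00.1' ]

    # Primary (VGA) passthrough additionally makes the passed-through card
    # the boot display -- this is the part that needs the patches
    gfx_passthru = 1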
I began looking into primary passthrough very recently, because I use my machine for more than just games and ran into a problem. Software like CAD, Photoshop, and 3D sculpting tools use OpenGL and only work with the primary GPU, which means they either don't run or run without GPU acceleration (slowly).
*************
A lot to take in, but I hope my answers help a bit. If you have more questions I'll be happy to share what knowledge I can.

~Casey
On Sun, May 13, 2012 at 7:30 AM, Peter Vandendriessche <peter.vandendriessche@xxxxxxxxx> wrote:
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users