
Re: [Xen-users] gaming on multiple OS of the same machine?



On Sat, May 19, 2012 at 3:15 PM, Peter Vandendriessche
<peter.vandendriessche@xxxxxxxxx> wrote:
> Hi Andrew,
>
> On Sun, May 13, 2012 at 1:30 PM, Peter Vandendriessche
> <peter.vandendriessche@xxxxxxxxx> wrote:
>>
>> On Sat, May 12, 2012 at 12:54 AM, Andrew Bobulsky <rulerof@xxxxxxxxx>
>> wrote:
>>>
>>> If you're still only in these
>>> conceptual stages of your build, I may have some suggestions for you
>>> if you like.
>>
>>
>> I am still in the conceptual stage, and I'm very much willing to listen.
>
>
> Could you tell me your hardware suggestions then?
>
> Best regards,
> Peter

Hello again, Peter,

Okay!  Here goes!

While the desire for raw computing power would cause me to suggest
building this as an Intel i7 setup, I have no experience with IOMMU
(VT-d) on Intel boards, and couldn't suggest one.  Furthermore, I
believe that sheer core count for this type of application is probably
your single most important spec, and the cost of an 8-core i7 is
insanely high.  With that in mind, I'm going to suggest below an
AMD-based build, modified from my own setup, built around a board/CPU
combo that I have tested and can confirm works with PCIe passthrough
on Xen... though not with 4 video cards.  That said, the architecture
of the board and
chipset should not prohibit this, but I can't make a guarantee.  While
I *would* suggest the board that I currently use (the MSI
890FXA-GD70), it required a BIOS downgrade for me to get IOMMU to
work... which would probably break compatibility with the CPU I'm
going to suggest.  Perhaps that's been fixed in the year since I
discovered the problem, but I have no idea :P
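
(Whatever board you end up with, by the way, it's worth confirming
that Xen actually picked up the IOMMU before you sink money into the
rest of the build.  Just a rough sketch -- the exact message wording
and the xl-vs-xm toolstack details depend on your Xen version:

    # Hypervisor boot log should mention AMD-Vi / I/O virtualisation:
    xl dmesg | grep -i -e amd-vi -e "i/o virt"

    # Newer toolstacks report it directly; look for "hvm_directio":
    xl info | grep virt_caps

If neither shows anything, passthrough won't happen no matter what
the spec sheet promises, so go poke at the BIOS settings first.)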

Motherboard: Gigabyte GA-990FXA-UD7 (
http://www.newegg.com/Product/Product.aspx?Item=N82E16813128508 )

CPU: AMD FX-8150 (
http://www.newegg.com/Product/Product.aspx?Item=N82E16819103960 )

For a case, you'll want something with 8 slots on the PCI backplane.
The cheapest case I found that has this is the Antec One Hundred;
every other enthusiast-class case I've come across with that many
slots was $200+.  On a side note, it was a great case to work with!

Case: Antec One Hundred (
http://www.newegg.com/Product/Product.aspx?Item=N82E16811129183 )

Your power supply is a crucial component here.  I find that, working
in a high-power-consumption scenario that is outside of the "normal"
bounds of system design, having a single-rail PSU is the only way to
save yourself from some serious headache.  The PSU I suggest is now my
personal favorite because of its stats and affordability.  I've
personally bought four of them, and recommended it to many friends
and colleagues, all of whom have been satisfied.

PSU: Azza Titan 1000W PSAZ-1000A14 (
http://www.newegg.com/Product/Product.aspx?Item=N82E16817517008 )



...And, for the core of the system, that's mostly it.  Now, while it
/is/ possible on the motherboard that I mentioned to perform PCIe
passthrough of the onboard USB controllers in a fashion that gets USB
into your VMs, I haven't personally confirmed the number of
controllers (you'll need four, obviously!), or verified that they
work correctly when passed to different VMs.  They should, again
assuming that there are enough of them, but some testing needs to be
done.  I'm not in a position to do that at the moment, but as your
build progresses, I may be able to help.  That said, to solve the
problem your best bet would be at least one single-slot GPU
(watercooling is the only way to get that with current-gen cards
AFAIK), which will make way for an add-in USB controller.  I
mentioned this controller
previously in the thread, and it's probably one of a kind in terms of
architecture: every port is a separate USB controller on the PCIe bus,
allowing you to assign one USB port, via PCI Passthrough, to each of
your VMs.  I bought some extension cables and four-port hubs from
Monoprice, plugged the extension into the card, the hub into the
extension, and then Keyboard/Mouse/USB Audio into each hub, leaving a
single port free for the user to plug in a thumb drive or whatever.
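
(On the Xen side, the per-port assignment ends up looking roughly
like this.  A sketch only, with made-up PCI addresses -- lspci on
your own box will give the real ones, and older toolstacks use the
pciback "hide" boot parameter instead of xl pci-assignable-add:

    # dom0: each port on the RU1144A shows up as its own PCI function
    lspci | grep -i usb    # e.g. 05:00.0, 05:00.1, 05:00.2, 05:00.3

    # make one of them assignable, then hand it to a guest
    modprobe xen-pciback
    xl pci-assignable-add 05:00.0

    # in that guest's config file (e.g. /etc/xen/gamer1.cfg):
    pci = [ '05:00.0' ]

One stanza like that per VM and every seat gets its own physical
port.)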

USB Controller: HighPoint RU1144A (
http://www.newegg.com/Product/Product.aspx?Item=N82E16816115104 )

USB Extension: 15ft USB 2.0 (
http://www.monoprice.com/products/product.asp?c_id=103&cp_id=10303&cs_id=1030304&p_id=5435&seq=1&format=2
)

USB Hub: 4-port USB 2.0 (
http://www.monoprice.com/products/product.asp?c_id=103&cp_id=10307&cs_id=1030702&p_id=6631&seq=1&format=2
)

USB Audio: Generic Piece of Crap, buy extras (
http://www.amazon.com/Channel-External-Sound-Audio-Adapter/dp/B0027EMHM6/ref=sr_1_2?rps=1&ie=UTF8&qid=1337522161&sr=8-2
)

Desk Mic: Cheap-o, it works but it kinda sucks (
http://www.amazon.com/gp/product/B00284VD02/ref=oh_details_o05_s00_i00
)

Optionally, if you *don't* want to watercool a video card, or can't
find a single-slot card to suit your needs, you can always do what I
did and hang the USB controller off of a PCIe riser cable.  It's ugly,
but it gets the job done.  Installing a video card essentially "on
top" of the riser takes a bit of force (you can also shave down the
PCIe connector with a Dremel or a grinder to reduce its vertical
profile, but that's a bit beyond the scope of this email :P), so I'd
suggest buying two, just in case one doesn't work when you put it all
together.  The stress points that such an installation aggravates are
also the least robust parts of the riser cable's connectors, so try
not to install it more than once if you can help it.

Riser Cable: PCIe 4x Riser (
http://www.amazon.com/HOTER-PCI-E-Express-Riser-Flexible/dp/B0057M0LT6/ref=sr_1_2?s=electronics&ie=UTF8&qid=1337522446&sr=1-2
)


....Whew!  That's it for my suggestions, at least as far as my
experience in this matter is concerned, anyway!  Other considerations
that you'll have are really up to personal preference and budget.
Think video cards, keyboards, mice, speakers or headsets, and so on.
I used Radeon 5850s, and later swapped one out for a Radeon 6870 I
had on hand (just because I could, really), which also works well.
For storage, you'll likely want a single hard disk dedicated to each
VM, using LVM or full-disk passthrough, perhaps.  I bought a bunch of
74 GB WD Raptors on clearance, and each VM gets one of those.  The
hypervisor boots from a dedicated disk as well.  Alternatively, you
could just snag a 300-400 GB SSD and put all the VMs on that single
disk.  You'd get more than enough IOPS out of it, without the
seek-latency issues that multiple VMs sharing a single traditional
HDD would encounter, but I'm sure you're aware of that :)
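
(If you go the LVM route, the per-VM plumbing is about this simple.
Again just a sketch, with the volume group and guest names made up:

    # dom0: one logical volume per guest, carved from a volume group
    lvcreate -L 80G -n gamer1-disk vg_guests

    # in that guest's config, hand it over as the VM's disk:
    disk = [ 'phy:/dev/vg_guests/gamer1-disk,hda,w' ]

    # full-disk passthrough is the same line pointed at a raw device:
    # disk = [ 'phy:/dev/sdb,hda,w' ]

Either way the guest just sees a plain block device.)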

I'm not too sure if I missed anything... it's taken a while to write
this, but I'm hoping it's pretty comprehensive!  While the advice is
intended for Peter, of course, I do invite anyone to ask questions
or add comments as they like.

And finally, attached is a photo that I took of my wiring job (minus
the PCIe riser/USB and hard disks!).  It's nowhere near as nice now
because of the follow-up work I put into the hardware to actually make
the damned thing work, but the insight I've gleaned since then is in
the text above, despite its absence in the photo below! ;-)

Best Regards,
Andrew Bobulsky

Attachment: photo.JPG
Description: JPEG image

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users

 

