
Re: [Xen-users] Cheap IOMMU hardware and ECC support importance


  • To: xen-users@xxxxxxxxxxxxx
  • From: Gordan Bobic <gordan@xxxxxxxxxx>
  • Date: Wed, 25 Jun 2014 10:43:45 +0100
  • Delivery-date: Wed, 25 Jun 2014 09:44:02 +0000
  • List-id: Xen user discussion <xen-users.lists.xen.org>

On 2014-06-25 10:08, Mihail Ivanov wrote:

> Hi,
>
> So I've read more about ECC and decided I need it.
> Will probably use 8-16GB DIMMs at 1600MHz.
> About the number of VGAs - I want 7-14 monitors (at least 2
> megapixels at 60Hz on each one, e.g. 1920x1080), so I need cheap
> video outputs.

If you are just after the pixel count, you might want to
consider 4K monitors - a single 3840x2160 panel carries
the same pixel count as four 1920x1080 screens. Most of
the motherboards you listed in the previous email,
especially the server grade ones that support ECC, only
have 2-3 PCIe x16 slots, and just about all GPUs nowadays
come in PCIe x16 form factor. You could put them in x8
slots, but you would have to cut open the end of the slot
on the motherboard and make sure there are no components
(e.g. caps, heatsinks, etc.) in the way. But then there
is still the limit of at most 4 dual-slot GPUs imposed
by any case you are likely to be able to get.

You could try to find some single-slot GPUs, but they are
few and far between nowadays and not port-rich. Something
like an older Radeon Eyefinity model with 6 mini-DP
outputs might get you some of the way there, but IIRC
there is a total of one such model, it is a few
generations out of date, and it is a dual-slot card.

You could start messing about with ribbon PCIe risers,
but that complicates things further, and you are still
going to have to heavily modify any off-the-shelf case
to make it work.

I don't think you'll manage to reconcile your various
requirements.

I use an EVGA SR-2, one of only two boards with 7x PCIe
x16 slots (the other being the EVGA SR-X); both are HPTX
form factor and require a huge case. I have a Lian Li
PC-P80 Armorsuit case which is quite well ventilated,
and with 2 780Ti GPUs and the CPUs going flat out, air
cooling is at best marginal even without overclocking.
The CPUs get to 80C+, as do the GPUs under load.

And I wouldn't recommend the EVGA SR-2 for this. The SR-X
might fare mildly better because the PCIe bridges it uses
are PLX rather than NF200, but the slot width arrangement
is a bit weird, the memory channel configuration is
lopsided, and I suggest you read the last few pages of
the EVGA SR-2/SR-X forum if you still aren't sufficiently
put off. Oh, and ATI R9 cards also don't work with NF200
bridges.

> Only one or two of the VGAs need to be powerful and passed through;
> the rest should stay in Dom0.

I still think you are going to struggle to achieve 14
monitors over 4 GPUs no matter how you look at it. If
you want to stretch the desktop across multiple screens,
you might find that something like a Matrox
TripleHead2Go is a more sensible solution.
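
For the screens driven by the GPUs that stay in Dom0,
stretching the desktop is then just an xrandr layout
exercise. A minimal sketch, assuming two hypothetical
output names (query the real ones with xrandr -q):

  # place two 1080p screens side by side on one desktop
  # DP-1/DP-2 are placeholder names, not from this thread
  xrandr --output DP-1 --mode 1920x1080 --pos 0x0 \
         --output DP-2 --mode 1920x1080 --pos 1920x0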

> So the next thing I've read about is RAID, and I am thinking of
> RAIDing 2 x WD Black 2TB. (Should I do software RAID or hardware
> RAID?)

ZFS. Traditional RAID is, IMO, unfit for purpose with
today's disk sizes and unreliability.
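
With your two 2TB drives that would be a simple two-way
mirror. A minimal sketch, assuming placeholder device
paths (list your actual ones under /dev/disk/by-id/):

  # create a mirrored pool named "tank" from the two drives;
  # the device IDs below are placeholders, substitute your own
  zpool create tank mirror \
      /dev/disk/by-id/ata-WDC_WD2002FAEX_SERIAL1 \
      /dev/disk/by-id/ata-WDC_WD2002FAEX_SERIAL2
  zpool status tank   # both sides of the mirror should be ONLINE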

I'd also suggest you get HGST drives. Failing that, get
WD Reds or another model with TLER (and make sure you
enable it).
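
TLER/ERC can be queried and set with smartctl; a sketch,
assuming the drive is /dev/sda. Note that the setting
does not survive a power cycle on most drives, so
reapply it at boot:

  # query the current error recovery control (ERC) timers
  smartctl -l scterc /dev/sda
  # set read and write ERC to 7 seconds (70 deciseconds) so a
  # failing drive gives up quickly instead of stalling the pool
  smartctl -l scterc,70,70 /dev/sda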

> And about my SSD - doing regular backups on two small HDDs should
> be fine?

You'll have to elaborate on that a bit more, but
backups shouldn't really be in the same machine.
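
Since you are going with ZFS anyway, snapshots plus zfs
send to another box are the obvious mechanism. A minimal
sketch, assuming a hypothetical dataset tank/data and a
remote host backuphost with a pool named backup:

  # snapshot the dataset, then stream it to another machine
  # over ssh; all names here are assumptions, not from the thread
  zfs snapshot tank/data@backup-20140625
  zfs send tank/data@backup-20140625 | \
      ssh backuphost zfs receive -F backup/data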

And this is starting to get off topic for Xen.

> Also I will be using ZFS

Then why do you even mention RAID and hardware vs. software?

> and my Dom0 will be Fedora.

You like having to reinstall your OS from scratch
every 6 months?

> being unsure whether the last one supports ECC (I know there are
> ASUS mobos that do, but then again - BROKEN IVRS TABLES, so no
> IOMMU). I've read mixed opinions about Gigabyte - some people cite
> official emails saying that one or another 990FX mobo supports ECC,
> while their website says no such thing. So in the end some people
> got ECC RAM but were unsure whether it was running with ECC or
> whether it was disabled.

"Official" emails aren't worth the electrons they are stored in.
Back when I was looking at getting an SR-2 I was assured that it
supports VT-d, even though the NF200 bridges break VT-d support
quite badly.
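
The only claim worth anything is what the hardware itself
reports once you have it. A quick sanity check from Dom0,
assuming standard Linux and Xen tooling:

  # ECC: if the EDAC driver loads and finds memory controllers,
  # ECC is actually active, whatever the spec sheet says
  dmesg | grep -i edac
  # IOMMU: ask the hypervisor whether I/O virtualisation is on
  xl dmesg | grep -i 'i/o virtualisation'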

> And about needing more than 6 cores - I'd like to, but my budget
> is tight, so spending $600 on the CPU is bad enough as it is,
> since I have to buy an expensive mobo ($300-350) and RAM too.

Xeon X5650s (6 cores) are going for nearer $300 on eBay.

And I still think you should start with a motherboard
(or a whole workstation) from the list of those Citrix
has certified for GPU passthrough. Otherwise there is a
non-trivial chance you will find yourself with a lot of
expensive hardware that is not fit for the purpose you
bought it for. The nature of the problems with the SR-2
was such that I was able to work around them and get
things working with some custom patches and relatively
livable side effects, but even if you were prepared to
sink that amount of time into the project, you may run
into problems with other hardware that simply aren't
solvable in software.
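
When the hardware does cooperate, the passthrough step
itself is straightforward. A minimal sketch with the xl
toolstack, assuming a hypothetical GPU at PCI address
01:00.0 (find yours with lspci):

  # in Dom0: detach the device from its driver and mark it
  # assignable to guests
  xl pci-assignable-add 01:00.0

  # in the domU config file: hand the device to the guest
  pci = [ '01:00.0' ]

Passing through the primary/boot display has extra
caveats (the gfx_passthru config option), but this is
the basic shape.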

Gordan

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


 

