
Re: [Xen-users] Cheap IOMMU hardware and ECC support importance



Gordan Bobic <gordan@xxxxxxxxxx> writes:

> On 2014-06-26 18:36, lee wrote:
>> Gordan Bobic <gordan@xxxxxxxxxx> writes:
>>
>>> On 2014-06-26 17:12, lee wrote:
>>>> Mihail Ivanov <mihail.ivanov93@xxxxxxxxx> writes:
>>>>
>>>>> So next thing I've read about RAID, so I am thinking of raiding
>>>>> 2 x WD Black 2 TB. (Should I do software raid or hardware raid?)
>>>>
>>>> Software raid can mean quite a slowdown compared to hardware raid.
>>>
>>> The only situation where hardware RAID helps is if you have
>>> a _large_ battery backed write cache, and then it only helps
>>> on small bursty writes. A recent x86 CPU can do the RAID
>>> checksumming orders of magnitude faster than most RAID card
>>> ASICs, and hardware RAID cache is completely useless since
>>> anything that is likely to be caught in it will also be in
>>> the OS page cache.
>>
>> The CPU may be able to handle the raid faster, and there may be lots of
>> RAM available for caching.  Both using CPU and RAM draws on resources
>> that may be occupied otherwise.
>
> A typical caching hardware RAID controller has maybe 3% of RAM of
> a typical server. And I'm pretty sure that for the price of one
> you could easily get more than an extra 3% of CPU and RAM.

That depends on what you have and need.  I needed at least 9 SATA
ports.  Choices:


+ buy a new board plus CPU plus RAM
  - costs at least 10 times what I paid for the controller and gives
    me a maximum of only 8 ports

+ max out the RAM
  - means buying 16GB of RAM and throwing 8GB away; costs more than
    what I paid for the controller

+ buy some relatively cheap SATA controller
  - might not work at all, or not work well, and would give me only
    1--2 additional ports, i.e. a total of only 8.  It would have cost
    less than what I paid for the RAID controller, but is it worth the
    trouble?  Blocking a PCIe slot for only 1--2 more ports seemed to
    me like a waste of money rather than worthwhile.


The hardware RAID controller gives me 10 fps more in my favourite game
compared to software RAID.  Since the frame rates can be rather low
(because I'm CPU-limited), that is a significant difference.
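
If anyone wants to put a rough number on just the parity math in that
overhead, a user-space sketch like this gives a ballpark (Python with
numpy; the buffer size is arbitrary).  The kernel's md driver uses
optimized SIMD routines and prints its own benchmark in dmesg at boot,
so the real numbers should be higher still:

    import time
    import numpy as np

    CHUNK = 64 * 1024 * 1024   # 64 MiB per stripe half; size picked arbitrarily
    x = np.frombuffer(np.random.bytes(CHUNK), dtype=np.uint64)
    y = np.frombuffer(np.random.bytes(CHUNK), dtype=np.uint64)

    start = time.perf_counter()
    parity = np.bitwise_xor(x, y)   # RAID5 parity is essentially this XOR
    elapsed = time.perf_counter() - start
    print("XOR parity: %.0f MB/s" % (CHUNK / elapsed / 1e6))

That only covers the parity math, of course, not the extra memory
traffic and interrupts, which is where the fps difference on a
CPU-limited box presumably comes from.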

>>> The time where hardware RAID was worthwhile has passed.
>>
>> I'm not sure what you consider "recent".  I have an AMD Phenom 965, and
>> I do notice the slowdowns due to software raid compared to hardware
>> raid, on the very same machine.
>
> I can believe that if you have a battery backed cache module

It has one.

> and your workload includes a lot of synchronous writes. But
> for that workload you would probably be better off getting an
> SSD and using ZFS with ZIL in terms of total cost, performance
> and reliability.

SSDs still lose badly when you compare price with capacity.  For what I
paid for the RAID controller, I could now buy two 120GB SSDs (I
couldn't back then).  That means two more disks, requiring two more
SATA ports (11 in total), and an increased overall chance of disk
failure, because the more disks you have, the more can fail.

I don't know about ZFS, though; I've never used it.  How much CPU
overhead is involved with it?  I don't need more CPU overhead of the
kind that comes with software RAID.
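
As I understand it, the synchronous writes you mean are the
write-then-fsync pattern that databases and the like use.  A little
probe along these lines (the path and sizes are made up; point it at
the array in question) should show whether a given box actually
suffers from them, with or without a battery-backed cache or an SSD
log in front:

    import os, time

    path = "/srv/fsync-test"   # hypothetical path; use a file on the array being tested
    data = b"x" * 4096         # one 4 KiB block per "transaction"
    N = 1000

    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    start = time.time()
    for _ in range(N):
        os.write(fd, data)
        os.fsync(fd)           # force the block to stable storage every time
    os.close(fd)
    print("%.2f ms per synchronous write" % ((time.time() - start) / N * 1000.0))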

>> expensive ones.  Perhaps the lack of ports is not so much of a problem
>> with the available disk capacities nowadays; however, it is what
>> made me
>> get a hardware raid controller.
>
> Hardware RAID is, IMO, far too much of a liability with
> modern disks. Latent sector errors happen a lot more
> often than most people realize, and there are error
> situations that hardware RAID cannot meaningfully handle.

So far, it has been working very well here.  Do you think that software
RAID can handle errors better?  And where do you find a mainboard that
has something like 12 SAS/SATA ports?
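
From what I've read, the argument is less about the RAID layer itself
and more about end-to-end checksums: stacks like ZFS keep a checksum
with every block and verify it on read, so silent corruption is caught
(and repaired from the other copy) instead of being handed to the
application, whereas a hardware controller only knows about parity,
not what the data should have been.  Roughly this idea, as a toy
illustration (nothing here is real ZFS code):

    import hashlib

    BLOCK = 4096
    disk = {}      # block number -> (checksum, data); stands in for one disk

    def write_block(n, data):
        disk[n] = (hashlib.sha256(data).hexdigest(), data)

    def read_block(n):
        checksum, data = disk[n]
        if hashlib.sha256(data).hexdigest() != checksum:
            # a checksumming stack would now fetch the mirror/parity copy
            raise IOError("checksum mismatch on block %d" % n)
        return data

    write_block(0, b"A" * BLOCK)
    disk[0] = (disk[0][0], b"B" * BLOCK)   # simulate a latent/silent error
    try:
        read_block(0)
    except IOError as e:
        print(e)                           # detected rather than silently returned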

>> I can say that the quality of Debian has been declining quite a lot
>> over
>> the years and can't say that about Fedora.  I haven't used Fedora that
>> long, and it's working quite well.
>
> Depends on what your standards and requirements are, I suppose.
> I have long bailed on Fedora other than for experimental testing
> purposes to get an idea of what to expect in the next EL. And
> enough bugs filter down to EL despite the lengthy stabilization
> stage that it's becoming quite depressing.

It seems that things are getting more and more complicated, even though
they don't need to, and that people are getting more and more clueless.
More bugs might be a side effect of that, and things aren't done as
thoroughly as they used to be.

> I find that on my motherboard most RAID controllers don't work
> at all with IOMMU enabled. Something about the way the transparent
> bridging native PCIX RAID ASICs to PCIe makes things not work.

Perhaps that's a problem with your board, not with the controllers.

> Cheap SAS cards, OTOH, work just fine, and at a fraction of
> the cost.

And they provide only a fraction of the ports and features.

> As I said, I had far more problems with SAS RAID cards than SATA
> controllers, and I use PMPs on top of those SAS controllers. I
> might look at alternatives if I was running on pure solid state
> but for spinning rust SATA+PMP+FIS+NCQ yields results that a
> hardware RAID controller wouldn't likely improve on.

I plugged the controller in, connected the disks, created the volumes,
copied the data over, and it has been working without any problems ever
since, eliminating the CPU overhead of software RAID.  After some time,
one of the disks failed, so I replaced it with no trouble.

The server is the same, except that it crashes (unless that is finally
fixed).  The crashes may be due to a kernel or Xen bug, or to the
software for the RAID controller being too old.

Anyway, I have come to like hardware RAID better than software RAID.
You could just as well argue that graphics cards are evil.

>>> Alternatives aren't better, IMO. Having tried Xen, VMware and KVM,
>>> Xen was the only one I managed to (eventually) get working in the
>>> way I originally envisaged.
>>
>> Hm, I find that surprising.  I haven't tried VMware and thought that as
>> a commercial product, it would make it easy to set up some VMs and to
>> run them reliably.
>
> It's fine as long as you don't have quirky hardware.
> Unfortunately, most hardware is buggy to some degree,
> in which case things like PCI passthrough are likely
> to not work at all.
>
> With Xen there is always the source that can be modified
> to work around at least the more workaroundable problems.
> And unlike on the KVM development lists, Xen developers
> actually respond to questions about working around such
> hardware bugs.

So with VMware, you'd have to get certified hardware.

>> KVM/QEMU I tried years ago, and it seemed much more
>> straightforward than xen does now, which appears to be very chaotic.
>
> Now try using it without virt-manager.

I used KVM/QEMU without it, and I'm using Xen without it.

>> After all, I'm not convinced that virtualization as it's done with xen
>> and the like is the right way to go.
> [...]
>
> I am not a fan of virtualization for most workloads, but sometimes
> it is convenient, not least in order to work around deficiencies of
> other OS-es you might want to run. For example, I don't want to
> maintain 3 separate systems - partitioning up one big system is
> much more convenient. And I can run Windows gaming VMs while
> still having the advantages of easy full system rollbacks by
> having my domU disks backed by ZFS volumes. It's not for HPC
> workloads, but for some things it is the least unsuitable solution.

Not even for most?  It seems as if everyone is using it quite a lot,
whether it makes sense or not.
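
The rollback part does sound convenient, though.  From what I gather,
with a domU disk on a ZFS volume it comes down to a snapshot before the
session and a rollback afterwards, roughly like this (the volume and
snapshot names are made up, and I still haven't run ZFS myself):

    import subprocess

    vol = "tank/win7-disk"   # hypothetical zvol backing the Windows domU

    # take a snapshot while the domU is shut down
    subprocess.check_call(["zfs", "snapshot", vol + "@before-gaming"])

    # ... boot the domU, let Windows mess itself up, shut it down ...

    # discard everything written since the snapshot
    subprocess.check_call(["zfs", "rollback", vol + "@before-gaming"])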


-- 
Knowledge is volatile and fluid.  Software is power.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users