New build: benchmarks and reviews...

Status: Not open for further replies.

jyavenard (Patron, joined Oct 16, 2013, 361 messages)
Introduction


I found myself in need of a NAS solution: something to store all my files and provide a good level of safety and redundancy.

ZFS was always going to be my choice of filesystem; it works well, provides lots of useful features (especially snapshots) and is very reliable.

I looked at the existing professional solutions, but none of them provided the flexibility I was after.
FreeNAS' backer iXsystems has an interesting FreeNAS Mini, but it only allows a maximum of four disks, and their professional solutions were just outside my budget.


It's been a while since I last did ZFS benchmarks (see the zfs-raidz-benchmarks article I wrote in 2009).
So I'm at it again.


Hardware Setup

- Supermicro SC826TQ-500LPB 12-bay chassis
- Supermicro X10SL7-F motherboard
- Intel E3-1220v3 processor
- 32GB RAM (4 x 8GB Kingston DDR3 1600MHz ECC, KVR16E11/8EF)
- 6 x WD Red 4TB



[Photos in the original post: "from the top"; "two chassis: 24 bays total"]
Description

The chassis comes with 500W platinum-rated redundant power supplies, rated at 94% efficiency at 20% load and 88% at 10% load. Even with 12 disks it won't ever go over 25% load, so this power supply is overkill, but it's the smallest Supermicro offers.

The X10SL7-F has six onboard SATA connectors plus an LSI 2308 SAS2 controller with eight SAS/SATA ports.

ZFS shouldn't run on top of a hardware RAID controller; that defeats the purpose of ZFS. The LSI was therefore flashed with IT firmware, turning the 2308 into a plain HBA.
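For anyone wanting to do the same, the crossflash is roughly the following from a DOS/UEFI shell with LSI's sas2flash tool (the firmware and boot ROM file names below are placeholders; use whatever images Supermicro provides for this board):
Code:
sas2flash -listall                          # confirm the controller is visible and note its current firmware type
sas2flash -o -f 2308it.bin -b mptsas2.rom   # flash the IT firmware and (optionally) the boot ROM; file names are examples
sas2flash -listall                          # verify it now reports IT firmware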

The plan was to use RAIDZ2 (the ZFS equivalent of RAID6), which provides redundancy against two simultaneous disk failures. RAIDZ2 with six 4TB disks gives me 16TB (roughly 14TiB) of usable space.
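As a rough sketch (the pool and device names below are made up, not the ones actually used here), creating such a pool and checking the usable space is a one-liner:
Code:
# six 4TB drives in a single RAIDZ2 vdev: (6 - 2) x 4TB = 16TB of raw usable space
zpool create tank raidz2 da0 da1 da2 da3 da4 da5
zfs list tank     # reports the usable space (a bit under 14TiB once overhead is accounted for)
# note: on ZOL you can force 4K sectors with 'zpool create -o ashift=12 ...';
# FreeBSD 9.x traditionally used the gnop workaround to achieve the same thing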

The system can later be extended with six more disks... As this is going to be used as a MythTV storage center, I estimate that it will reach capacity in just one year (though MythTV handles auto-deleting low-priority recordings perfectly well).

The choice came down to the new Seagate NAS drives or the WD Red. My primary concerns were power consumption and noise: the WD, being 5400rpm drives, win power-wise, while the Seagates are a tiny bit quieter. AnandTech's review also found that IOPS on the WD Red were slightly better; that and the lower power consumption made me go for the WD, the noise difference being minimal.

While I like to fiddle with computers, I'm not as young as I used to be, and I wanted something easy to use and configure: so my plan was to use FreeNAS.

FreeNAS is a FreeBSD-based distribution that makes everything configurable through a web UI... It's still not for the absolute beginner, and it requires a good understanding of the underlying filesystem: ZFS.

FreeNAS runs off a USB flash drive in read-only mode, and lets you install all of the FreeBSD ports in a jail residing on the ZFS pool.

Hiccups

My plans became slightly compromised once I put everything together and realised how noisy the setup was. The Supermicro chassis is enterprise-grade, so its only concern is keeping everything cool. But damn, that thing is noisy: there's no way I could ever have it in a bedroom or office.

There's nothing in the motherboard BIOS that lets you change the fan speed. The IPMI web interface lets you choose a fan speed mode, but the choice ends up being between "Normal", which wouldn't let anyone sleep, and "Heavy", which would for sure wake up the neighbours.

The fans on this motherboard are controlled by a Nuvoton NCT6776D. On Linux the w83627ehf kernel module lets you control most of the PWM outputs, including the fan speeds; unfortunately I found no equivalent on FreeBSD. So if I'm to run FreeBSD I would have to use an external voltage regulator to slow the fans down: something that doesn't appeal to me.
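For the record, this is roughly what manual fan control looks like on Linux through the hwmon sysfs interface (the hwmon index and PWM channel differ from board to board, so treat the paths below as examples):
Code:
modprobe w83627ehf                                # exposes the Nuvoton Super I/O sensors and PWM outputs
echo 1   > /sys/class/hwmon/hwmon2/pwm1_enable    # 1 = manual PWM control (path is an example)
echo 120 > /sys/class/hwmon/hwmon2/pwm1           # duty cycle 0-255; lower = slower and quieter
# or run pwmconfig once and let fancontrol adjust the duty cycle based on temperature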

This motherboard and chassis also provide an SGPIO interface to control the SAS/SATA backplane and indicate the status of the drives. This is great for identifying which disk is faulty, as you can't always rely on the device name provided by the OS.

However, I connected all my drives to the LSI 2308 controller, and despite my attempts I couldn't get the front LEDs to show the status of the disks under FreeBSD. It's something I could easily do under Linux using the sas2ircu utility...
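For reference, this is roughly how it goes with sas2ircu under Linux (the controller index, enclosure and slot numbers below are placeholders):
Code:
sas2ircu LIST                      # find the controller index
sas2ircu 0 DISPLAY                 # map enclosure:slot numbers to the attached drives
sas2ircu 0 LOCATE 2:5 ON           # blink the locate LED on enclosure 2, slot 5
sas2ircu 0 LOCATE 2:5 OFF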

I like FreeBSD and have always used it for servers, but its lack of support for hardware gimmicks like this started to annoy me. As part of my involvement in the MythTV project (www.mythtv.org), I have switched to Linux for all my home servers, and I've grown familiar with it over the years.

A few months ago I would never have considered anything but FreeBSD, since I wanted to use ZFS; however, the ZFS On Linux (ZOL) project recently declared its drivers "ready for production"... So could it be that Linux is the solution?

So FreeBSD or Linux?

I ran various benchmarks, here are the results...

Benchmarks

lz4 compression was enabled on all ZFS datasets.
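(For reference, enabling it is a one-liner per pool; the pool name below is a placeholder:)
Code:
zfs set compression=lz4 tank              # lz4 is cheap on CPU and a safe default
zfs get compression,compressratio tank    # verify the setting and see the achieved ratio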
Intel NAS Performance Toolkit (via Windows and samba: gigabit link)



Test Name               FreeNAS (MB/s)   Ubuntu RAIDZ2 (MB/s)   Difference (ZOL vs FreeNAS)   Ubuntu md+ext4 (MB/s)
HDVideo_1Play           93.402           102.626                9.88%                         101.585
HDVideo_2Play           74.331           95.031                 27.85%                        101.153
HDVideo_4Play           66.395           95.931                 44.49%                        99.255
HDVideo_1Record         104.369          87.922                 -15.76%                       208.868
HDVideo_1Play_1Record   63.991           97.156                 51.83%                        96.807
ContentCreation         10.361           10.433                 0.69%                         10.734
OfficeProductivity      51.627           56.867                 10.15%                        11.405
FileCopyToNAS           56.42            50.427                 -10.62%                       55.226
FileCopyFromNAS         66.868           85.651                 28.09%                        8.367
DirectoryCopyToNAS      5.692            16.812                 195.36%                       13.356
DirectoryCopyFromNAS    19.127           27.29                  42.68%                        0.638
PhotoAlbum              10.403           10.88                  4.59%                         12.413

Bonnie++



FreeNAS 9.1

bonnie++ 1.97, file size 64G, 16 files (host "ports")
("+++++" means the operation completed too quickly for bonnie++ to report an accurate figure)

                              Result            %CPU    Latency
Sequential Output, Per Chr    197 K/sec         99      55233us
Sequential Output, Block      741314 K/sec      74      117ms
Sequential Output, Rewrite    522658 K/sec      63      4771ms
Sequential Input, Per Chr     469 K/sec         95      129ms
Sequential Input, Block       1531762 K/sec     68      760ms
Random Seeks                  612.6 /sec        3       817ms
Sequential file Create        +++++ /sec        +++     16741us
Sequential file Read          +++++ /sec        +++     78us
Sequential file Delete        +++++ /sec        +++     126us
Random file Create            11537 /sec        36      145ms
Random file Read              +++++ /sec        +++     23us
Random file Delete            22031 /sec        71      92942us

Ubuntu - ZOL 0.6.2

bonnie++ 1.97, file size 63G, 16 files (host "ubuntu")

                              Result            %CPU    Latency
Sequential Output, Per Chr    199 K/sec         99      50051us
Sequential Output, Block      1102305 K/sec     91      60474us
Sequential Output, Rewrite    686584 K/sec      77      326ms
Sequential Input, Per Chr     498 K/sec         98      79062us
Sequential Input, Block       1539862 K/sec     66      93147us
Random Seeks                  445.4 /sec        10      133ms
Sequential file Create        +++++ /sec        +++     20511us
Sequential file Read          +++++ /sec        +++     236us
Sequential file Delete        +++++ /sec        +++     252us
Random file Create            30173 /sec        96      41664us
Random file Read              +++++ /sec        +++     10us
Random file Delete            +++++ /sec        +++     356us

Ubuntu - md + ext4

bonnie++ 1.97, file size 63G, 16 files (host "ubuntu")

                              Result            %CPU    Latency
Sequential Output, Per Chr    1086 K/sec        95      18937us
Sequential Output, Block      137492 K/sec      11      146ms
Sequential Output, Rewrite    133601 K/sec      7       634ms
Sequential Input, Per Chr     5758 K/sec        85      23816us
Sequential Input, Block       575418 K/sec      15      86087us
Random Seeks                  571.8 /sec        7       141ms
Sequential file Create        29852 /sec        0       47us
Sequential file Read          +++++ /sec        +++     223us
Sequential file Delete        +++++ /sec        +++     227us
Random file Create            +++++ /sec        +++     39us
Random file Read              +++++ /sec        +++     17us
Random file Delete            +++++ /sec        +++     35us








iozone

This NAS will mostly deal with very big files, so let's specifically test those (ext4 performs especially poorly here).
Started with: iozone -o -c -t 8 -r 128k -s 4G


Title                                              FreeNAS (KB/s)    Ubuntu ZOL 0.6.2 (KB/s)    Ubuntu md+ext4 (KB/s)
Children see throughput for 8 initial writers 38100.27 38065.71 6141.23​
Parent sees throughput for 8 initial writers 38096.5 37892.05 6140.56​
Min throughput per process 4762.32 4749.14 767.58​
Max throughput per process 4763.04 4769.23 767.74​
Avg throughput per process 4762.53 4758.21 767.65​
Min xfer 4193664.00 KB 4176640.00 KB 4193536.00 KB​
Children see throughput for 8 rewriters 36189.47 36842.59 5938.99​
Parent sees throughput for 8 rewriters 36189.12 36842.26 5938.93​
Min throughput per process 4523.49 4602.7 742.25​
Max throughput per process 4524.1 4609 742.52​
Avg throughput per process 4523.68 4605.32 742.37​
Min xfer 4193792.00 KB 4188672.00 KB 4192896.00 KB​
Children see throughput for 8 readers 4755369.06 4219519.44 541271.02​
Parent sees throughput for 8 readers 4743778.25 4219187.22 540861.93​
Min throughput per process 593043.56 527155.69 58198.66​
Max throughput per process 596547.44 527774.56 71080.23​
Avg throughput per process 594421.13 527439.93 67658.88​
Min xfer 4169728.00 KB 4189696.00 KB 3438592.00 KB​
Children see throughput for 8 re-readers 4421317.25 4648511.62 4961596.72​
Parent sees throughput for 8 re-readers 4416015.2 4648083.46 4593093.04​
Min throughput per process 539363.12 580726.5 31874.36​
Max throughput per process 558968.81 581585 4619527.5​
Avg throughput per process 552664.66 581063.95 620199.59​
Min xfer 4048000.00 KB 4188288.00 KB 29184.00 KB​
Children see throughput for 8 reverse readers 5082555.84 1929863.77 9773067.07​
Parent sees throughput for 8 reverse readers 4955348.81 1898050.51 9441166.11​
Min throughput per process 426778 183929.8 7041.22​
Max throughput per process 991416.19 381972.75 9710407​
Avg throughput per process 635319.48 241232.97 1221633.38​
Min xfer 1879040.00 KB 2075008.00 KB 3072​
Children see throughput for 8 stride readers 561014.62 179665.19 11963150.31​
Parent sees throughput for 8 stride readers 559886.81 179420.54 11030888.26​
Min throughput per process 57737.73 19092.56 11788.88​
Max throughput per process 107340.9 36176.44 9238066​
Avg throughput per process 70126.83 22458.15 1495393.79​
Min xfer 2268288.00 KB 2221312.00 KB 5376​
Children see throughput for 8 random readers 209240.16 93627.64 13201790.94​
Parent sees throughput for 8 random readers 209234.74 93625.12 12408594.53​
Min throughput per process 25897.38 11702.19 72349.58​
Max throughput per process 27949.81 11704.4 9059793​
Avg throughput per process 26155.02 11703.45 1650223.87​
Min xfer 3886464.00 KB 4193536.00 KB 34688​
Children see throughput for 8 mixed workload 91072.29 24038.27 Too slow - cancelled
(the md+ext4 run was stopped at this point, so the third column is missing from here on)
Parent sees throughput for 8 mixed workload 17608.63 23877.52
Min throughput per process 2305.94 2990.74​
Max throughput per process 20461.23 3021.75​
Avg throughput per process 11384.04 3004.78​
Min xfer 472704.00 KB 4151296.00 KB​
Children see throughput for 8 random writers 36929.77 37391.58​
Parent sees throughput for 8 random writers 36893.05 36944.62​
Min throughput per process 4615.37 4643.6​
Max throughput per process 4617.82 4704.09​
Avg throughput per process 4616.22 4673.95​
Min xfer 4192128.00 KB 4140416.00 KB​
Children see throughput for 8 pwrite writers 37133.23 36726.14​
Parent sees throughput for 8 pwrite writers 37131.31 36549.34​
Min throughput per process 4641.49 4575.27​
Max throughput per process 4641.97 4600.88​
Avg throughput per process 4641.65 4590.77​
Min xfer 4193920.00 KB 4171008.00 KB​
Children see throughput for 8 pread readers 4943880.5 4806370.25​
Parent sees throughput for 8 pread readers 4942716.33 4805915.05​
Min throughput per process 617373.75 595302.62​
Max throughput per process 619556.38 603041.88​
Avg throughput per process 617985.06 600796.28​
Min xfer 4179968.00 KB 4140544.00 KB​






phoronix test suite

results are available here (xml result file here)
Comparison including md+ext4 raid6 here



Conclusions

Ignoring some of the nonsensical numbers found above, which indicate pure cache effects, the ZFS On Linux drivers are doing extremely well, and Ubuntu on average manages to surpass FreeBSD: that was a surprise...
Maybe it's time to port FreeNAS to a Linux kernel? That would be a worthy project...
Sorry, the tables don't render very well here, so I created a blog entry at:
http://jyavariousideas.blogspot.com.au/2013/11/zfs-raidz-benchmarks-part-2.html
 

cyberjock (Inactive Account, joined Mar 25, 2012, 19,526 messages)
Nice writeup. I knew you'd have problems with noise when you wrote that you'd be using it for a MythTV storage box. That almost always means "in the same room I spend significant time in". Then I got a good laugh at your comment "but the choice ends up being between 'Normal' which wouldn't let anyone sleep, and 'Heavy' which would for sure wake-up the neighbours". That's why mine stays in the basement. As long as I don't get an F0 tornado in my basement I'll be happy.

Benchmarks are quite often pointless. ZFS in particular requires careful setup and execution to get numbers that mean anything. You have seen this firsthand: your tests weren't properly set up for your hardware, hence the absurdly high values you got. Not to mention the caching issue, but even small details like the ashift of your zpool in comparison to your iozone test and the hard drives themselves can give you benchmarks that are complete BS. That has less to do with ZFS and more to do with planning around the physical characteristics of how data is stored.

I have a friend who uses ZFS on Linux. The performance is not impressive by any stretch, and benchmarks (as you've demonstrated first hand) are not everything. The FreeNAS box smokes it despite the Linux box having clear advantages in every possible way.

Honestly I really have to wonder why, after you dismiss a bunch of benchmarks that are obviously artificially inflated, you then turn around and say "Maybe it's time to port FreeNAS to a Linux kernel? That would be a worthy project...". The more logical conclusion, if you were trying to be objective, would have been "damn, I need to do more research... something is not right". It sure sounds like you are trying to sell Linux despite having evidence that should have been a warning sign that your results are completely invalid.





Pro-Tip: If you have to dismiss a benchmark and you can't point to the solid, irrefutable, provable and repeatable reason why, you should stop and go back to the drawing board (i.e. you've missed something important). If some benchmarks were clearly artificially inflated, what's to say that just as many weren't artificially deflated for similar reasons? Failure of the scientific method in my opinion. This should have been obvious since everything I've read from the guys working on the ZOL project has said "yes, it's slow... it's going to be slow until we optimize the code", but somehow you came to the conclusion that it's faster. That should have been another red flag in my opinion.

Note: In your original review from 2009 you said "ZFS has passed every attempts to crash it... However, FreeBSD doesn't handle hotplug properly.. So any change of drive due to failure will have to be done with the machine offline unfortunately." Actually, hotplug is not what matters for disk replacement. What you meant to say was hotswap (there is a difference). Hotswap is supported in FreeBSD but depends on the hardware used and the associated driver (this is clearly documented in the FreeNAS manual). So your assessment is incorrect, and I don't think you understand the difference between hotswap and hotplug in the context you used it in.

I'd definitely be interested in seeing a "v2.0" of this after you figure out all of your mistakes and can do a worthy deep-level comparison of ZFS on FreeNAS compared to ZFS on Linux. I can tell you from my personal experience that FreeNAS smokes ZFS on Linux in the performance arena. Not to mention ZFS on Linux is not kernel mode, so it will always run in userspace (and therefore always have a significant performance penalty). To really do a fair apples-to-apples comparison in the performance arena you'd have to compile it in kernel mode (assuming that is even possible). Even then, many people will dismiss your benchmark results because kernel mode isn't the directly supported model.

(What would be interesting in my opinion would be to compare FreeNAS to ZOL in kernel and non-kernel).
 

jyavenard (Patron, joined Oct 16, 2013, 361 messages)
I can tell you from my personal experience that FreeNAS smokes ZFS on Linux in the performance arena. Not to mention ZFS on Linux is not kernel mode, so it will always run in userspace (and therefore always have a significant performance penalty). To really do a fair apples-to-apples comparison in the performance arena you'd have to compile it in kernel mode (assuming that is even possible). Even then, many people will dismiss your benchmark results because kernel mode isn't the directly supported model.

(What would be interesting in my opinion would be to compare FreeNAS to ZOL in kernel and non-kernel).


Well, no offence, but your experience is out of date on this topic...

ZOL is now a full kernel module, and it entered "stable" release several months ago... So far the overall consensus is that performance is impressive.
My comparison here is FreeBSD vs ZOL "in kernel", as you call it.

Edit: have a read: http://zfsonlinux.org/ ("The native Linux kernel port of the ZFS filesystem.")

I wouldn't even have considered Linux if it were still the old FUSE drivers (which ran in userspace). As I mentioned, I've used FreeBSD in professional server environments for over 15 years, and started using Linux a few years back for my home servers (I'm one of the MythTV developers)... (I have used Linux for close to 20 years as part of my job, just not for servers.)


I dislike Linux in many ways, mostly due to its constantly changing architecture and the need to keep significantly updating code that would otherwise have been just fine on FreeBSD (I wrote some kernel drivers for FreeBSD over 10 years ago; they still compile today).

Yes, many of the tests are "invalid", but I'm comparing apples with apples, so even a test that only rates caching serves some purpose, and more often than not Linux has the edge over FreeBSD in that area too.
If you look at my original ZFS review from 2009, I actually went into the detail of describing every single result (distinguishing CPU cache, L2 cache, driver cache, etc.); I wasn't going to redo that today. It's all a matter of how you read the results and what to look for.
The important conclusions were:
- Caching is more often than not more efficient on Linux than on FreeBSD.
- Physical speed is *always* faster on Linux. I'm not sure why, and I'm puzzled by it. FreeNAS uses version 14 of the drivers, Ubuntu 13.10 uses version 15, and I flashed my card with v16 (Supermicro's latest). I've been trying to download the 764MB v16 package for Linux all day; LSI's download speed is so low that the session times out after one hour, and I need 2.5 hours.

I have run far more than this last round of tests, but unfortunately my 3-year-old son managed to get into my office and play with the caddies. So the results of one week's worth of tests are now gone (I had compared RAID60, a stripe of 12, a stripe of 6, and RAID5, all adjusted ZFS-wise of course).

One test I did run was an actual use case of my intended setup: a mythbackend server with 5 DVB-T acquisition cards, recording 6 MPEG-2 streams to the server via NFS.
Surprisingly, the Linux setup handled locking more appropriately, and while there were plenty of spare cycles on all machines, the Ubuntu one had more leeway.

Obviously you don't know my background, but for me to state all of this is a big call. I've been in the FreeBSD camp for a *very* long time, and whenever a discussion came up about what to use for servers, FreeBSD was always my answer.

So from a pure performance point of view it's head to head between the two, which is an impressive feat when ZOL itself states: "it should be made clear that the ZFS on Linux implementation has not yet been optimized for performance. As the project matures we can expect performance to improve."

I've cloned the FreeNAS repository and will have a quick go at seeing what's involved in having it run on a Linux kernel. I've already created a USB stick that uses a similar configuration to FreeNAS: it boots in read-only mode and uses an overlay to keep the rest of the system in a ramdisk or on the ZFS pool (similar to the Ubuntu live CD).
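A minimal sketch of that kind of overlay on Linux, assuming a recent mainline overlayfs (the 2013-era Ubuntu kernels carry an older out-of-tree variant with slightly different mount options; device and mount-point names here are made up):
Code:
mount -o ro /dev/sdX1 /mnt/usb          # read-only root from the USB stick
mount -t tmpfs tmpfs /mnt/rw            # writable layer in RAM (could also be a ZFS dataset)
mkdir -p /mnt/rw/upper /mnt/rw/work
mount -t overlay overlay \
    -o lowerdir=/mnt/usb,upperdir=/mnt/rw/upper,workdir=/mnt/rw/work /mnt/root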

I don't want to start a Linux vs FreeBSD debate; like all religious debates, it never ends well.

Linux lets me change the fan speed (I can run the fans at 1600rpm and they become a whisper), and I get SGPIO working with the LSI. I don't have a basement; my only option would be the attic, but the temperature there can reach 60 degrees in summer and my CPU wouldn't like that.
Silly what it comes down to :) I just wanted to lower the speed of my fans...
 

jyavenard (Patron, joined Oct 16, 2013, 361 messages)
ZFS in particular requires careful setup and execution to get numbers that mean anything. You have seen this firsthand: your tests weren't properly set up for your hardware, hence the absurdly high values you got. Not to mention the caching issue, but even small details like the ashift of your zpool in comparison to your iozone test and the hard drives themselves can give you benchmarks that are complete BS. That has less to do with ZFS and more to do with planning around the physical characteristics of how data is stored.


I wanted to reply separately to this one...

Yes, for bonnie and the Phoronix ones: those are generic benchmarks, but as per my comment above, when comparing apples with apples even what appears useless isn't always so.

However, I disagree that the tests aren't properly set up for my hardware. The iozone one is actually completely tailored to it: the arguments were chosen so that writes, rewrites and random reads always hit the physical disks, and those are the ones that matter most.

Having 128k segments also hits a special case in the ZFS code on both FreeBSD and ZOL, in that sync data is written directly to the pool rather than to the ZIL.
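If I remember correctly, that threshold is tunable on both platforms; the knobs below are what I believe control it (names and defaults may differ between versions, so treat this as a pointer rather than gospel):
Code:
# FreeBSD: sync writes larger than this are written straight to the pool instead of into the ZIL log records
sysctl vfs.zfs.immediate_write_sz
# ZFS On Linux: the same tunable is exposed as a module parameter
cat /sys/module/zfs/parameters/zfs_immediate_write_sz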

It also showed that FreeBSD is twice as fast as ZOL in random reads (ext4 shits itself, and I cancelled the test as I couldn't be bothered waiting any more: it took over 8 hours to run half the tests using ext4+md!), but then ext4 has never been known to be good with big files...
 

cyberjock (Inactive Account, joined Mar 25, 2012, 19,526 messages)
Well, no offence, but your experience is out of date on this topic...

ZOL is now a full kernel module, and it entered "stable" release several months ago... So far the overall consensus is that performance is impressive.
My comparison here is FreeBSD vs ZOL "in kernel", as you call it.

Edit: have a read: http://zfsonlinux.org/ ("The native Linux kernel port of the ZFS filesystem.")

I wouldn't even have considered Linux if it were still the old FUSE drivers (which ran in userspace). As I mentioned, I've used FreeBSD in professional server environments for over 15 years, and started using Linux a few years back for my home servers (I'm one of the MythTV developers)... (I have used Linux for close to 20 years as part of my job, just not for servers.)

I do stand corrected on the kernel module. I was under the impression it wasn't a kernel module, but it turns out that it is; the detail I misunderstood is that it will never be included in Linux itself because of the licensing restrictions. But it most certainly is a kernel module. My experience with Linux is quite limited, as I've only been playing with it for about 8 months. In fact, I first set up ZOL on a spare system a few days after the official announcement that it was "production ready". I remember going to the forums, and everyone and their mother was complaining about the poor performance.

Actually, I did unintentionally use the FUSE version for about 30 minutes when I was trying to set up ZFS on Linux. I figured out the error when I tried to do something like create the pool and it returned something like "parameter doesn't exist". Other than that, my experience is strictly with the ZOL project, on a full SSD pool as well as a 5-disk RAIDZ1, a 10-disk pool with RAIDZ1 in 2 vdevs, and a 10-disk striped pool (I just had to experiment to see what did and didn't work well).

Overall, I'm not too worried about how good/bad/invalid the results of your tests are. At the end of the day I don't think FreeBSD has too much to fear from Linux. Each has its own benefits and detractors. Linux seems to do well on the desktop and acceptably on servers, while BSD seems to be the opposite. Personally, I think ZFS's biggest competition down the pipeline is BTRFS. From what I've read, BTRFS plans to implement many features that may make it suitable as a ZFS replacement, though probably not within the next 3-5 years from what I understand. ZFS development in the open-source world seems very hindered right now because of its complexity and the very small number of developers with the necessary knowledge and skills to add more features. Development seems very fragmented and disorganized, with difficulty even agreeing on which features to add next. Only time will tell how this all plays out.
 

jyavenard (Patron, joined Oct 16, 2013, 361 messages)
Netgear has made a prosumer-grade NAS using btrfs; it scored very well in the SmallNetBuilder review (better than the iXsystems Mini Plus).
Personally I think it's a lame duck; if you read their dev mailing list, they are still going back and forth on fundamental design decisions. There are still bug reports being logged, and no one seems to use it in production; it seems all its users are directly related to the development :)
That there are few developers on ZFS isn't much of a problem: I still use JFS on my Myth box, and development on it ended what, 15 years ago? Or even NTFS: when was the last time it got updated?


As I said, I was only reviewing the performance of ZOL vs FreeBSD, and I was amazed at how well it performed. Last time I played with it, a year ago, it was terrible... and there were too many things missing.
I just found out it's missing NFSv4 ACLs, so you're stuck with the POSIX ones (in fairness, it took a while for FreeBSD to support those too).
Linux NFS is in-kernel, which is a big advantage, and it can be shared directly from ZFS (and so can SMB shares).


While I find ZFS performance acceptable on Linux, there's a long way to go before I trust it with my data. What's the point of going with RAIDZ2, ECC RAM, etc. if I then use a very new driver that could screw everything up?..

At the end of the day, the only things annoying me about FreeNAS/FreeBSD today are that I can't control the fan speed, SGPIO support is nonexistent, and I haven't managed to get the UPS (aeon 5130) working yet. I'd also like a nice UI for the UPS status (btw, that UPS too is damn noisy!).
If I can resolve those within the next couple of days, Linux vs BSD is a non-issue.

I'm in the process of transferring my MythTV video collection (8TB) to the new server, and it's good timing: as I was copying the files I got two emails from smartctl monitoring about /dev/sdb having unrecoverable sector errors.
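(For anyone wanting to check the same thing by hand, something like this does the job; the device name is obviously specific to my box:)
Code:
smartctl -a /dev/sdb | egrep -i 'reallocated|pending|uncorrectable'   # the SMART counters that matter for dying sectors
smartctl -t long /dev/sdb    # queue an extended self-test; results show up later in the -a output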

I hope this thread will also highlight that while everything may look nice on paper (and I did do my homework), you're bound to have missed something: for me it was how loud the case is, and how deep (I have to purchase a new cabinet).
 

cyberjock (Inactive Account, joined Mar 25, 2012, 19,526 messages)
At the end of the day, the only things annoying me about FreeNAS/FreeBSD today are that I can't control the fan speed, SGPIO support is nonexistent, and I haven't managed to get the UPS (aeon 5130) working yet. I'd also like a nice UI for the UPS status (btw, that UPS too is damn noisy!).
If I can resolve those within the next couple of days, Linux vs BSD is a non-issue.

If you have any electronics background you could easily make (or buy) a PWM generator to control the fans manually. If your Supermicro fans are anything like mine, they can draw a heck of a lot of current. But you can do some cheating... My motherboard (Supermicro X9SCM-F) has PWM on it, and thanks to a 1:5 splitter like http://www.ebay.com/itm/Evercool-Br...611?pt=LH_DefaultDomain_0&hash=item46073a89db I power the fans via a 4-pin Molex while the motherboard regulates the speed of all the fans together. In your case the motherboard doesn't give you much control over fan speed, but you can buy a PWM controller like http://www.ebay.com/bhp/pwm-controller (note that that particular controller isn't going to work, but if you shop on eBay you should be able to find something for a rackmountable case that you can mount inside). All you have to do is find a controller that provides a PWM signal at the right frequency for fan control and buy it. I've seen people buy them for less than $10 on eBay and have excellent success with their server. If you are scared of a little soldering or customization to make a little controller like that mount in your case, you might want to ask a friend for help.

As for SGPIO... good luck? I don't care about the lights that much personally. Mine work fine on my Norco 4224 (although seeing the activity light is damn near impossible because they are so dim, even with constant disk usage). Not really sure why you had the comment in the beginning about "This is great to identify which disk is faulty as you can't always rely on the device name provided by the OS." I always reliably identify a disk by its device. For one, the GUI will give you serial numbers for all of your disks relative to the device ID. I always just go to the CLI and do smartctl -a /dev/X | grep Serial, and then I know precisely which disk is bad by the serial number. Then it's just a matter of grabbing the right disk and checking out the label. If you have hundreds of disks that would suck, but in a hot-swap case like yours it should take a minute or two tops. If you label each disk with a sticker carrying the last 3 digits of its serial, you'll pretty much know which physical disk to remove too. The only exception would be a disk that isn't detected by the system, in which case you'd have to write down every disk serial and look for the one you didn't write down. There's always a pattern from disk IDs to SATA ports, so it shouldn't normally require you to pull every disk in any case.
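Something like this quick loop dumps every serial at once (just a sketch; kern.disks lists every disk in the system, including your boot device):
Code:
for d in $(sysctl -n kern.disks); do
    printf '%s: ' "$d"
    smartctl -i "/dev/$d" | grep -i 'serial'   # -i prints the identity block, including the serial number
done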

As for the UPS... good luck? One thing I've noticed is that UPS support is a bane for many users. Here in the USA, if you buy a UPS from any of the major brands it will pretty much work with some set of settings that FreeNAS has. Outside the USA it seems to be a whole different game: there are apparently tons of UPS manufacturers out there with small slices of the market and no FreeBSD support. For example, I've never heard of Aeon. The only advice I can give is to try every driver that FreeNAS has and use Google like it's a religion. You might be forced to buy a new UPS though (and from what I've seen that won't be cheap where you are). Here CyberPower, APC, and Tripp Lite seem to be the biggest brands (which 99% of users seem to end up with by convenience and price). Outside the USA those brands are either not available or outrageously expensive. :(

Edit: I can't find the exact frequency computer fans run at, but if memory serves me right it's 21-28kHz (0-100%), so something like http://www.ebay.com/itm/DC-12V-36V-...584?pt=LH_DefaultDomain_0&hash=item2ecbc7a5d8 would work. That controller looks like it maxes out at 25kHz, so you might not be able to go above about 50% fan speed, but I don't think you want to go above 50% anyway.
 

jyavenard (Patron, joined Oct 16, 2013, 361 messages)
Sorry, the UPS is an Eaton (thanks, iPhone).
http://powerquality.eaton.com/dotnetpages/skupagemobile.aspx?productID=48&mobile=true&cx=3

Far from an unknown brand! IMHO far superior to CyberPower.
It's also on the list of FreeNAS-supported UPSes.

Regarding fan speed, I did mention in my first post that I'm not willing to use a voltage adapter. I want to lower the fan speed, sure, but I also want to be able to make the fans scream when required... something fancontrol does very well on Linux.

I actually think the motherboard does a great job controlling the fans. It responds extremely quickly to CPU usage and changes in temperature, faster than most motherboards I've used; it just won't go below 2600rpm. The fans themselves are very good too, with a great design: you can swap them extremely easily, with no cables to disconnect. Interestingly, as soon as you unplug a fan, the others take over and go into overdrive.
You can tell that everything in this chassis is designed for redundancy and easy servicing. It came with a high price tag too, and noise :)

As for SGPIO, I should mention that my aim is to have a setup I can install at a friend's house. So when they have a problem, I can log in, detect which drive is faulty, and make it blink (the Supermicro has a nice SAS backplane with two status LEDs per bay; when you make the fault/locate LED blink you can't miss it, and it's distinct from the blue activity LED). Each disk bay has its own set of LEDs.
Then you know in an instant which disk to remove and replace: no fussing around, no label that can get lost or become unreadable over time, no need to take the server down to safely check each serial number...

It's just good practice...

The MegaRAID utility lets you use SGPIO, but only if the card is in RAID mode :( I get nothing with the IT firmware; the MegaRAID utility doesn't even see the adapter.
 

Dusan (Guru, joined Jan 29, 2013, 1,165 messages)
Sorry, the UPS is an Eaton (thanks, iPhone).
http://powerquality.eaton.com/dotnetpages/skupagemobile.aspx?productID=48&mobile=true&cx=3

Far from an unknown brand! IMHO far superior to CyberPower.
Exactly. Eaton is actually the main NUT (Network UPS Tools) supporter:
http://www.networkupstools.org/acknowledgements.html#Eaton
http://powerquality.eaton.com/opensource/Default.asp
Therefore it's no surprise that Eaton/MGE UPSes have excellent support in NUT. The 5130 should work with the usbhid-ups driver.
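(Outside of the FreeNAS GUI, the equivalent plain-NUT configuration is just a ups.conf section along these lines; the section name is arbitrary:)
Code:
# /usr/local/etc/nut/ups.conf
[eaton5130]
    driver = usbhid-ups
    port = auto
    desc = "Eaton Powerware 5130"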
I'd also like a nice UI for the UPS status (btw, that UPS too is damn noisy!).
There's currently no UPS status in the FreeNAS GUI. You can get the UPS status data by running upsc <upsname> in the CLI. Log of the most important values is in /var/log/ups.log.
One item on my to-do list is to add the nut collectd plugin and add UPS status to the reporting graphs (load, battery charge, voltage, ...), but I'm not sure I'll have it ready for 9.2.0.
 

jyavenard (Patron, joined Oct 16, 2013, 361 messages)
Exactly. Eaton is actually the main NUT (Network UPS Tools) supporter:
http://www.networkupstools.org/acknowledgements.html#Eaton
http://powerquality.eaton.com/opensource/Default.asp
Therefore it's no surprise that Eaton/MGE UPSes have excellent support in NUT. The 5130 should work with the usbhid-ups driver.

I selected "Eaton ups 5 Powerware 5130 (usbhid-ups)" in the drop-down list,
selected the port /dev/ugen1.4
(dmesg shows the entry: ugen1.4: <EATON> at usbus1),
made sure the UPS mode is set to master,
selected the shutdown mode "UPS reaches low battery",
and left everything else at the defaults.

I read about permission issues with it; /dev/ugen1.4 is a link to /dev/usb/1.4.0, so I did chmod a+rw /dev/usb/1.4.0 just in case.

In the log I see (and have for several days, despite multiple reboots):

"Can't connect to UPS [ups] (usbhid-ups-ups): No such file or directory"

Maybe I should use the serial port instead; USB has always given me grief with UPSes anyway...

There's currently no UPS status in the FreeNAS GUI. You can get the UPS status data by running upsc <upsname> in the CLI. Log of the most important values is in /var/log/ups.log.
One item on my to-do list is to add the nut collectd plugin and add UPS status to the reporting graphs (load, battery charge, voltage, ...), but I'm not sure I'll have it ready for 9.2.0.


that's good to know; let me know if I can help...
 

warri (Guru, joined Jun 6, 2011, 1,193 messages)
Thanks for the comprehensive write-up and the following discussion. Interesting read for sure!
 

jyavenard (Patron, joined Oct 16, 2013, 361 messages)
Ok.

I've found the upsd issue...
upsd is started as-is, which gives the error "Can't connect to UPS [ups] (usbhid-ups-ups): No such file or directory".

According to the NUT documentation, upsdrvctl must be started first.
So I killed upsd, then ran /usr/local/libexec/nut/upsdrvctl start,
then restarted upsd, and it's now all good.
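In other words (paths as found on my FreeNAS install; upsd itself lives under /usr/local/sbin if I'm not mistaken):
Code:
killall upsd                              # stop the daemon that was started without a driver
/usr/local/libexec/nut/upsdrvctl start    # start the usbhid-ups driver
/usr/local/sbin/upsd                      # start upsd again; it now finds the driver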

Also, the documentation should be updated: with the usbhid-ups driver there's no need to specify a port; any value will do as it's ignored, and the examples just use "auto".

Is anyone else seeing upsdrvctl not being started, or should I report a bug?
 

Dusan (Guru, joined Jan 29, 2013, 1,165 messages)
Is anyone else seeing upsdrvctl not being started, or should I report a bug?
It works correctly for me. This rc.d script is used to start/stop NUT: /etc/local/rc.d/nut
And it does start/stop upsdrvctl as needed:
Code:
nut_prestart() {
        ${nut_prefix}/libexec/nut/upsdrvctl start
}
 
nut_poststop() {
        ${nut_prefix}/libexec/nut/upsdrvctl stop
}

It works reliably in my case with an Eaton Ellipse ECO 650 (usbhid-ups).
 

jyavenard (Patron, joined Oct 16, 2013, 361 messages)
I see...

/etc/rc.conf doesn't define nut_enable="YES"; that may just be down to how things are done in FreeNAS,

but when I try to start it using the startup script, it gives me:
# /etc/local/rc.d/nut status

Will not 'status' nut because nut_enable is NO.

Edit: Duh, what an idiot... in Services, UPS was off...
 

cyberjock (Inactive Account, joined Mar 25, 2012, 19,526 messages)
I see...

/etc/rc.conf doesn't define nut_enable="YES"; that may just be down to how things are done in FreeNAS,

but when I try to start it using the startup script, it gives me:
# /etc/local/rc.d/nut status

Will not 'status' nut because nut_enable is NO.

Edit: Duh, what an idiot... in Services, UPS was off...

You got schooled by FreeNAS essay! :P

It's so funny to see others miss the forest for the trees. Glad I'm not alone in doing some dumb things. :P
 

Dusan (Guru, joined Jan 29, 2013, 1,165 messages)
The generated rc.conf is here: /tmp/rc.conf.freenas
The proper way to use rc.d scripts is to use the service command: service nut status
But the result will be the same in this case.
The nut_enable="YES" line is added to /tmp/rc.conf.freenas by /etc/rc.conf.local based on the config DB. Do you have UPS enabled in the GUI?
 

jyavenard (Patron, joined Oct 16, 2013, 361 messages)
The generated rc.conf is here: /tmp/rc.conf.freenas
The proper way to use rc.d scripts is to use the service command: service nut status


it's hard to kill decades of habits :)

The nut_enable="YES" line is added to /tmp/rc.conf.freenas by /etc/rc.conf.local based on the config DB. Do you have UPS enabled in the GUI?


Somehow I'd got it into my head that I had already configured the UPS, and the upsd process had been started and was running; it just wasn't connecting to anything...

It would be a bit more user-friendly if the page where you define the settings also let you turn the service on/off... but I'm being fussy...

Now, how do I get X forwarding over SSH when accessing the ports jail? It seems that xmbmon can access the W83627THF super I/O, and the NCT6776D looks very similar judging from the Linux kernel driver, just a different set of IDs... (mbmon for some reason doesn't work for me, complaining that the binary is not suid root when it is).
 
jkh (Guest)
Introduction


I found myself in need of a NAS solution: something to store all my files and provide a good level of safety and redundancy.

ZFS was always going to be my choice of filesystem; it works well, provides lots of useful features (especially snapshots) and is very reliable.

I looked at the existing professional solutions, but none of them provided the flexibility I was after.
FreeNAS' backer iXsystems has an interesting FreeNAS Mini, but it only allows a maximum of four disks, and their professional solutions were just outside my budget.


It's been a while since I last did ZFS benchmarks (see the zfs-raidz-benchmarks article I wrote in 2009).
So I'm at it again.

Thanks for putting so much work into a comprehensive benchmark comparison using the same hardware in each case!

Others have already replied at considerable length about the testing methodology, tuning best practices and so forth, so I will endeavor not to revisit those specific topics, but I will say that there are benchmarks and there are benchmarks. Our current benchmark of choice, for example, is the SPEC rig, and not because SPEC is particularly awesome as benchmarks go but because it also tests multi-client load, which is a hugely important factor in deciding how well a storage appliance will actually work in real-world deployment scenarios. You can hammer on the filesystem locally all day long and post your numbers, but most enterprise customers will simply yawn indulgently and come back with "That's all well and good, but how many NFS IOPS/sec can you offer me? How does that number scale as the load increases? Can you handle 500 simultaneous clients without shitting the bed?" and so on.

That's why testing the entire stack is rather more interesting, and in a variety of configurations: 4x10GbE configured as a single LAGG serving NFS, or multipath iSCSI talking to a number of VMware heads. Again, what's the real-world performance of the whole setup?

I'm also not even going to try to assert that FreeBSD always does better than Linux in all these scenarios because, well, it doesn't. FreeBSD's NFS implementation really needs work, for example, and user-land iSCSI (istgt) has some real limitations (fortunately, kernel iSCSI is coming soon). Looking at SPEC rig tests (both NFS and CIFS), or tests with lots of VMware clients hammering the box over iSCSI, is how we'll get those numbers up, since now we're measuring what people actually need to be fast!

This is similar to the exercise I went through for many years with Mac OS X. Various folks would run lmbench against it vs some Linux distribution and come back yelping about the time it took to do X or Y on OS X, and I'd look into it. Sometimes the tests uncovered a valid issue and we'd fix it, but even more often it uncovered an optimization that was done specifically to enable, say, taking 12 channels of 24 bit audio simultaneously without undue latency or interruption of the data stream, or being able to do 1080p video capture / playback (now 4K video!), and this required some certain pre-allocation or deferred context-switching behavior that would pessimize the benchmark since audio/video is an important market for the Mac. Sometimes there is an explicit trade-off required - you just can't do X faster without making Y slower, and wherever "Y" was a synthetic benchmark and "X" a real-world scenario, you can guess the one we always prioritized!

Again, I'm not going to claim that FreeBSD has always made the same wise and considered trade-offs, but you can guess what FreeNAS is going to be prioritizing as its evolution continues!

- Jordan
 

jyavenard (Patron, joined Oct 16, 2013, 361 messages)
I can see exactly where you are coming from, and I'm guessing you work for iXsystems or a company providing a similar type of product/service: enterprise-class storage systems.

The tests I ran were mostly for my own, typical, immediate use: a few users, NFS and Windows clients. I won't use iSCSI at home, nor will I use a virtual environment: the projects I'm currently working on don't deal well without direct hardware access (and that includes MythTV and DVB/ATSC acquisition cards).
I doubt anyone using FreeNAS at home cares much about how it behaves with 500 simultaneous connections, and home users seem to make up the greater percentage of FreeNAS users.

I can redo the tests and focus on IOPS. I can't compare with md+ext4 anymore, though: it takes too long to build an array with md and have it deliver optimal performance (over 12 hours just to create the initial array).

It's unfortunate that FreeNAS seems to be taking the enterprise path; the UI will naturally take a more complex approach, which will raise the (already high) bar of entry for home users and beginners.
As it is, I don't think FreeNAS is for everyone: it requires the user to have an understanding of the underlying filesystem, ZFS.
Other consumer NAS makers have hidden that completely; you don't even know what they are using. Netgear did the same with their btrfs-based NAS: for all you know it could be UFS, FAT or ext... But I can see this discussion going off on a tangent, so I won't go deeper here.
That's iXsystems' focus and it's their prerogative..



As you mentioned (and as I also alluded to in an earlier post), Linux NFS usually provides greater performance than FreeBSD's, being in-kernel (though it lacks NFSv4 ACLs).
Nexenta is something I looked at too, simply because it provides (like all Solaris-based systems) in-kernel SMB, and performance there is impressive. I don't like the Nexenta distribution license, however (you have to pay a very hefty fee if you want more than 16TB of storage). It also seems to deal better with smaller amounts of memory (Solaris virtual memory handling has always been better than the rest, though).

FreeNAS is the first system I looked at, simply because at no point did I think Linux would be appropriate for ZFS: I'm always amazed at how quickly things move there.
Well, there's always room for yet another NAS distribution, this time focused on home setups :)
 
jpaetzel (Guest)
I'd be very interested in knowing what sysctls and tunables you had set as well as the output of zpool status.

We also have a number of big performance improvements just around the corner in FreeNAS 9.2.0. Would you be interested in a sneak peek at that?

To answer Jordan's point about performance at scale: it's fairly easy these days to build a storage rig that will saturate GigE. Unless you have multiple clients, are using a block technology that can do MPIO, or can afford 10GbE, you're limited by the performance of GigE, in which case it doesn't make much difference whether your volume can do 200 or 2000 MB/sec.

If you are in a bigger scenario, where you have multiple clients, can do MPIO, have 10GbE, or some combination of the above, then you almost certainly are concerned about scalability, in which case Jordan's observations do apply.

However, please don't be too concerned about our focus. I just spent many hours last week dialing in mDNS and updating FreeNAS to a new version of Netatalk (AFP). We do care about home users. In fact we have a big GUI revamp coming down the pipe in 2014 specifically to address the type of user that just wants the damn thing to work with no fuss or muss.
 