Hardware Recommendations Guide

Hardware Recommendations Guide Discussion Thread (Rev 1e, 2017-05-06)

HoneyBadger

Mushroom! Mushroom!
Joined
Feb 6, 2014
Messages
1,899
Thanks
621
But what's the reliability on those cheapo SSDs? Any time I've gotten a cheap SSD, it's failed sooner than the USB drives I've gotten.

In the four years I've been using FreeNAS on USB drives, I've killed a total of one, and that one was a used one anyway. Get good quality USB drives, you'll be fine.

Of course, a decent SSD will be better than a USB drive, but if you'd rather not spend the money, I don't think it's a big deal.
I've had a similar experience with reliable USB 2.0 drives. A non-Micro SanDisk has been happily running my "home photos" box for years now.

If by "cheapo" you mean any of the questionably branded clones originating out of Shenzhen, then yes, those drives are suitable only for skeet shooting. Drives like the 40GB Intel 320s that are frequently recommended are "inexpensive" but not "cheap" - they'll be fine. I've even started to see DC S3500 80GB drives becoming available for under USD$20. Those will be fine and way, way faster.
 

dasti

FreeNAS Aware
Joined
Jun 11, 2014
Messages
63
Thanks
22
I've had all kinds of USB stick brands and formats fail on me over the past six years: SanDisk, Toshiba, Kingston, Samsung, ...
- the same brand and model tends to fail at exactly the same time, I mean the same day - so a mirror alone will not be enough
- the most compact ones have the shortest life span (and run very hot)
- I have a couple of USB sticks inside one machine that has been running flawlessly for six years

In conclusion, based on what I've experienced, if you have no choice but to use USB sticks, I'd recommend:
- make sure the system dataset is on the hard drives
- make backups of your configuration
- do not use micro ones (like the SanDisk Ultra Fit...)
- install them directly on the motherboard if possible so they benefit from good airflow (you might need an adapter accessory for this)
- as a boot drive, use a three-way mirror with 3 different brands (only supported at install time)
- keep 2 spare keys ready in your drawer
 
Joined
Jun 16, 2019
Messages
4
Thanks
0
I need a clarification regarding the IDE/SATA/SAS interfaces/adapters/controllers. The hardware recommendation guide says "Mechanical hard drives barely exceed SATA 1.5Gb/s speeds on a good day, meaning that SATA 3Gb/s is more than adequate for most uses. Naturally, SSDs may benefit from SATA 6Gb/s interfaces, particularly when using 10GbE networking." and, regarding additional SATA/SAS connectivity, "If more connectivity than is available from the PCH is desired, or if SAS is required (due to the use of expanders, for instance), the only reliable solution is to add..." - meaning that the controllers mentioned in the following paragraphs are only necessary if there are not enough ports on board.

I found a benchmark with data suggesting that on-board controllers might be unable to extract all of the performance from HDDs, even when the ports are rated high enough. If you Ctrl+F for "All SATA controllers are NOT created equal" you will find a table which summarizes the performance of an HDD and an SSD connected to a few different controllers:
Code:
1x 2TB a single drive - 1.8 terabytes - Western Digital Black 2TB (WD2002FAEX)

 Asus Sabertooth 990FX sata6 onboard ( w= 39MB/s , rw= 25MB/s , r= 91MB/s )
 SuperMicro X9SRE sata3 onboard      ( w= 31MB/s , rw= 22MB/s , r= 89MB/s )
 LSI MegaRAID 9265-8i sata6 "JBOD"   ( w=130MB/s , rw= 66MB/s , r=150MB/s )

1x 256GB a single drive - 232 gigabytes - Samsung 840 PRO 256GB (MZ-7PD256BW)

 Asus Sabertooth 990FX sata6 onboard ( w=242MB/s , rw=158MB/s , r=533MB/s )
 LSI MegaRAID 9265-8i sata6 "JBOD"   ( w=438MB/s , rw=233MB/s , r=514MB/s )

As you can see from the table, the onboard controllers cannot extract the LSI controller's level of performance from the HDD, even though one of those onboard controllers gets more than enough (for an HDD) performance out of the SSD (242MB/s vs 130MB/s). The explanation given for this phenomenon is the following:
If you are using the motherboard SATA connectors, they are not going to perform as well as a SATA expander or a dedicated raid card. Just because the SATA port says SATA 6 and comes with fancy cables does not mean the port can move data quickly. The onboard chipsets are normally the cheapest silicon the manufacturer can get away with.
I'd like to understand what's going on here. In particular, why is the on-board SATA controller in the Asus Sabertooth 990FX, which is capable of reading 242MB/s from the SSD, reading only 39MB/s from the HDD? It's as if it's not capping the speed at some ceiling but rather slowing it down by some factor.
How can I judge the actual performance of a given SATA controller (either on-board or a card I'm considering buying)?
 

Arwen

FreeNAS Expert
Joined
May 17, 2014
Messages
1,120
Thanks
547
@mpoleski,
I can't answer all your questions, but I do have a comment. The LSI MegaRAID controller contains a read & write cache. I don't know whether it's used in JBOD mode, but if so, it can warp the speed numbers.

Next, even if the built-in SATA ports are of good quality, it matters how they are connected. When using multiple SATA ports at the same time, (like in a RAID array), if the SATA controller chip is wired to too few PCIe lanes, (or lower-speed lanes), that is a limiting factor. That's not the case in the example you used above, which involved just 1 disk. Think of the connection like a funnel. Here are some per-lane numbers;

PCIe 1.x - 2.5Gbps
PCIe 2.x - 5Gbps
PCIe 3.x - 8Gbps, (but lower overhead than 1.x & 2.x)
PCIe 4.x - 16Gbps

So 4 x SSDs that can saturate 6Gbps SATA need 24Gbps. Do the math. That means;

10 or 11 lanes of PCIe 1.x, (due to the overhead issue of earlier PCIe)
5 or 6 lanes of PCIe 2.x, (due to the overhead issue of earlier PCIe)
3 lanes of PCIe 3.x
2 lanes PCIe 4.x, (new enough that not many systems have it yet)
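
If you want to redo this math for other drive counts or PCIe generations, here's a rough back-of-the-envelope sketch in Python (my own illustration; it only accounts for line-encoding overhead, so treat the lane counts as minimums):

Code:
import math

# Per-lane line rate in GT/s and encoding efficiency for each PCIe generation.
# PCIe 1.x/2.x use 8b/10b encoding (80% efficient); 3.x/4.x use 128b/130b.
PCIE_GENS = {
    "PCIe 1.x": (2.5, 8 / 10),
    "PCIe 2.x": (5.0, 8 / 10),
    "PCIe 3.x": (8.0, 128 / 130),
    "PCIe 4.x": (16.0, 128 / 130),
}

# SATA 6Gb/s also uses 8b/10b, so its real payload rate is ~4.8Gb/s per port.
SATA_PAYLOAD_GBPS = 6.0 * 8 / 10
N_SSDS = 4

needed = N_SSDS * SATA_PAYLOAD_GBPS  # ~19.2Gb/s of actual data
for gen, (rate, eff) in PCIE_GENS.items():
    per_lane = rate * eff  # usable Gb/s per lane
    print(f"{gen}: {math.ceil(needed / per_lane)} lanes "
          f"({per_lane:.2f} Gb/s usable per lane)")

Running it reproduces the counts above: 10, 5, 3 and 2 lanes (the "or 11"/"or 6" above allow for extra protocol overhead beyond the encoding).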

So a 4-port SATA chip on a desktop board might use only 2 lanes of PCIe 3.x, on the assumption that you are not going to use all the disks at the same time. But for a storage appliance, (like FreeNAS), this MATTERS.

Thus, the recommendation to use server-style system boards, server HBAs, (Host Bus Adapters for disks), and server-style software. Even then, performance requirements sometimes require you to look at the block diagram to determine the speed of internal components - like how many PCIe lanes are wired to the SATA controller chip, and what speed those lanes are.
 

Ericloewe

Not-very-passive-but-aggressive
Moderator
Joined
Feb 15, 2014
Messages
16,139
Thanks
3,867
There are a few things here that need to be addressed:
The benchmarks you quote are rather light on details about the actual workload for the disk tests. This alone makes the data useless.
Then there's the RAID controller used: it's biasing the results with its cache and, presumably, prefetching. To what degree naturally depends on the workload, which we don't know.

...one of those onboard controllers gets more than enough (for an HDD) performance out of the SSD (242MB/s vs 130MB/s). The explanation given for this phenomenon is the following:
It is not an explanation, it is a crummy Jedi mind trick. Intel's controllers are more or less on par with LSI's, although they have more limited bandwidth - the whole PCH shares four PCIe lanes, whereas the LSI controller is typically connected with eight lanes straight to the CPU. Either way, a single disk of any variety cannot saturate the controller's uplink.

I'd like to understand what's going on here. In particular why SATA controller on-board in Asus Sabertooth 990FX, which is capable of reading 242MB/s from the SSD, is reading only 39MB/s from the HDD.
Those aren't even read speeds, they're writes, as noted by the "w". But they're pretty miserable for a Samsung 840 Pro, so it sounds to me like a decent amount of random I/O is involved in the workload.
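
To put rough numbers on that: even a small fraction of random I/O drags an HDD down dramatically while barely denting an SSD. Here's a toy model in Python (my own sketch - the drive figures are plausible assumptions for illustration, not measurements from the article):

Code:
# Effective throughput of a workload that mixes sequential streaming
# with small random I/O, weighted by the bytes served at each rate.
def effective_mb_s(random_fraction, seq_mb_s, iops, block_kib=4):
    rand_mb_s = iops * block_kib / 1024  # random-I/O throughput in MB/s
    # Harmonic (time-weighted) mix: each byte is served at one rate or the other.
    return 1 / (random_fraction / rand_mb_s + (1 - random_fraction) / seq_mb_s)

# Assumed single-drive figures:
#   7200rpm HDD: ~150 MB/s sequential, ~120 random 4K IOPS
#   SATA SSD:    ~500 MB/s sequential, ~80,000 random 4K IOPS
for frac in (0.0, 0.01, 0.05):
    hdd = effective_mb_s(frac, 150, 120)
    ssd = effective_mb_s(frac, 500, 80_000)
    print(f"{frac:4.0%} random: HDD ~{hdd:6.1f} MB/s, SSD ~{ssd:6.1f} MB/s")

With just 1% of the bytes served as random 4K I/O, the model HDD falls from 150 MB/s to roughly 36 MB/s - right in the neighbourhood of the 39 MB/s figure above - while the SSD barely moves. That's the pattern the table shows.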

tl;dr - The article is poorly written and lacks rigour, so its conclusions aren't trustworthy.
 
Joined
May 10, 2017
Messages
706
Thanks
294
Just a heads-up: WD recently released a couple of SMR RED drives. While I haven't tested them with FreeNAS, they will likely suffer from the same poor performance as other similar SMR drives, at least during resilvers, so it might be a good idea to avoid them with FreeNAS.

2TB - WD20EFAX (single 2TB platter)
6TB - WD60EFAX (three 2TB platters)

AFAIK only these two are SMR, all the other currently available RED models are PMR.

These new SMR drives are also available in the Blue range:

2TB - WD20EZAZ
6TB - WD60EZAZ
 
Joined
Jun 16, 2019
Messages
4
Thanks
0
The article mentions that cache is disabled:
Bonnie++ can do asynchronous I/O, which means the local disk cache of the 4k drive is heavily utilized with a flush between ZFS commits once every 30 seconds. Since the disk cache can artificially inflate the results we choose to disable drive caches completely using Bonnie++ in synchronous test mode only. Syncing after each write will result in lower benchmark values, but the numbers will more closely resemble a server which is heavily loaded and using all of its RAM.
I don't know if LSI MegaRAID honors synchronous mode.
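
For reference, here is a minimal sketch (mine, not from the article) of what "synchronous test mode" means at the OS level - the file is opened with O_SYNC, so every write() must reach stable storage before returning, instead of landing in the page cache:

Code:
import os, time

def write_mb(path, mb, flags=0):
    """Write `mb` MiB to `path` and return throughput in MB/s."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | flags, 0o644)
    buf = b"\0" * (1 << 20)  # 1 MiB buffer
    start = time.monotonic()
    for _ in range(mb):
        os.write(fd, buf)
    os.close(fd)
    return mb / (time.monotonic() - start)

# Buffered writes mostly measure the page cache; O_SYNC forces stable storage.
print(f"buffered: {write_mb('test.bin', 256):.0f} MB/s")
print(f"O_SYNC:   {write_mb('test.bin', 256, os.O_SYNC):.0f} MB/s")
os.remove('test.bin')

Whether the LSI's battery-backed cache counts as "stable storage" for that purpose is exactly what I can't tell.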

The number of PCIe lanes is definitely important to look at, but should not matter for that single-drive test.

My point is that I want the HDDs to be the bottleneck, not the controller, CPU, etc. Do you think the cache can explain the observed phenomenon with files of 16GB?
If you know of sources where I can learn more about the performance of controllers, would you share them? I don't want to make decisions based on "brand X is good", "brand Y is bad".
 

Ericloewe

Not-very-passive-but-aggressive
Moderator
Joined
Feb 15, 2014
Messages
16,139
Thanks
3,867
The article mentions that cache is disabled:
And I don't believe it without a good explanation. I'm going to go out on a limb and assume it's using system calls and not taking over the OS and drivers like malware. There is no "disable the cache on that LSI SAS controller" system call, for a number of excellent reasons. The most the benchmark can do is issue sync writes. The cache on the RAID controller, if non-volatile, is considered an acceptable intermediate state from which the sync write can be returned, so it will almost certainly be in play here.

Do you think that cache can explain observed phenomenon on files with sizes of 16GB?
Yes, of course. Even just caching ZFS metadata (by coincidence, not integrated design) would provide a considerable boost even for files an order of magnitude or two larger than the cache.
 