Does drive type still matter? 512e vs 4Kn, SATA vs SAS

Exhorder

Explorer
Joined
Jul 12, 2019
Messages
66
Hi,
for 6x 10 TB RAID-Z2 system does it still matter:
* if I buy 512e or 4Kn drives? The 512e drives are usually cheaper and better available.
* if I buy SATA or SAS disks?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
For a long time there was an obsession with ashift=9 vs ashift=12, for various reasons. This has an impact on a variety of things, including space consumption and performance. Using ashift=9 on 4Kn drives is particularly bad for performance. Several different things have been done over the years to try to make this work correctly. I don't recall what the exact current strategy is, but I believe it tries to pick the optimal value.

SATA disks are generally cheaper and ZFS works fine with them. Going SAS puts you in the minority and creates some additional opportunities for trouble.
 

nephri

Dabbler
Joined
Sep 20, 2015
Messages
40
"SATA disks are generally cheaper and ZFS works fine with them "
This statements is true for new disks but on "used" market, this statement maybe inverted !

And i had never trouble with SAS disks
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I don't recall what the exact current strategy is, but I believe it tries to optimize correctly.
FreeNAS brute-forces things and defaults to ashift=12, which is correct 99% of the time and shouldn't be that bad the rest of the time. On the ZFS side, there have been some discussions about making this more flexible and more intuitive, but with little actual forward progress, as far as I've seen.
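As a side note, ashift is just the base-2 logarithm of the smallest allocation ZFS will make on a vdev, which is why 9 and 12 map directly onto the two sector formats. A quick sketch (the pool name and device path in the comments are examples, not from this thread):

```python
# ashift is log2 of the vdev's allocation block size
for ashift in (9, 12):
    print(f"ashift={ashift} -> {2 ** ashift} byte blocks")

# On a live FreeNAS box you could cross-check with, for example:
#   zdb -C tank | grep ashift    # what an existing pool was created with
#   smartctl -i /dev/ada0        # a 512e drive reports
#                                # "512 bytes logical, 4096 bytes physical"
```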
 

Exhorder

Explorer
Joined
Jul 12, 2019
Messages
66
For a long time there was an obsession with ashift=9 vs ashift=12, for various reasons. This has an impact on a variety of things, including space consumption and performance. Using ashift=9 on 4Kn drives is particularly bad for performance. Several different things have been done over the years to try to make this work correctly. I don't recall what the exact current strategy is, but I believe it tries to pick the optimal value.

SATA disks are generally cheaper and ZFS works fine with them. Going SAS puts you in the minority and creates some additional opportunities for trouble.
Yes, my understanding was that using ashift=12 is fine for 512e and 4Kn.

Also, recent mainboards come with 8, 10 or 12 SATA ports, so I'd just use these instead of a SAS controller.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Also, recent mainboards come with 8, 10 or 12 SATA ports, so I'd just use these instead of a SAS controller.
That's a rather simplistic statement. 8 is common on Intel C236 and C246 boards, 10 used to be common on LGA2011v3. AMD boards typically have 6 of somewhat more dubious quality than the Intel ones. Anything beyond that firmly puts you in either SAS controller territory (good) or weird and wonderful jellybean SATA controller territory (meh at best and atrocious at worst).

So I'd just use these instead of a SAS controller.
Just to be clear: SATA disks will work on SAS controllers, but SATA controllers cannot operate with SAS disks.
 

Exhorder

Explorer
Joined
Jul 12, 2019
Messages
66
Hi, Intel C621/C622 has 14 SATA ports. Not sure if these are OK for FreeNAS though.
 

nephri

Dabbler
Joined
Sep 20, 2015
Messages
40
In addition, on some motherboards some SATA ports are bridged differently from others, and therefore perform differently, which can ultimately affect the overall pool performance.

With a SAS controller (even if it's only driving SATA disks), you can avoid these issues.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Hi, Intel C621/C622 has 14 SATA ports. Not sure if these are OK for FreeNAS though.
Going from memory, that's a theoretical max in the framework of their flexible I/O thing. I don't think I've seen boards typically expose more than a handful of SATA channels plus maybe route one or two to M.2 slots because why not. At the C62x price range, SAS is going to be used for any serious storage that doesn't happen over PCIe anyway. You don't buy a Xeon Scalable system for your little home NAS that sits in a corner.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
It's like you're baiting someone to prove you wrong.
I'm going to say that a "little home NAS" by definition cannot have a CPU with a footprint similar to an entire Raspberry Pi. It would be a "big home NAS".
 

Exhorder

Explorer
Joined
Jul 12, 2019
Messages
66
Going from memory, that's a theoretical max in the framework of their flexible I/O thing. I don't think I've seen boards typically expose more than a handful of SATA channels plus maybe route one or two to M.2 slots because why not. At the C62x price range, SAS is going to be used for any serious storage that doesn't happen over PCIe anyway. You don't buy a Xeon Scalable system for your little home NAS that sits in a corner.
But these mainboards exist. For example:
* https://www.supermicro.com/en/products/motherboard/X10SRi-F It has 10 SATA ports from the C612. A very similar model with an additional SAS controller exists, too. Hence the question: any reason I should buy the more expensive SAS version if I don't need more than 10 SATA ports anyway?
* Or this one: https://www.supermicro.com/en/products/motherboard/X11SPM-TPF It has 12 SATA ports.

These mainboards are on my list for a possible Proxmox VM server.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
That's not Xeon Scalable, though. LGA 2011v3 seemed to have more products for the not-quite-small NAS niche. Still, a perfectly viable board.

Any reason I should buy the more expensive SAS version if I do not need more than 10 SATA ports anyway?
Well, SAS cabling might be neater and you need SAS for SAS disks. Otherwise, no.

Pretty cool board. I'm not too keen on Xeon Scalable because I can't even begin to understand Intel's CPU lineup, and I gave it a try. It's just not worth my time to research. If you find a good resource for choosing the right CPU, they'll probably make a fine system - also, please show me the resource, because I hate not knowing things.
 

nephri

Dabbler
Joined
Sep 20, 2015
Messages
40
Just to say (and I may be wrong):

The X10SRi-F uses the Intel C612 PCH, which provides the 10 SATA ports, all capable of 6 Gb/s.
But the PCH is linked to the CPU over DMI 2.0, which, if I understand correctly, is roughly a PCIe 2.0 x4 link.

According to https://en.wikipedia.org/wiki/Direct_Media_Interface this bus can achieve around 2 GB/s (in each direction?).

If you connect, for example, 10 SSDs, you could in theory load it with 6 Gb/s * 10 = 60 Gb/s = 7.5 GB/s, well past the PCH uplink limit.
So the PCH link to the CPU would be a bottleneck in this case?

On the other side, there are also 2 PCIe slots hanging off the PCH, and if you use them, it's maybe the same story.

For high-performance pools (meaning SSD ones), I always tried to avoid the PCH and preferred PCIe lanes attached directly to the CPU. Maybe I'm a bit paranoid.

PS: Even with a 12-SSD pool it's hard to get more than 2 GB/s read/write throughput out of a ZFS pool, so it may be enough for your pool.

Séb.
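The back-of-envelope numbers above can be checked quickly (a rough sketch; the 8b/10b encoding overhead is an extra detail not mentioned in the post):

```python
# Aggregate SATA line rate vs. the DMI 2.0 uplink of the C612 PCH
ports = 10
line_rate_gbps = 6.0                   # SATA III line rate per port, Gb/s

raw_gbs = ports * line_rate_gbps / 8   # 7.5 GB/s, as computed above
usable_gbs = raw_gbs * 0.8             # SATA uses 8b/10b encoding: ~6 GB/s payload
dmi2_gbs = 2.0                         # DMI 2.0 ~ PCIe 2.0 x4, per direction

# Even after encoding overhead, the uplink is the bottleneck by a wide margin
print(raw_gbs, usable_gbs, dmi2_gbs)
```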
 
Last edited:

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Yes, the PCH on LGA115x and LGA20xx platforms is limited to a x4 PCIe link to the CPU (some generations 2.0, some 3.0). That means potential bottlenecks exist, not just for SATA, but for everything hanging off the PCH, like networking, USB, other PCIe devices not attached to the CPU, etc.
 
Joined
May 10, 2017
Messages
838
Yes, the PCH on LGA115x and LGA20xx platforms is limited to a x4 PCIe link to the CPU (some generations 2.0, some 3.0).

Yes, and even with the DMI 3.0 platforms, at least the ones based on LGA1151, somewhat surprisingly the SATA controller is limited to 2000 MB/s max, so for example if you have an X11SSM with 8 SSDs (and even some fast disks) you can't get more than 250 MB/s per port when all are used simultaneously.
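That per-port figure falls straight out of dividing the controller cap across the busy ports (a quick check, taking the 2000 MB/s cap as stated above):

```python
controller_cap_mbs = 2000   # observed SATA controller limit on DMI 3.0 / LGA1151
ports_in_use = 8            # e.g. an X11SSM with all eight ports populated

# The cap is shared, so per-port throughput drops as ports get busy
per_port_mbs = controller_cap_mbs / ports_in_use
print(per_port_mbs)         # 250.0 MB/s per port when all are saturated
```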
 