How fast should a 6x12TB Array @ Raid Z2 be? + SMB speed goes up and down

Grinchy

Explorer
Joined
Aug 5, 2017
Messages
78
Hello,

I just switched my system to 6x WD120EDAZ 12TB drives.

It works great, but I'm a little uncertain whether the read/write speeds are slower than they should be.


Tried it with dd and got ~550MB/s on my Z2 array.

The GUI showed me ~125MB/s for each disk. But shouldn't this be at least ~150MB/s per drive? 125MB/s seems a bit slow for 12TB drives. (57% of the 42TB is used.)

Code:
# dd if=/dev/zero of=/mnt/ZFS/test/ddfile bs=2048k count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 36.622068 secs (572647073 bytes/sec)
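
One caveat with this kind of test: if lz4 compression is enabled on the dataset, the /dev/zero input compresses to almost nothing and dd will overstate the pool's real throughput. A minimal sketch of a compression-off test (the dataset name is just an example):

Code:
# Throwaway dataset with compression off, so /dev/zero isn't compressed away
zfs create -o compression=off ZFS/ddtest
dd if=/dev/zero of=/mnt/ZFS/ddtest/ddfile bs=2048k count=10000
# Clean up when done
zfs destroy ZFS/ddtest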




There's also a "problem" when sending files to my NAS over SMB (Windows 10).

Just tried it with a single ~80GB file, and the speed goes up and down like crazy. Every second it drops to ~180MB/s and then climbs back to ~550MB/s.

[Attachment 1.jpg: transfer-speed graph swinging between ~180MB/s and ~550MB/s]

I also tried it with a single 256GB Evo 960 in my NAS and it shows the same effect, just not as pronounced as with the Z2 array.

[Attachment 2.jpg: transfer-speed graph for the single SSD]
 
Last edited:

Alecmascot

Guru
Joined
Mar 18, 2014
Messages
1,175
Are these SMR drives?
If so, all bets are off.
 

HolyK

Ninja Turtle
Moderator
Joined
May 26, 2011
Messages
653
@Grinchy Check if the label says R/N: US7SAM120 at the bottom right. If so, it is an Ultrastar DC HC520 (formerly Ultrastar He12, per the WD site), and in that case it is the same base model as the WD120EMAZ.

The difference seems to be:
M = Branded/WD Branded
D = Enterprise Self-Encrypting Drive (SED)/WD/WD Re™ // info based on the updated model PDF; the old version says "D" means Raptor, which is nonsense.

Anyway, it is a bit odd, because per the datasheet the HC520 is either HUH721212ALE600/4 or HUH721212ALN600/4 (512e or 4Kn sector format), but neither of these has "Encryption" defined in the part-number table.

Confusing... anyway, if the EDAZ is an SED, that might explain the slower speeds...
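
If you want to check directly from the FreeNAS box whether the drive reports itself as self-encrypting, something along these lines should work (the device name ada0 is just an example; sedutil is only available if it is installed on your build):

Code:
# Drive identity (model/firmware) - device name is an example
smartctl -i /dev/ada0
# Security feature state as reported by the drive
camcontrol identify ada0 | grep -i security
# Scan for TCG/OPAL self-encrypting drives (if sedutil is installed)
sedutil-cli --scan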
 
Last edited:

Grinchy

Explorer
Joined
Aug 5, 2017
Messages
78
Are these SMR drives?
If so, all bets are off.

It should be PMR. There's no real info @ WD, but it seems to be PMR.

@Grinchy Check if the label says R/N: US7SAM120 at the bottom right. If so, it is an Ultrastar DC HC520 (formerly Ultrastar He12, per the WD site), and in that case it is the same base model as the WD120EMAZ.

The difference seems to be:
M = Branded/WD Branded
D = Enterprise Self-Encrypting Drive (SED)/WD/WD Re™ // info based on the updated model PDF; the old version says "D" means Raptor, which is nonsense.

Anyway, it is a bit odd, because per the datasheet the HC520 is either HUH721212ALE600/4 or HUH721212ALN600/4 (512e or 4Kn sector format), but neither of these has "Encryption" defined in the part-number table.

Confusing... anyway, if the EDAZ is an SED, that might explain the slower speeds...


I have now killed my array and done a lot of testing.

The drives seem to be really fast. With a clean drive I get about 200MB/s under Windows. Also, all 6 drives are about the same speed, so there shouldn't be a bad drive.

Even if I add them as single drives in FreeNAS they get ~190MB/s write speed.

The "funny" part about this is, that the moment i put them together in a Z2 Raid the won't write with more than ~125mb/s. Reads are about ~180mb/s so.

I also tried 1M and 128K recordsizes, and with dd it's about the same speed.
But when transferring big files over SMB, a 1M recordsize gets up to ~160MB/s per drive while 128K won't go faster than ~125MB/s per drive. I don't even understand why there's a difference between dd and SMB!


I really don't know what to do anymore :-(
Does anyone have tips, or an idea what the problem could be?
Or is someone using a similar setup who could give me some info on how fast it should be?
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,449
The drive WD120EDAZ looks identical to my HGST 10TB 0S04037:
HGST Deskstar NAS 3.5" 10TB 7200 RPM 256MB Cache SATA 6.0Gb/s High-Performance Hard Drive for Desktop NAS Systems

I think mine were doing about 180MB/s or even better.

And indeed, they perform really well.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,449
@Grinchy, your dd test doesn't test the speed of the entire array as a single unit. The result is the sum of the throughput of the data disk and the parity disks.
For instance, a speed of 572MB/s is approximately equal to 3 x 190MB/s (throughput for 1 data disk + 2 parity disks in RAIDZ2).
What is misleading is that writing or reading to the pool will never exceed 190MB/s; however, all the transactions within the pool (because it is composed of 1 vdev) will aggregate to the sum of 3 disks.

What I find surprising is that Windows reports the speed at 550MB/s. Are you on a 10GbE network?
From what I can tell, your FreeNAS box is caching the first portion of the transfer, and as the data is written to disk at 190MB/s, the cache fills up and the transfer throttles as a result.
I think the system is behaving exactly as expected.
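
If that is what's happening, the sawtooth pattern should line up with ZFS transaction groups filling and flushing. You can at least see how much dirty (not-yet-flushed) data ZFS is willing to buffer; the raise below is purely illustrative, so tune with care:

Code:
# How much dirty data ZFS will buffer before throttling writers
sysctl vfs.zfs.dirty_data_max
# Illustrative only: raise the buffer to 8GiB to smooth out bursts
sysctl vfs.zfs.dirty_data_max=8589934592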
 

HolyK

Ninja Turtle
Moderator
Joined
May 26, 2011
Messages
653
It should be PMR. There's no real info @ WD, but it seems to be PMR.
- As per this statement, all external SMR drives support TRIM. The DC HC520 does not support TRIM (per S.M.A.R.T.), so it should be PMR.
- Reddit also seems to confirm the same (after a chat with WD support).
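
You can double-check the TRIM point straight from the OS, e.g. (device name is an example):

Code:
# The "data set management (DSM/TRIM)" line should read "no" on a PMR HDD
camcontrol identify ada0 | grep -i "data set management"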

I have now killed my array and done a lot of testing.

The drives seem to be really fast. With a clean drive I get about 200MB/s under Windows. Also, all 6 drives are about the same speed, so there shouldn't be a bad drive.

I really don't know what to do anymore :-(
Does anyone have tips, or an idea what the problem could be?
Or is someone using a similar setup who could give me some info on how fast it should be?
I have 6x WD120EMAZ. Four of them have already finished a full badblocks run; two more to go (80%), plus a long SMART test. Once they're done I can give it a try without GELI, test it, and then rebuild it as I need. But I don't have a 10G network, so I can't test SMB (I have 1G only).

How about fewer than 6 drives in Z2? Any difference when you go with 3, 4, or 5 disks? I know it's a nonsense setup, but for test purposes...

Anyway, please post your full system specs, mainly the motherboard, CPU, and controller (if there is an extra one).
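
Also, while one of your test transfers is running, watch the per-disk throughput directly on the box; if one disk sits noticeably busier or slower than the rest, that narrows things down:

Code:
# Live per-disk I/O stats, physical devices only, 1-second refresh
gstat -p -I 1s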
 

Grinchy

Explorer
Joined
Aug 5, 2017
Messages
78
Did a lot of testing, and this is the per-drive speed I get with different settings. It's MB/s per drive, because it's hard to read the overall speed with SMB going up and down like crazy.


6x Z1 @ 128K = 115 MB/s
6x Z1 @ 1M = 150 MB/s
5x Z1 @ 128K = 120 MB/s
5x Z1 @ 1M = 180 MB/s
4x stripe @ 128K = 190 MB/s
4x stripe @ 1M = 200 MB/s
6x stripe @ 128K = maxed out my 10G connection...
6x Z2 @ 1M = 165 MB/s
6x Z2 @ 128K = 120 MB/s


So it seems that with Z1 and Z2 it's impossible to get the drives' full performance.

With Z2 and a 1M recordsize it's more than 40MB/s per drive faster than with 128K.
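
For reference, recordsize is set per dataset, roughly like this (dataset name as in my dd test above; note it only applies to newly written files, existing data keeps its old record size):

Code:
# 1M recordsize for new writes on this dataset
zfs set recordsize=1M ZFS/test
# Verify the active value
zfs get recordsize ZFS/test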
 

ChrisReeve

Explorer
Joined
Feb 21, 2019
Messages
91
What I find surprising is that Windows reports the speed at 550MB/s. Are you on a 10GbE network?
From what I can tell, your FreeNAS box is caching the first portion of the transfer, and as the data is written to disk at 190MB/s, the cache fills up and the transfer throttles as a result.
I think the system is behaving exactly as expected.
This is incorrect. You can write to a RAIDZ pool faster than single-disk write speed. For my pool (10x10TB WD EMAZ drives), I see stable file transfers (writing to the server from a Windows 10 client) of 600MB/s for hundreds of GB of data. This is on a server without any cache drives, with 64GB of RAM.


One specific test (over SMB) that I did:
269,144MB (13x 1080p movies)
Time: 7:20 (440 seconds)
Average speed: 612MB/s


I am unsure where this busted myth comes from, but for single-stream read/write, speeds are not limited by the usual rule of single-disk write speed, or 6 operations per write (which for me would give a theoretical maximum of 10/6 * 190MB/s = 317MB/s, which is obviously wrong).

Edit: Source confirming this: https://www.ixsystems.com/blog/zfs-pool-performance-2/

For streaming writes and reads, speed is limited by N-p, where N is the total number of drives and p is the number of parity drives. For a pool of 6 drives in RAIDZ2, the limit should be (6-2) = 4x single-drive performance, which works out to about 760MB/s.

Have you tried SMB tunables for 10Gbit speeds? Google it; 45 Drives has a nice article. They gave me a performance increase.
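
Roughly the kind of auxiliary smb4.conf parameters that article suggests; the exact values below are illustrative, so benchmark before and after:

Code:
# Example auxiliary SMB parameters (values are illustrative, not gospel)
server multi channel support = yes
socket options = TCP_NODELAY SO_RCVBUF=131072 SO_SNDBUF=131072
use sendfile = yes
aio read size = 16384
aio write size = 16384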

That being said: these numbers are theoretical, and you will see losses and bottlenecks. For my system, I see sustained write speeds around 600-700MB/s, compared to ~1400MB/s theoretical for the pool ((10-2) x 180MB/s). The 10GbE link caps this at 1250MB/s minus overhead, of course, but I believe the main bottleneck is encryption and compression performance, plus SMB being mostly single-threaded and wanting high CPU clocks.

My issue, however, is read speeds. Reads of "cold" files (not stored in ARC) are around 300-400MB/s. If I transfer the same file several times, forcing it into ARC, I see sustained read speeds (on that single file) of 1.10GB/s, showing my network can handle full 10Gb/s.
 
Last edited: