Resilvering speeds with 10TB drives

ChrisReeve

Explorer
Joined
Feb 21, 2019
Messages
91
Hi

I am currently planning to upgrade my NAS to either 9 or 10 white-label WD Reds (shucked from WD Easystore 10TB external drives). I plan to run one large RAIDZ2 pool, giving around 65 TB of usable space (rough math at the end of this post). While planning this, I am trying to work out the failure probabilities, but I have no idea what to expect in terms of resilver performance (MB/s).

So, with the following specs, what kind of speeds should I expect?

Motherboard: Supermicro X9SRL-F
CPU: Xeon E5-2630L v2
RAM: Hynix 24GB (6x4GB sticks)
Drives: 10x 10TB white-label WD Reds

Also, I might upgrade my CPU and RAM together with the drives, to a Xeon E5-2650 v2 and 64GB of RAM (4x16GB sticks, with the option to expand to 128GB later). Will this significantly affect resilver performance?

Edit: If it matters, the vast majority of files on the pool are large video files, with sizes ranging from 100MB to 60GB per file. I could also stop using the server entirely while resilvering, if that would significantly improve performance.
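
For reference, here is the rough capacity math behind the 65 TB figure (a back-of-the-envelope sketch; the ~10% overhead factor is my assumption, not a measured number):

```python
# Rough capacity estimate for the planned pool.
# Assumptions: 10x 10TB drives, RAIDZ2 (2 parity drives),
# and ~10% lost to ZFS overhead/reservations (a guess).
drives, drive_tb, parity = 10, 10, 2
data_tb = (drives - parity) * drive_tb      # 80 TB of raw data space
data_tib = data_tb * 1e12 / 2**40           # ~72.8 TiB (decimal TB -> TiB)
print(f"~{data_tib * 0.9:.0f} TiB usable")  # ~65 TiB
```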
 

rknaub

Explorer
Joined
Jan 31, 2017
Messages
75
I’m about to do the same, except I’ll be going from 10x 4TB drives to 10x 10TB. I'm currently using RAIDZ1 and will probably go to RAIDZ2. My plan is to install the 10 new drives, find a way to temporarily power them all, create a new pool, copy everything over (roughly as sketched below), then destroy the old pool and sell the 4TB drives or something.
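
Something like this is what I have in mind for the copy step (an untested sketch; the pool and snapshot names are placeholders, and it assumes both pools are imported at the same time):

```python
import subprocess

# Placeholder names -- substitute your actual pools.
snap = "oldpool@migrate"

# Snapshot the old pool recursively, then replicate everything
# (datasets, snapshots, properties) into the new pool in one stream.
subprocess.run(["zfs", "snapshot", "-r", snap], check=True)
subprocess.run(f"zfs send -R {snap} | zfs recv -dF newpool",
               shell=True, check=True)
```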
 

nemisisak

Explorer
Joined
Jun 19, 2015
Messages
69
There are a huge number of factors that affect resilvering a drive beyond the number of disks and disk size, including pool usage, disk speed, disk quality, fragmentation, activity during the process, marginal or failing disks, etc.

I believe FreeBSD also updated the resilvering algorithm not that long ago, which should speed things up, though I haven't had a reason to test it yet. Any number would therefore be pure conjecture, but around 30 hours would not be surprising as an average (see the back-of-the-envelope check below).
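
For a sense of where a number like that comes from, a quick sanity check (assuming the rebuild averages roughly a single disk's sequential rate, ~100 MB/s, over a full 10TB drive; both figures are assumptions, not measurements):

```python
# Sanity check of the ~30 hour guess: a full 10TB drive
# rebuilt at an assumed ~100 MB/s average, end to end.
drive_bytes = 10e12   # 10 TB drive (decimal)
avg_rate = 100e6      # bytes/s, assumed average throughput
print(f"~{drive_bytes / avg_rate / 3600:.0f} hours")  # ~28 hours
```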
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Edit: If it matters
Everything matters. You need a ton of IOPS to get all the block pointers and a ton of sequential read bandwidth to read the data. It'll depend hugely on your specific workload - you mention large files, so that sounds to me like low-ish fragmentation, particularly if you're using large blocks.

On my systems, with a decent mix of large and small files, I see something like 2 TB/hour resilvers.
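
As a rule of thumb, you can turn a rate like that into an estimate by dividing the pool's allocated data by it (a sketch; the 50 TB used figure is a placeholder, not anyone's actual pool):

```python
# Resilver time estimate from a pool-wide throughput figure.
used_tb = 50        # placeholder: allocated data in the pool, TB
rate_tb_per_hr = 2  # observed on my systems; yours will vary
print(f"~{used_tb / rate_tb_per_hr:.0f} hours")  # ~25 hours
```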
 