Building a 45 x 10TB FreeNAS Server, Afraid of rebuild times.

Status
Not open for further replies.

fullspeed

Contributor
Joined
Mar 6, 2015
Messages
147
I've been running eight ~200TB boxes with 6TB disks in RAIDZ3 (14 disks per vdev, 3 vdevs each) and they have been working quite well, although rebuild times are obviously quite lengthy.

I'm afraid that if I do 14 disks per vdev again and have to replace a disk, the rebuild time will become completely insane and untenable.

From what I can see my options for my 45 disk array are:

3 × 14-disk RAIDZ3 vdevs: 330TB usable, up to 3 spares
4 × 11-disk RAIDZ3 vdevs: 320TB usable, up to 1 spare
4 × 10-disk RAIDZ3 vdevs: 280TB usable, up to 5 spares
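The figures above follow directly from the RAIDZ arithmetic: usable space is (width − parity) × vdevs × disk size, and spare bays are whatever the vdevs don't occupy. A quick sketch, assuming 10TB disks, a 45-bay chassis, and raw TB (real usable space will be somewhat lower after ZFS metadata and padding overhead):

```python
DISK_TB = 10   # drive size in TB
BAYS = 45      # chassis bays
PARITY = 3     # RAIDZ3 parity disks per vdev

results = {}
for width, vdevs in [(14, 3), (11, 4), (10, 4)]:
    usable = (width - PARITY) * vdevs * DISK_TB   # data disks x drive size
    spares = BAYS - width * vdevs                 # leftover bays for standby drives
    results[(width, vdevs)] = (usable, spares)
    print(f"{vdevs} x {width}-disk RAIDZ3: {usable} TB usable, {spares} spare bay(s)")
```

Nothing FreeNAS-specific here, just the bay math behind the three options.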
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
14 is a bit wider than generally recommended, for any vdev layout.

When you say "spares", do you mean hot spares? Do you really need RAIDZ3 and hot spares?

Your options are almost unlimited. A couple of obvious ones are 5 × 9-disk RAIDZ3, or 7 × 6-disk RAIDZ2 plus 3 hot spares. More, smaller vdevs should deliver faster resilver times.
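For comparison, those two layouts work out as follows, assuming 10TB disks in a 45-bay chassis (raw TB, ignoring ZFS overhead):

```python
DISK_TB = 10   # drive size in TB
BAYS = 45      # chassis bays

layouts = [
    # (label, vdev count, vdev width, parity disks per vdev)
    ("5 x 9-disk RAIDZ3", 5, 9, 3),
    ("7 x 6-disk RAIDZ2", 7, 6, 2),
]
results = {}
for name, vdevs, width, parity in layouts:
    usable = (width - parity) * vdevs * DISK_TB   # data disks x drive size
    spares = BAYS - vdevs * width                 # bays left for hot spares
    results[name] = (usable, spares)
    print(f"{name}: {usable} TB usable, {spares} spare bay(s)")
```

So the RAIDZ2 option trades ~50TB of usable space (versus 5 × 9 RAIDZ3) for narrower vdevs and room for the 3 hot spares.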
 

fullspeed

Contributor
Joined
Mar 6, 2015
Messages
147
Well, in this context 'max spares' means the maximum number of standby drives I can have sitting in the chassis. Do I need them? No, but I want them there so I can initiate a rebuild immediately and then pull the old drive whenever I get to the datacenter.

I ended up going with 4 × 10-disk RAIDZ3, which should shave 30-40% off the time to rebuild a drive compared to 14-wide, and I still have plenty of usable space.
 

Vito Reiter

Wise in the Ways of Science
Joined
Jan 18, 2017
Messages
232
DO NOT put more than 11 drives in a vdev. For reasons that aren't well understood, there have been a lot of reported issues with 11+ drives in RAIDZ1/2/3. Given that you have a ton of space and are using RAIDZ3, I assume your data is important. I would keep your vdevs at 7-11 drives in RAIDZ3 and stay away from anything over 11; you wouldn't want to lose all your data. Also note that more, smaller vdevs give you higher total disk-failure tolerance and quicker rebuild times.
 

fullspeed

Contributor
Joined
Mar 6, 2015
Messages
147
DO NOT put more than 11 drives in a vdev. For reasons that aren't well understood, there have been a lot of reported issues with 11+ drives in RAIDZ1/2/3. Given that you have a ton of space and are using RAIDZ3, I assume your data is important. I would keep your vdevs at 7-11 drives in RAIDZ3 and stay away from anything over 11; you wouldn't want to lose all your data. Also note that more, smaller vdevs give you higher total disk-failure tolerance and quicker rebuild times.

Oh OK, it's fine, I don't have a lot of data on these arrays... only 1.5 petabytes.

Jokes aside, my uptime on these beasts is a year now and performance has been solid (not with CIFS, because it's garbage, but over FTP/SSH). I've lost disks, but at an expected rate. Rebuild times are painful, but having lots of spares on standby plus RAIDZ3 has saved me some white hairs.
 

Vito Reiter

Wise in the Ways of Science
Joined
Jan 18, 2017
Messages
232
Haha, I'm just looking out, man. I once ran a 2TB pool on 6GB of RAM, under the 8GB minimum, thinking 'I only have 2TB', and one morning the entire pool had disappeared and wouldn't mount again. The FreeNAS devs aren't great at explaining why the minimum hardware is where it is, but they sure mean it when they say minimum.
 