What's the best ZFS raidz1 configuration for 8 disks

Status
Not open for further replies.

jlray

Cadet
Joined
Jul 3, 2012
Messages
9
I read from the wiki that ZFS raidz1 starts with 3 disks and works best with 8 disks.

If I have 8 disks and I don't care how many volumes I have, what is the best scenario?
1. create one ZFS raidz1 volume over 8 disks
2. create one ZFS raidz1 volume over 3 disks; create another ZFS raidz1 volume over 5 disks

Any pros and cons?

Thanks.
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
I read from the wiki that ZFS raidz1 starts with 3 disks and works best with 8 disks.
You misunderstood. The optimal raidz1 vdevs are 3, 5 or 9 disks. IMO, 9 disks is at least 4 disks too many for a single-parity array.

If I have 8 disk and I don't care how many volumes to have, what is the best scenario?
1. create one ZFS raidz1 volume over 8 disks
2. create one ZFS raidz1 volume over 3 disks; create another ZFS raidz1 volume over 5 disks
What are your storage requirements and why do I keep asking this question?

Without any additional information I would say create a raidz2 volume of 6 disks and keep the other 2 as cold spares.
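To make that concrete, here's a rough sketch of what a 6-disk raidz2 pool looks like at the command line. The pool name "tank" and the device names (da0..da5) are placeholders; adjust them to your system. In FreeNAS you would normally do this through the GUI volume manager, this just shows what happens underneath.

```shell
# Sketch only -- "tank" and da0..da5 are placeholder names.
# Create a 6-disk raidz2 pool: any two disks can fail without data loss,
# and usable capacity is (6 - 2) = 4 disks' worth.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# Verify the layout and health of the new pool.
zpool status tank
```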
 

jlray

Cadet
Joined
Jul 3, 2012
Messages
9
You misunderstood. The optimal raidz1 vdevs are 3, 5 or 9 disks. IMO, 9 disks is at least 4 disks too many for a single-parity array.

What are your storage requirements and why do I keep asking this question?

Without any additional information I would say create a raidz2 volume of 6 disks and keep the other 2 as cold spares.

Hi PaleoN,

Thanks for your reply. I'd like to get the maximum space out of my 8 disks. Speed/performance is not a concern.

I will use the storage for home sharing, and that's why I don't think 2 disks will fail at the same time, since the load is not high. Or am I wrong here?
 

sska

Cadet
Joined
Jul 13, 2012
Messages
4
I don't think 2 disks will fail at the same time, since the load is not high. Or am I wrong here?
Not quite. During a replacement the load will be high, and there is a chance of losing all your data.
I personally have replaced two disks in my pool; that didn't happen and I'm happy. But a single disc replacement in my 4x1.5TB pool took 16-20 hours, and anything can happen in that time.
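During those long replacements you can watch the rebuild (resilver) progress. Assuming a pool named "tank" (a placeholder), something like the following works; the exact output format varies between versions.

```shell
# Check pool health and resilver progress after a disk replacement.
zpool status tank
# While a resilver is running, the output includes a "scan:" line showing
# the percentage done and an estimated time remaining.
```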
 

jlray

Cadet
Joined
Jul 3, 2012
Messages
9
Not quite. During a replacement the load will be high, and there is a chance of losing all your data.
I personally have replaced two disks in my pool; that didn't happen and I'm happy. But a single disc replacement in my 4x1.5TB pool took 16-20 hours, and anything can happen in that time.

Then I will probably go with raidz2, so I can survive two discs failing without data loss and still get 6 discs' worth of space. How does that sound?

Currently I'm using one 3TB disc in UFS and one 3TB disc in ZFS, just to get myself familiar with FreeNAS. I found the CIFS share is faster on UFS than on ZFS (write speed 90+MB/s vs 60+MB/s). Is this expected? Is it because I'm using a single-disc ZFS pool? I'm using an onboard AMD CPU with 8GB of memory.
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
Then I will probably go with raidz2, so I can survive two discs failing without data loss and still get 6 discs' worth of space. How does that sound?
It sounds fine. I would still have at least 1 cold spare on hand for when you do get a disk failure.

Currently I'm using one 3T disc in UFS and one 3T disc in ZFS, just to get myself familier with FreeNAS.
Rebuild times are way too long for 3TB disks. I would only consider running them in a mirror or double-parity (raidz2).

I found the CIFS share is faster on UFS than on ZFS (write speed 90+MB/s vs 60+MB/s). Is this expected? Is it because I'm using a single-disc ZFS pool? I'm using an onboard AMD CPU with 8GB of memory.
I didn't see it with my setup, both maxed out at 70MB/s. :( It can depend on what version of FreeNAS you are using and how you performed the tests. This [thread=5338]sticky[/thread] has some CIFS tuning settings. Do note that not all of those settings are necessary.
 

jlray

Cadet
Joined
Jul 3, 2012
Messages
9
It sounds fine. I would still have at least 1 cold spare on hand for when you do get a disk failure.


Rebuild times are way too long for 3TB disks. I would only consider running them in a mirror or double-parity (raidz2).

I didn't see it with my setup, both maxed out at 70MB/s. :( It can depend on what version of FreeNAS you are using and how you performed the tests. This [thread=5338]sticky[/thread] has some CIFS tuning settings. Do note that not all of those settings are necessary.

By cold spare, you mean having the 9th disk on hand but not installed in the system, so that it could be used to replace a failed disk should one fail, right?
 

sska

Cadet
Joined
Jul 13, 2012
Messages
4
By cold spare, you mean having the 9th disk on hand but not installed in the system, so that it could be used to replace a failed disk should one fail, right?
I think so, yes.
The term cold spare refers to a component, such as a hard disk, that resides within a computer system but requires manual intervention in case of component failure. A hot spare engages automatically, but a cold spare might require configuration settings or some other action to engage it.
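In ZFS terms, the manual intervention for a cold spare is a `zpool replace`. A hypothetical sketch, assuming a pool named "tank" where disk da3 has failed and a spare da8 is available (all names are placeholders):

```shell
# Cold spare: after the spare disk (da8) is physically available, you
# manually tell the pool to rebuild onto it in place of the failed da3.
zpool replace tank da3 da8

# Hot spare, by contrast: add the spare to the pool up front so ZFS can
# engage it automatically when a disk fails.
zpool add tank spare da8
```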
 