Optimal disk setup?

Status
Not open for further replies.

johngillespie

Dabbler
Joined
Aug 18, 2014
Messages
13
Hi,

I have purchased 12 x 4TB WD RED drives with the intention of building a RAIDZ2 pool, but I'm getting a non-optimal message from the GUI.

What's wrong with this setup?
In what way is this not optimal? Is it a problem from a performance point of view or a wasted-space one?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
The warning is removed in the 9.3 alphas. The actual penalty isn't cut-and-dried, though. I'm still following the optimal configuration myself. ;)
 

johngillespie

Dabbler
Joined
Aug 18, 2014
Messages
13
Thanks for your replies. I've read the information at the link provided, but I'm not certain how I should be reading the spreadsheet.
What value should I be looking at in the "block size in sectors" column?

I'm using 4TB WD RED drives.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Well, that's part of the debate. See, you don't have control of the block size. You can set a maximum block size (bigger blocks are more efficient), but you can't actually force a block size at all. The block size depends entirely on the data as it's being written. So that pretty chart isn't as useful as you might think. For me, I'll be writing large 4GB files all at once, and I can bet that nearly 100% of the writes will be 128KB.
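
As a rough illustration of that last point, here is a small sketch (the 128 KiB default recordsize and the clean split are assumptions; real allocation also depends on compression and on how the application writes the file):

```python
# Illustrative sketch only: how a file's data is cut into ZFS records no
# larger than the dataset's recordsize (128 KiB by default). Real behavior
# also depends on compression, sync writes, and how the file is written.

def record_sizes(file_bytes, recordsize=128 * 1024):
    """Logical record sizes a file of this size would be stored as."""
    full, tail = divmod(file_bytes, recordsize)
    sizes = [recordsize] * full
    if tail:
        sizes.append(tail)          # only the last record can be smaller
    return sizes

# A large media file written sequentially ends up as essentially all
# 128 KiB records, which is why the spreadsheet is read at 128 KiB here.
records = record_sizes(4 * 1024**3)   # a 4 GiB file
print(f"{len(records)} records of {records[0] // 1024} KiB")   # 32768 records of 128 KiB
```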
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
RAIDZ2 shouldn't be bigger than 10 disks, and even then some people find that to be a bit wide.
 

SirMaster

Patron
Joined
Mar 19, 2014
Messages
241
The optimal number of disks in a RAIDZ2 vdev comes from two different sources of space overhead (when talking about 4K disks and using ashift=12 for full performance).

You can find the details about that here:

https://web.archive.org/web/2014040...s.org/ritk/zfs-4k-aligned-space-overhead.html

As you can see, 6 and 18 disks are the two widths at which both sources of overhead are minimized. Obviously 18 disks is too wide for RAIDZ2, so that really leaves 6 disks as the most optimal, but 8 and 10 are ultimately just fine too. Two vdevs of 6 disks each would be a fine setup for your 12 disks if you are willing to give 4 disks to parity.
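
For anyone who wants to see roughly where those numbers come from, here is a simplified, spreadsheet-style sketch of the two overhead sources for a full 128 KiB record with ashift=12. It is only an approximation of the linked article's model, not the exact on-disk allocator:

```python
# Rough sketch of the two RAIDZ2 overhead sources for a full 128 KiB record
# with ashift=12 (4 KiB sectors): extra parity rows when the 32 data sectors
# don't split evenly across the data disks, plus padding that rounds every
# allocation up to a multiple of (parity + 1) sectors. Illustration only.
import math

SECTOR = 4 * 1024
RECORD = 128 * 1024
PARITY = 2                                   # RAIDZ2

data_sectors = RECORD // SECTOR              # 32 data sectors per full record

for total in (6, 8, 10, 12, 18):
    data_disks = total - PARITY
    rows = math.ceil(data_sectors / data_disks)           # stripe rows per record
    parity = rows * PARITY                                 # source 1: parity per row
    padding = (-(data_sectors + parity)) % (PARITY + 1)    # source 2: p+1 rounding
    actual = data_sectors / (data_sectors + parity + padding)
    nominal = data_disks / total                           # the naive expectation
    print(f"{total:2d} disks: {actual:.1%} usable vs {nominal:.1%} nominal "
          f"({nominal - actual:.1%} lost to alignment)")
```

In this simplified model, 6 and 18 disks lose nothing to alignment, 8 and 10 lose a few percent, and 12 disks (the original plan) loses the most of the widths shown.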
 

johngillespie

Dabbler
Joined
Aug 18, 2014
Messages
13
RAIDZ2 shouldn't be bigger than 10 disks, and even then some people find that to be a bit wide.

What's the risk associated with going against that recommendation?

Based on what I have read, if I were to follow all the recommendations, I should stick to identically sized and spec'd vdevs, which could be a problem when I want to expand.
This being a home server, I'm trying to find the right balance between building it correctly and not spending much more than I already have :)

I'm using a 24-bay Supermicro case.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
If you go too wide you end up writing extremely small amounts of data across a large number of disks, and this hurts performance. Also, when you write small amounts of data you might write 4K of data but then have to write several times that in parity. Kind of inefficient when your parity takes several times the space your actual data does. Throughput still increases for linear writes and linear reads (which really aren't linear in ZFS, but hopefully the thought is still conveyed), but when you need more than the smallest amount of IOPS your pool performance is basically crap. As pools get bigger you end up needing more IOPS just because your file system metadata needs them, so directory listings start getting slower and slower. It's just not a pleasant experience in the long term, even though early on it looks like it'll be fine. I did tests with a 10-disk RAIDZ2, an 11-disk RAIDZ3, and an 18-disk RAIDZ3, and I was convinced that the recommendations hold.
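
A simplified sketch of the parity math behind that small-write point, assuming ashift=12 (4 KiB sectors) and a write small enough to fit in a single stripe row; the real RAIDZ allocator is more involved, so treat the numbers as illustrative:

```python
# Simplified model of a small write on RAIDZ2 with ashift=12 (4 KiB sectors):
# even one data sector drags in two parity sectors, and every allocation is
# rounded up to a multiple of (parity + 1) sectors. Illustration only.
import math

SECTOR = 4 * 1024
PARITY = 2                                      # RAIDZ2

def sectors_on_disk(write_bytes):
    data = math.ceil(write_bytes / SECTOR)      # data sectors needed
    raw = data + PARITY                         # plus parity for the single row
    return raw + (-raw) % (PARITY + 1)          # pad to a multiple of p + 1

for write in (4 * 1024, 16 * 1024):
    used = sectors_on_disk(write)
    print(f"{write // 1024:>3} KiB write -> {used * SECTOR // 1024} KiB on disk")
    # 4 KiB -> 12 KiB on disk, 16 KiB -> 24 KiB on disk
```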

I did an 18-disk RAIDZ3 because I wanted to see whether the recommendations would matter even for a home user. I'm a single user and live alone, and yet I was disgusted with the performance. My new pool (which is about to start getting filled with data) is a 10-disk RAIDZ2. I learned my lesson and I'll never go wider than the "recommendations" again. It was just too much of a pain, and once you've realized what you've done, undoing it requires you to destroy the pool. Obviously that isn't something most people can "easily" do.

Yes, this does put some limitations on how you expand. Sorry, but that's basically "life" with ZFS. ZFS isn't designed for home users; it's for enterprise-class users who are willing to spend money to get ZFS' great features. The "limitations" you have are mostly money problems. It's expensive to get it all right. For companies going with ZFS, the data is worth far more than the money spent on the server. I've joked that BTRFS is the "poor man's ZFS" because it has the potential to be a good alternative to ZFS for home users (and even to challenge some enterprises that want ZFS-like quality). It's supposed to let you do things like add single disks, take single disks out, mix and match disk sizes, and all sorts of other stuff I'm sure I don't even know of. Only time will tell, though, as BTRFS isn't anywhere near as good as ZFS currently. BTRFS may grow and compete with ZFS, or it may die before it gets anywhere.
 

Market Guru

Dabbler
Joined
Aug 10, 2014
Messages
17
I am doing a cost analysis on the number and type of drives I should buy.

When you say no more than 10, for RAIDZ2 (p = 2) do you mean:
1. (n + p) = (8 + 2) = 10, or
2. (n + p) = (10 + 2) = 12?

Thanks
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I mean a 10-disk RAIDZ2 or an 11-disk RAIDZ3. Another way to look at it is 8 data disks plus whatever parity you choose.
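
Read that way, the rule of thumb works out as below (a trivial sketch; treating "8 data disks" as the ceiling is one reading of the guideline, not a hard limit):

```python
# The rule of thumb above, read as "at most 8 data disks per vdev":
# recommended maximum width = data disks + parity disks.
MAX_DATA = 8
for level, parity in (("RAIDZ1", 1), ("RAIDZ2", 2), ("RAIDZ3", 3)):
    print(f"{level}: {MAX_DATA} data + {parity} parity = {MAX_DATA + parity} disks")
```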
 

craigdt

Explorer
Joined
Mar 10, 2014
Messages
74
cyberjock, do you have any experience with the performance of the three scenarios described below? I'm curious to know what you personally think of option 1 in particular:

1. Start off with a zpool containing one 6-drive RAIDZ2 vdev, and add a second 6-drive RAIDZ2 vdev later on when more space is required.

2. Create a zpool containing a single 10-drive RAIDZ2 vdev.

3. Create a zpool containing a single 11-drive RAIDZ3 vdev.

Thanks
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'm not sure what you're wanting to know... option 1 gives you lower throughput (throughput is based on the number of disks you have in the pool) and higher IOPS (because of more vdevs).

It's really just about balancing what you need. If you are trying to maximize throughput only, then you want more disks in the vdevs (and more vdevs too). If you want more IOPS, you want as many vdevs as you can get.

It's just trading one for another to an extent.
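
For a rough feel for the trade-off, here is a back-of-the-envelope sketch of the three options. The per-disk figures are assumptions, not measurements, and option 1 is shown with both vdevs in place; while only the first 6-disk vdev exists, its streaming figure is roughly half of what is shown:

```python
# Back-of-the-envelope comparison of the three options, using assumed
# per-disk figures (~150 MB/s streaming, ~100 random IOPS). Streaming
# throughput scales roughly with data disks; random IOPS scale roughly
# with the number of vdevs.
DISK_MBPS = 150     # assumed, not measured
DISK_IOPS = 100     # assumed, not measured

layouts = [
    ("Option 1: 2 x 6-disk RAIDZ2 (both vdevs in place)", 2, 6, 2),
    ("Option 2: 1 x 10-disk RAIDZ2", 1, 10, 2),
    ("Option 3: 1 x 11-disk RAIDZ3", 1, 11, 3),
]

for name, vdevs, width, parity in layouts:
    data_disks = vdevs * (width - parity)
    print(f"{name}: ~{data_disks * DISK_MBPS} MB/s streaming, "
          f"~{vdevs * DISK_IOPS} random IOPS")
```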
 