I've been trying to understand why there are recommended numbers of drives for ZFS RAIDz vdevs. I found this thread: What number of drives are allowed in a RAIDZ config? which suggests this is because ZFS writes 128 KiB blocks, and drive counts other than the recommended 2^n data drives plus parity slow the system down because it can't split a block evenly across the disks. This appears to be confirmed in this thread: Weird raidz1 and raidz2 performance with 4 drives - any explanation?
One of the things I hope to test with my benchmark project is the impact of this at larger array sizes, but in the meantime, here is the matrix of per-drive write sizes (KiB written to each data drive per 128 KiB block) for all RAIDz levels. Good array sizes (2^n data drives) are in bold.
| Drives | RAIDz1 | RAIDz2 | RAIDz3 |
|--------|--------|--------|--------|
| 3 | **64** | 128 | na |
| 4 | 43 | **64** | 128 |
| 5 | **32** | 43 | **64** |
| 6 | 26 | **32** | 43 |
| 7 | 21 | 26 | **32** |
| 8 | 18 | 21 | 26 |
| 9 | **16** | 18 | 21 |
| 10 | 14 | **16** | 18 |
| 11 | 13 | 14 | **16** |
| 12 | 12 | 13 | 14 |
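The figures above can be reproduced with a short sketch: each 128 KiB block is split across the data drives (total drives minus parity), and a layout is "good" when that split is a whole power-of-two number of KiB. The function name and formatting here are my own; the only assumption taken from the threads is the 128 KiB default recordsize.

```python
BLOCK_KIB = 128  # ZFS default recordsize in KiB

def stripe_per_drive(drives: int, parity: int):
    """KiB each data drive receives for one 128 KiB block, or None if
    the layout has no data drives (e.g. RAIDz3 with only 3 drives)."""
    data_drives = drives - parity
    if data_drives < 1:
        return None
    return BLOCK_KIB / data_drives

# Regenerate the matrix: rows are total drives, columns are RAIDz levels 1-3.
for drives in range(3, 13):
    cells = []
    for parity in (1, 2, 3):
        kib = stripe_per_drive(drives, parity)
        if kib is None:
            cells.append("na")
        else:
            # Flag power-of-two data-drive counts, i.e. the bolded cells.
            good = "*" if float(kib).is_integer() and (drives - parity) & (drives - parity - 1) == 0 else " "
            cells.append(f"{kib:6.2f}{good}")
    print(f"{drives:2d} | " + " | ".join(cells))
```

The table's whole numbers are these values rounded (e.g. 128/3 ≈ 42.67, shown as 43), which is why only the power-of-two rows divide evenly.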