How would you configure a ZFS system (or systems) with these drives?

Joined Nov 1, 2017 · Messages: 6
Here are the drives I have at my disposal. I have 3 machines, each capable of holding a maximum of 12 drives. Currently I have 12x2TB drives in a RAIDZ2, which is OK, but performance is lackluster. Most of the files are videos and are not accessed often. My current archive is 12.3 TB, and I would like to expand the system as well as create a backup system. What do you guys recommend?
20x2TB
5x5TB
1x4TB
6x1TB
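For a rough sense of scale, here is a quick back-of-the-envelope Python sketch (my own numbers; it assumes RAIDZ2, converts decimal TB to binary TiB, and ignores ZFS metadata and slop space) comparing the current pool's usable space to the 12.3 TB archive and tallying the drive inventory:

Code:
# Back-of-the-envelope only; ignores ZFS metadata, slop space and the
# usual "keep it under ~80% full" guideline.
TB, TiB = 10**12, 2**40

def raidz_usable_tib(drives, size_tb, parity):
    """Approximate usable space of a single RAIDZ vdev, in TiB."""
    return (drives - parity) * size_tb * TB / TiB

current = raidz_usable_tib(12, 2, parity=2)      # existing 12x2TB RAIDZ2
archive = 12.3 * TB / TiB                        # 12.3 TB archive, in TiB
print(f"current pool   : ~{current:.1f} TiB usable ({archive / current:.0%} full)")

inventory = {2: 20, 5: 5, 4: 1, 1: 6}            # size in TB -> count, from the list above
print(f"drive inventory: {sum(size * n for size, n in inventory.items())} TB raw total")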
 

styno · Patron · Joined Apr 11, 2016 · Messages: 466
I'd get rid of one system and the 1TB and 4TB drives, get some extra 5TB drives for the online system, and cram all the 2TB drives into the backup system.
That is exactly what I did anyway ;) But really, what are your needs in size and performance, what is the hardware, and why the 12-drive limit? (Space, connectivity, ...?)
 
Joined Nov 1, 2017 · Messages: 6
Thank you for your reply!

While most of the media is read-only, I frequently perform backups and would like the process to go as fast as possible. I use a 10 Gbps Ethernet link between two of the servers for that purpose. However, from my experience with my current 12x2TB RAIDZ2 config, performance is not as good as I would hope. I think mirrored vdevs might be overkill and wasteful of space for my purposes, but I am open to suggestions. I'm leaning more towards some sort of RAID or mirrored-RAID topology.
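To put numbers on the 10 Gbps link, here is a rough sketch (the sustained-throughput figures are assumptions for illustration, not measurements from my pool) of how long a full copy of the 12.3 TB archive would take:

Code:
# Rough transfer-time estimates for a full 12.3 TB copy.
# Throughput figures below are illustrative assumptions only.
archive_tb = 12.3
rates_mb_s = {
    "10GbE wire speed (~1150 MB/s)": 1150,
    "pool-limited     (~400 MB/s)":   400,
    "pool-limited     (~200 MB/s)":   200,
}
for label, mb_s in rates_mb_s.items():
    hours = archive_tb * 10**6 / mb_s / 3600     # TB -> MB, seconds -> hours
    print(f"{label}: ~{hours:.1f} h")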

The 12-drive limit stems from the server chassis: each case can hold 12 hot-swappable drives. They are all quad-core i7s, with one being gen 7 and the other two being gen 3. I own all these systems, so I'm not really in the business of throwing servers away, but I could repurpose one for other things if it's not needed in this setup.
 
Joined Apr 9, 2015 · Messages: 1,258
Using two vdevs will increase the performance of the pool. The problem is that you will probably need more space to put drives in to make that work; otherwise you will have to create a new pool. Since you can only fit 12 drives, it would be 2 vdevs with 6 drives in each. You would be looking at a total usable space of around 14 TB using 2TB drives. You could buy one more 5TB drive and make one of the vdevs using 5TB drives, or use the five 5TB drives you have plus the 4TB drive and then replace the 4TB later on and expand the pool.
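To show where the "around 14 TB" figure presumably comes from, a quick sketch (assuming two 6-disk RAIDZ2 vdevs and ignoring metadata/slop overhead):

Code:
# 2 x (6-disk RAIDZ2) vdevs of 2 TB drives: each vdev gives 2 disks to parity.
# The decimal-TB vs binary-TiB conversion is roughly where 16 TB turns into
# the "around 14 TB" figure.
vdevs, width, parity, size_tb = 2, 6, 2, 2
usable_tb = vdevs * (width - parity) * size_tb        # 16 TB (decimal)
usable_tib = usable_tb * 10**12 / 2**40               # ~14.6 TiB
print(f"{usable_tb} TB raw usable, about {usable_tib:.1f} TiB")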

Just using the larger drives will increase the performance of the pool. Two vdevs also increase performance, especially write speeds.
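The write-speed point follows from the common rule of thumb that, for random I/O, a RAIDZ vdev performs roughly like a single disk, so IOPS scale with the number of vdevs. A tiny sketch (the ~100 IOPS per spinning disk is an assumed figure for illustration):

Code:
# Rule of thumb: each RAIDZ vdev delivers roughly one disk's worth of random
# IOPS. 100 IOPS per spinning disk is an assumed figure, not a measurement.
per_vdev_iops = 100
for layout, vdevs in [("1 x 12-wide RAIDZ2", 1), ("2 x 6-wide RAIDZ2", 2)]:
    print(f"{layout}: ~{vdevs * per_vdev_iops} random write IOPS")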

As far as the i7 systems go, I hope you realize that they are not ideal for FreeNAS for one big reason: no ECC RAM support. If the boards are actual server boards, I would be swapping the CPUs and RAM for ECC support. You also failed to mention the specs of the servers in question, which is a forum requirement, so I have no idea how much RAM you are using. The system does benefit from lots of memory, so increasing it can help performance as well, especially if it is starting to use swap on the pool.
 
Joined Nov 1, 2017 · Messages: 6
It's interesting that you say larger disks would have better performance. I was under the impression they would take longer to resilver and therefore originally opted for more 2TB disks. Would you recommend 2 x (6x2TB) vdevs in RAIDZ as opposed to 12x2TB RAIDZ2?

As far as ECC RAM goes, that's a non-issue. It's been shown over and over again that ZFS is no more dependent on ECC RAM than any other filesystem:
http://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/

My data is mostly media, and I am not as concerned with data being corrupted during storage as I am with data being retrievable later.

Server 1 Specs
i7-7700K @ 4.5 GHz
32 GB DDR4-3000 RAM
256 GB NVMe SSD OS partition
10 Gbps SFP+ Ethernet
Space for 12 SATA3 drives at 6 Gbps

Servers 2 & 3
i7-3770K @ 3.4 GHz
48 GB DDR3-1600 RAM
256 GB SSD OS partition
10 Gbps SFP+ Ethernet (Server 2)
 
Joined Apr 9, 2015 · Messages: 1,258
Nope, RAIDZ1 is dead. You lose one drive and you are rolling the dice on the whole pool. If you do multiple vdevs, you had better have perfect backups or a little better redundancy, so go with 2 vdevs in a pool, both being 6-drive RAIDZ2.

Larger disks are faster: they generally pack more data onto each platter (higher areal density), so more data passes under the heads per rotation and sequential speeds go up, everything else being the same. If you want to test it, drop a 5TB drive into a machine along with a 2TB drive and speed-test each separately; there will be a difference, probably in the range of 10 to 20 MB/s, and that extrapolates out to more drives in a pool. And since the OS doesn't matter, you can test anywhere you want and reach the same conclusion.
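On the "rolling the dice" point, a small counting sketch (pure combinatorics; it ignores the rebuild window and real-world failure correlation) of how many simultaneous two-disk failures each 12-disk layout survives:

Code:
from math import comb

# Of the C(12, 2) = 66 possible pairs of failed disks, count how many each
# layout survives. Pure counting; ignores rebuild windows and URE risk.
total = comb(12, 2)
survives_12wide_z2 = total                       # RAIDZ2 tolerates any 2 failures
survives_2x6_z1    = comb(6, 1) * comb(6, 1)     # only if the failures land in different vdevs
print(f"1 x 12-wide RAIDZ2 : {survives_12wide_z2}/{total} two-disk failures survived")
print(f"2 x 6-wide  RAIDZ1 : {survives_2x6_z1}/{total} two-disk failures survived")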

http://hdd.userbenchmark.com/Compare/WD-Red-2TB-2012-vs-WD-Red-5TB-2014/1789vs3524
https://macperformanceguide.com/Storage-4_5_6TB-relativeSpeed.html

https://calomel.org/zfs_raid_speed_capacity.html

Do they take longer to resilver? Sure, there is potentially more data per disk. But if you have backups and are running RAIDZ2 or RAIDZ3, it shouldn't be much of an issue. Plus, the life of the drive is spent reading and writing, not resilvering; you say the pool isn't performing, and larger drives are a way to change that.
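For a feel for the resilver trade-off, a rough sketch (the sustained resilver rate and fill level below are assumptions; real resilvers vary a lot with fragmentation and pool load):

Code:
# Very rough resilver-time estimate: used data on the replaced disk divided
# by a sustained resilver rate. Both figures below are assumptions.
resilver_mb_s = 100      # assumed sustained resilver rate
fill = 0.8               # assumed fraction of the disk holding data
for size_tb in (2, 5):
    hours = size_tb * fill * 10**6 / resilver_mb_s / 3600
    print(f"{size_tb} TB disk at {fill:.0%} full: ~{hours:.0f} h to resilver")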



You can do what you want about the RAM and believe you are safe. There has been enough discussion around here to say that ECC is a good idea, and even the article you mentioned says so:

Now that we know what ECC RAM is, is it a good idea? Absolutely. In-memory errors, whether due to faults in the hardware or to the impact of cosmic radiation (yes, really) are a thing. They do happen. And if it happens in a particularly strategic place, you will lose data to it. Period.

ZFS works in RAM a lot, so better safe than sorry.
 
Joined Nov 1, 2017 · Messages: 6
Thanks a bunch for the information about the trade-offs of different configurations; it's given me a lot to think about.

I have a mechanism for verifying that files were correctly written to disk via checksums, so again, initial data integrity is not a problem. These are video files, and the biggest problem I was having was that after 3 or 4 years of not playing a given file, the video would have distortions or digital pixelation during certain frames that weren't there when it was played before. ZFS cured that problem for me.
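Not my actual tooling, but here is a minimal sketch of that kind of checksum verification (the manifest.json filename and layout are placeholders I made up):

Code:
# Minimal checksum-manifest sketch: record SHA-256 of each file once, then
# flag any file whose digest changes on a later run. Requires Python 3.8+.
import hashlib, json, pathlib, sys

def sha256(path, bufsize=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

root = pathlib.Path(sys.argv[1])                 # directory tree to verify
manifest = pathlib.Path("manifest.json")         # placeholder manifest location
known = json.loads(manifest.read_text()) if manifest.exists() else {}

for p in sorted(root.rglob("*")):
    if p.is_file():
        digest = sha256(p)
        if known.get(str(p), digest) != digest:
            print(f"MISMATCH: {p}")
        known[str(p)] = digest

manifest.write_text(json.dumps(known, indent=2))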

Although ZFS works in memory extensively, bad RAM cannot corrupt a sector that has already been correctly written, as the article I posted earlier argues. At the very end is a quote from Matthew Ahrens, one of the co-founders of ZFS at Sun Microsystems, stating:
There’s nothing special about ZFS that requires/encourages the use of ECC RAM more so than any other filesystem.
 