I seem to have a love/hate relationship with FreeNAS and ZFS


brando56894

Wizard
Joined
Feb 15, 2014
Messages
1,537
I love the features and performance that ZFS provides, but the space restrictions (leaving at least 20% of the pool free) have always been a slight issue for me, since I store a lot of data and don't have thousands of dollars to buy a bunch of 8 TB drives. I'm also a speed demon, so with multiple terabytes at my disposal I decided to go with striped mirrors. The performance was great: random reads and writes of large movies and other data hit around 300-500 MB/sec, and sequential reads and writes would often hit 1.0-1.5 GB/sec, but of course this meant I halved my total storage (about 13 TB total, around 10 TB usable). I also liked being able to expand the pool just by adding another mirrored pair, unlike with RAIDZ/RAIDZ2, and the resilver times were great.

Now on to FreeNAS itself... I love a nice GUI, and the "old" FreeNAS GUI was always off-putting to me from the moment I laid eyes on it in 9.3. I stuck with it for a few months and eventually ditched FreeNAS completely (but kept ZFS via ZFS on Linux) due to poor VM support via VirtualBox. I came back about 6 months before Corral was released and used it on and off for a while (I probably installed and removed it 3 or 4 times in those 6 months) until Corral was put out to pasture. I loved the GUI in Corral, and it seems a lot of other people did as well. I also loved the inclusion of Docker for plugins instead of the small selection of "custom" plugins that iX provided, as well as the switch from VirtualBox to bhyve. I then moved to 11 Beta/RC and on to 11.0 and 11.0-U1, but was once again stuck with the old GUI, since the new GUI wasn't completely usable while still under development (which I understand, unlike some people lol). VM provisioning was also clunky, and there was no native Docker support; I could have run Docker myself inside a VM, but I'm not too familiar with Docker itself. At my current job one of the other sysadmins put a bug in my ear by saying "Why ZFS? That just adds another layer of unneeded complexity" when I was telling him about my server, and I pondered it for about the next week.

I finally decided that I didn't need ZFS and striped mirrors, since the majority of my data is just multimedia that could easily be acquired again, and for the same reason the data protection that ZFS provides wasn't really necessary either. I looked around for a few solutions and decided to give unRAID a try, since I had seen it in a few YouTube videos. The first thing that threw me off is that it's more for archival or low-performance storage, since it reads and writes to each drive individually; you can use an SSD cache drive to speed up writes to the array, but reads come directly from the drive the data is on. The second thing that threw me off is that you have to pay anywhere from $50-$130 depending on how many drives you have attached to your system, the license is registered to your USB drive's ID (so the OS can't run off of an HDD/SSD/SATA DOM), and you can only transfer that license once a year. When I first read that, my first thought was "Who the hell do they think they are? Microsoft?!" o_O

I gave it a try for a full month and decided to buy the "Pro" version for $80, because when it works, it works pretty well. I love the interface and all the tools and plugins that are offered... but the drive performance is horrible compared to what I was used to with ZFS/FreeNAS. Writes to the cache drive (a 512 GB Samsung 840 Evo) are speedy, but when it flushes the data to the array everything slows to a crawl (compared to ZFS, at least), dropping to somewhere between 15 MB/sec and 150 MB/sec and usually averaging around 50 MB/sec, since it is only writing to one drive and has to calculate parity. I've had the system lock up completely a few times because the cache drive was 100% full, or because Docker froze the system while trying to shut down about 12 different containers. Part of the problem is my poor choice of motherboard/CPU combo, since it also gave me a bunch of problems with bhyve in Corral, so I'm waiting on my Xeon E5-1650 and Asus X99-W :cool:

I recently had the opportunity to upgrade my Fios connection to 1 Gbps, and you know for damn sure I jumped on it :D I never thought I'd see the day when my storage couldn't keep up with my network bandwidth hahaha. Pretty much all of my movies are either 1080p Blu-ray or 4K UHD, so a single file ranges from 15 GB to 60 GB, which unRAID doesn't seem to like. When downloading from Usenet, the array can't keep up with the rate I'm downloading content, so the queue backs up on my cache drive even though I have "The Mover" set to run every hour, and it eventually brings my containers and VMs to a halt until I manually stop everything and rsync it directly to one of the drives :mad: A single file can take up to 15 minutes to transfer and calculate parity, which severely slows everything else down. I've had Radarr post-processing about 25 torrents, and in 2 hours it completed 11 movies :( I could improve this by utilizing the cache (this is HDD to HDD), but if I enable the cache for both my movies and downloads shares, the thing will fill up in less than an hour.

It's only been a month and I'm already considering moving back to ZFS haha. The cool thing is that someone developed a ZFS plugin that lets you use ZoL, since unRAID is based on Slackware, but you can't manage it via the GUI; everything has to be done from the CLI, which kinda kills it for me. I would like to use FreeNAS 11 again, but I definitely have to wait until the new GUI is finalized and Docker and bhyve support are fully integrated into it. If I do switch back I'll be using RAIDZ2 this time, because striped mirrors were nice but in the end weren't worth the capacity sacrifice for my needs. I have 26 TB of raw space and unRAID lets me utilize 22 TB of that (4 TB goes to parity), whereas RAIDZ2 would let me utilize ~19 TB, and accounting for the 20% buffer that drops to about 16 TB, which isn't enough for my current needs. I'm approaching 13 TB used now out of 16 TB total (I'm down two 4 TB drives since my HBA decided to bite the dust and my motherboard only has 6 SATA ports; the new board has 8), so I would definitely have to get 8 TB drives if and when I do switch back.
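
If anyone wants to play with that kind of math themselves, here's a rough sketch of the capacity arithmetic in plain Python. The 8 x 4 TB layout is just a placeholder (not my actual mix of drives), and it ignores ZFS metadata/padding overhead, so treat the output as a ballpark figure only:

Code:
# Rough RAIDZ2 capacity estimate: raw space, space after parity,
# and space after keeping ~20% of the pool free.
# Assumes decimal TB and ignores ZFS metadata/padding overhead.

def raidz2_usable(num_drives, drive_tb, free_fraction=0.20):
    raw = num_drives * drive_tb                        # total raw capacity
    after_parity = (num_drives - 2) * drive_tb         # RAIDZ2 gives up 2 drives to parity
    after_buffer = after_parity * (1 - free_fraction)  # honor the free-space guideline
    return raw, after_parity, after_buffer

# Placeholder layout: 8 x 4 TB drives
raw, parity, usable = raidz2_usable(num_drives=8, drive_tb=4)
print(f"raw: {raw:.1f} TB, after RAIDZ2 parity: {parity:.1f} TB, "
      f"after 20% free-space buffer: {usable:.1f} TB")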
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I guess you're asking for advice?

You would be better served with a SAS controller instead of just the system board's built-in SATA. You don't tell us much about your hardware, but it is a simple matter to pick up a used 24-bay server from eBay and get it working with FreeNAS.
A setup with 50ish TB of storage would be easy, as long as you can afford to buy the hardware.

 
Joined
Jan 18, 2017
Messages
525
@brando56894 I hear ya on the funds issue, and I'm sure a number of others do as well, but are you in the GUI enough to worry about it? I've lost a fair amount of data that I believed I could easily reclaim and was mistaken about some of it; that was the beginning of my RAID journey. I had mirrors in my desktop for years after that and liked them for the same reasons you did, but the lost space became an issue once I started to fill them up. Then I found FreeNAS and RAIDZ2 and I've been happy ever since. It takes a long time to restore from a backup or re-rip all your DVDs (still haven't completed that....), and I feel the time lost restoring data is worth the 25 or so percent of inaccessible storage, plus the offsite storage (if you do the math, I actually have less than 50% of my storage usable at the moment if you include off-site storage lol). I'm upgrading to 8 TB drives as my 2 TB drives pass the 40k-hour mark and start failing (lost one this weekend). I did what @Chris Moore suggested and grabbed a noisy, really noisy, used multi-bay case online, and it has made everything easier, from backing up using @Arwen's method to replacing failed drives and adding additional storage.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
RAIDZ2 will actually write faster than mirrors for sequential transfers. I'd ignore the ugly GUI personally... after all, are you really going to be staring at it much once it's set up? Meanwhile, you don't *have* to respect the 80% rule; it's more of a guideline. Things slow down significantly at 90%, though.
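
If you do end up flirting with those limits, a quick way to keep an eye on it is to poll zpool list every so often. Just a rough sketch (Python, run by hand or from cron; the thresholds are the usual 80%/90% marks, and it assumes the standard scripted output of zpool list):

Code:
# Sketch: warn when any pool crosses the 80% / 90% capacity marks.
# Uses "zpool list -H -o name,capacity" (scripted output, no headers).

import subprocess

WARN, CRITICAL = 80, 90  # percent of pool capacity used

out = subprocess.run(
    ["zpool", "list", "-H", "-o", "name,capacity"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.strip().splitlines():
    name, capacity = line.split()       # capacity is reported like "73%"
    used = int(capacity.rstrip("%"))
    if used >= CRITICAL:
        print(f"{name}: {used}% used -- expect things to slow down significantly")
    elif used >= WARN:
        print(f"{name}: {used}% used -- over the 80% guideline")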
 
Joined
Jan 18, 2017
Messages
525
I have absolutely no complaints about RAIDZ2's performance for my usage, and I haven't regretted setting it up instead of mirrors for a day. It's just that upgrading a vdev is going to be costly at 8 drives wide (or wider, for people who went that way).
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
grabbed a noisy, really noisy, used multi-bay case online
I made mine quieter by replacing the stock server fans with standard-speed fans, and the hard drives stay cool enough. You just have to replace the stock heatsink on the CPU with one that has a fan, because with the slower fans the air isn't moving fast enough for the original heatsink. I have done that on two 3U servers and a 4U server now, and it is a workable solution. The only fans I didn't change are the 40mm fans in the hot-swap power supplies.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
upgrading a vdev is going to be costly at 8 drives wide
That is the reason that I went with 2 vdevs at 6 drives each. I get the additional space after only buying 6 drives and the drives in the other vdev don't have to be the same size.
 
Joined
Jan 18, 2017
Messages
525
That is the reason that I went with 2 vdevs at 6 drives each. I get the additional space after only buying 6 drives and the drives in the other vdev don't have to be the same size.
That's the plan when I go with 12 TB HDDs. The fans are fine, they don't bother me, but I was told they were loud and the seller was being very honest about it :smile:
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
That is the reason that I went with 2 vdevs at 6 drives each. I get the additional space after only buying 6 drives and the drives in the other vdev don't have to be the same size.
The only annoyance with 2 vDevs and growing them is that you might end up with a few larger drives in each vDev due to failures. At some point you almost have to degrade both vDevs to put all the larger drives in one vDev and get that last large drive installed before the vDev will grow. (Hope I explained that right...)
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
The only annoyance with 2 vDevs and growing them is that you might end up with a few larger drives in each vDev due to failures. At some point you almost have to degrade both vDevs to put all the larger drives in one vDev and get that last large drive installed before the vDev will grow. (Hope I explained that right...)
I did it when I went from 1TB drives to 2TB drives. I still have 5 perfectly good 1TB drives that have between 40,000 and 50,000 hours on them, but they are HGST, so they are probably good for another 5 years.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I did it when I went from 1TB drives to 2TB drives. I still have 5 perfectly good 1TB drives that have between 40,000 and 50,000 hours on them, but they are HGST, so they are probably good for another 5 years.

Looks like you're sorted for boot drives then ;)
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Looks like you're sorted for boot drives then ;)
The ridiculous number of small hard drives I had was going to cause my house foundation to crack. I have been selling them off on eBay for the past few years and I am down to 3 x 500GB drives and 5 x 1TB drives. I have already bought my first few 4TB drives, so I will be selling off the 2TB drives soon.
 

farmerpling2

Patron
Joined
Mar 20, 2017
Messages
224
A vdev of RAIDZ2 with 5 or 6 drives is a great way to go. You can always create a second vdev and grow. The second vdev can get all of the hand-me-downs from your primary vdev as you update things. This lets you grow in a reasonable fashion.

At 12 TB+ I would seriously start looking at RAIDZ3. The length of time it takes to resilver a new drive just keeps getting worse, and the window of possible failure grows.

I would not use the new 10 TB and 12 TB drives for another year or so; I want everyone else to find the new-drive failures for the manufacturer. Seagate is talking about 14 TB and 16 TB in the near future, and that window is only going to grow.

These numbers are more than likely best case. If you are actively using the array, it could take much longer.

Example:

Code:
Size (TB)   Time @ 300 MB/sec
10           9.26 hours
12          11.11 hours
14          12.96 hours
16          14.81 hours
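
For reference, these best-case figures are just drive size divided by a sustained rebuild rate. Here is the same arithmetic as a quick Python sketch (decimal units, zero competing I/O, so real-world times will be longer):

Code:
# Best-case resilver time estimate: drive size / sustained rebuild rate.
# Assumes decimal units (1 TB = 1,000,000 MB) and no competing I/O.

RATE_MB_S = 300

for size_tb in (10, 12, 14, 16):
    seconds = size_tb * 1_000_000 / RATE_MB_S
    print(f"{size_tb:2d} TB -> {seconds / 3600:5.2f} hours @ {RATE_MB_S} MB/sec")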
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
A vdev of RAIDZ2 with 5 or 6 drives is a great way to go. You can always create a second vdev and grow. The second vdev can get all of the hand-me-downs from your primary vdev as you update things. This lets you grow in a reasonable fashion.

At 12 TB+ I would seriously start looking at RAIDZ3. The length of time it takes to resilver a new drive just keeps getting worse, and the window of possible failure grows.

I would not use the new 10 TB and 12 TB drives for another year or so; I want everyone else to find the new-drive failures for the manufacturer. Seagate is talking about 14 TB and 16 TB in the near future, and that window is only going to grow.

These numbers are more than likely best case. If you are actively using the array, it could take much longer.

Example:

Code:
Size (TB)   Time @ 300 MB/sec
10           9.26 hours
12          11.11 hours
14          12.96 hours
16          14.81 hours
I have a server at work that uses 6TB drives, and it took over 36 hours to resilver one of the disks when I had to replace it. The pool is only 33% full; I can't imagine how long it will take to resilver once the pool is closer to capacity.
The time to recover from a failed drive is why I am still using 2TB drives at home. I don't really want to move up to 4TB drives, but I will have to in the next 12 months because of my rate of data growth.
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
The ridiculous number of small hard drives I had was going to cause my house foundation to crack. I have been selling them off on eBay for the past few years and I am down to 3 x 500GB drives and 5 x 1TB drives. I have already bought my first few 4TB drives, so I will be selling off the 2TB drives soon.
You're supposed to sell them when you're done with them?
 
Joined
Jan 18, 2017
Messages
525
At 12 TB+ I would seriously start looking at RAIDZ3. The length of time it takes to resilver a new drive just keeps getting worse, and the window of possible failure grows.
These numbers are more than likely best case. If you are actively using the array, it could take much longer.

Example:

Code:
Size (TB)   Time @ 300 MB/sec
10           9.26 hours
12          11.11 hours
14          12.96 hours
16          14.81 hours

With 12 TB drives I will move to 6-wide RAIDZ2s (Z3 is a serious consideration, though); that falls within my comfort zone, but just barely. It just took me 14 hours to resilver 8 TB of data on an 8-wide, 56%-full RAIDZ2, so I'm guessing those numbers are wishful thinking lol
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
You're supposed to sell them when you're done with them?
I don't know if that is what you are supposed to do with them, but what I have been doing is using DBAN (Darik's Boot and Nuke) to do a DoD wipe on the drive, and if it still checks out as healthy afterward, I put it up on eBay for auction. I sold 6 x 2TB drives in the last 30 days. I will be listing these 1TB drives, probably tonight.
 

farmerpling2

Patron
Joined
Mar 20, 2017
Messages
224
With 12 TB drives I will move to 6-wide RAIDZ2s (Z3 is a serious consideration, though); that falls within my comfort zone, but just barely. It just took me 14 hours to resilver 8 TB of data on an 8-wide, 56%-full RAIDZ2, so I'm guessing those numbers are wishful thinking lol

Well... I did say best case! :confused:

There are a lot of factors in resilvering/scrubbing performance:
  • Disk access speed (7200 RPM drives usually have better seek numbers)
  • Disk transfer speed (7200 RPM drives have better transfer speeds)
  • Motherboard chipset (north and south bridge) - newer ones are usually faster
  • Embedded disk controller (lanes in use, etc.)
  • Disk controller card and interface type (lanes in use, PCIe 3/2, etc.)
  • SAS (12/6/3 Gb/s) / SATA (6/3/1.5 Gb/s) / SCSI / IDE
  • Memory speed, and enough memory available for buffering
  • CPU architecture (Atom / Intel / AMD, etc.)
  • CPU speed
  • CPU cores
  • Does ZFS use any special CPU instructions for checksumming?
A lower-end controller might not be able to handle 5-8 disks at once and can become the bottleneck.

If budget is not a problem, more controllers would be preferred when you have 6+ disks. Check the controller specs, and remember they are usually best case; you might find that you are not getting full throughput with everything on one controller.

If all your disks can handle 200 MB/sec and you have 8 disks, you would like to see your throughput be somewhat "close" to 8 * 200 MB/sec = 1.6 GB/sec; "close" might be within 20-30%. I have not benchmarked the speed when I intentionally remove a member and let a "spare" start to resilver. (Yes, I do this to see how robust ZFS is. Sometimes the spare does not automatically kick in; I will spend time looking into this in the future.)
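
A rough way to eyeball whether the controller is the choke point is to compare the summed drive throughput against the link budgets involved. The figures below are illustrative assumptions, not benchmarks:

Code:
# Rough bottleneck check: aggregate drive throughput vs. link budgets.
# All figures are illustrative assumptions, not measured results.

num_drives = 8
drive_mb_s = 200                  # per-drive sustained sequential rate
pcie2_x8_mb_s = 8 * 500           # PCIe 2.0 x8: ~500 MB/s usable per lane
sas2_x4_mb_s = 4 * 600            # SAS 6 Gb/s wide port (x4): ~600 MB/s per lane

drives_total = num_drives * drive_mb_s
print(f"drives combined:        {drives_total} MB/s")
print(f"PCIe 2.0 x8 budget:     {pcie2_x8_mb_s} MB/s")
print(f"SAS2 x4 wide-port link: {sas2_x4_mb_s} MB/s")

ceiling = min(drives_total, pcie2_x8_mb_s, sas2_x4_mb_s)
print(f"expected ceiling: ~{ceiling} MB/s "
      f"(getting within 20-30% of this in practice is reasonable)")

With only 8 drives the drives themselves set the ceiling; hang a lot more disks off an expander and the SAS link becomes the limit instead.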

The disk drives are still the major bottleneck. When a higher-end drive gets 240 MB/sec and the link between the drive and the motherboard is 6 Gb/s, going to 12 Gb/s SAS only improves the transfer speed a small amount. Going from 1.5 Gb/s to 3 Gb/s was pretty major, 3 Gb/s to 6 Gb/s was nice, but 6 Gb/s to 12 Gb/s is not a huge improvement. Now with SSDs that is a different story: 24/36/48 Gb/s or more would be nice!

In the end, I would counsel people who are buying 10TB drives to go with Z3 if at all possible. It seems like overkill, but if it takes 4 or 5 days to resilver a drive, that is a huge window, relatively speaking. It is only about 1.4% of a year, but that is an eternity in uptime terms for enterprise computing. For the home user it might not seem like such a big deal, but if you lose all your kids' pictures, you will wish you had the extra parity drive.

When drives fail, they seem to fail in clusters, usually because the drives were bought in a single purchase. That means they come from the same lot, the same parts, the same assembly plant, the same assembly day, etc. The platters' magnetic coatings start failing around the same time because they were laid onto the media the same way, on the same day, and so on.

A lot of enterprise companies track their drives, and around 3 years into a 5-year warranty they pull and sell them. They then buy new drives and repeat the cycle.

Generally, when you see "used" or refurbished enterprise drives on the market, this is where they come from.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I would counsel having a backup of said pictures rather than going to raidz3.
 