Upgrade 6 year old system to 8TB drives

Trianian

Explorer
Joined
Feb 10, 2012
Messages
60
My FreeNAS system has been running well for the past 6 years, but it's finally filling up. I'd like to upgrade the drives from 2TB to 8TB.

Possible?

Software: It's still on FreeNAS 8.3.1.

Hardware: Intel G620 (Sandy Bridge) @ 2.60GHz
8GB DDR3
IBM - LSI SAS Card
Intel Gigabit Ethernet
USB thumb drive boot
5 x 2TB drives; most of the original drives have been replaced over time.

Use case: File store. No other features needed.

Would like to upgrade the system to 8TB drives.

Questions:

Will this be enough RAM and processor to go from 2TB x 5 to 8TB x 5 or perhaps 8TB x 6? If not, what upgrades would be necessary?

Would it be better to upgrade the system to the latest FreeNAS, then restore from backup? I don't need any new features, though if the newer versions are faster when handling multiple network connections, or allow better functionality while scrubs are running, that would be a benefit.

I seem to recall that the newer versions no longer use USB thumb drives, instead requiring physical drives to boot. I do have some small older M.2 SATA SSDs that could be used for this.

Thanks
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
8GB will probably not be great for that much data...16GB would be a good starting point, but go as high as you can afford.

The CPU is probably fine unless you want to do lots of concurrent transcoding in Plex (you say you won't).

Certainly switch to an SSD/M.2 for boot: the heavy wear on the boot drive that now comes with 11.2 will typically break a USB stick quickly (although it does still work if you are happy with the risk/annoyance of that... back up your config often).

For what sounds like a simple config, I would suggest just doing a fresh install and importing your pool, then re-doing whatever sharing you had. I suspect importing a v8 config into 11.2 is a no-go.
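
The import itself is the easy part. The web UI import wizard is the supported route, but this is roughly all it amounts to ("tank" is just a placeholder for whatever your pool is called):
Code:
# list pools the fresh install can see on the attached disks
zpool import
# import by name; -f only needed if the pool wasn't cleanly exported from the old install
zpool import -f tank
# confirm all five disks show ONLINE
zpool status tank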
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Upgrading that system might be a bit tricky. You could use the 9.3 to 9.10 FAQ to guide you, and give it a try. The upgrade process will create a boot environment, and should warn you if it can’t upgrade from 8.3. Do read that FAQ though for things like “how do I make sure my GUI access still works”.
And yes, it might be easier to just do a fresh install, if upgrade is not an option.

Memory - 16GB will help, mostly with speed of reading directories. You’ll have more room for metadata in the ARC. You’re not using ECC with that processor. The question then is: do you love your data, or merely like it? That is, if FreeNAS storage becomes corrupted, is that a minor annoyance, or a major event? That really depends on the data and where else it is kept.

ZFS is no worse than say NTFS in how it deals with memory errors. That said, people have lost entire pools because of memory errors. Which board is this? I am wondering whether you have a reasonably inexpensive path to ECC.

Boot - USB 2 sticks are recommended because they won’t overheat as quickly as USB 3. If you stick with USB, I’d mirror the boot drive. If you have an extra SATA port, a SATA SSD will be good boot media.

Ethernet - yes, newer versions support active LACP. If you have a managed switch, know your way around LAGs, have two Ethernet ports, and enough connections that go above 1Gb in aggregate, then an update can be worth it.

Let’s talk about your pool. Single vdev? Raidz1 or raidz2? If the former, the recommended path would be to create a new raidz2 vdev in a new pool with your 8TB disks and copy data over. Do you have enough SATA ports on your HBA for that?

If you are using raidz2 now, you can replace disks one by one, wait for the resilver after each, and my understanding is your pool will have 5x8TB capacity once the last disk has been replaced.
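
Roughly, that replace-in-place path looks like the below. In FreeNAS you'd do each replacement through the GUI so it partitions and labels the new disk properly, but this is the underlying idea; "tank" and the device names are only placeholders:
Code:
# let the pool grow automatically once the last small disk is gone
zpool set autoexpand=on tank
# swap one member for a new 8TB disk, then wait for the resilver to finish
zpool replace tank <old-disk-gptid> /dev/da5
zpool status tank    # only move to the next disk after the resilver completes
zpool list tank      # after the final disk, capacity should reflect 5x8TB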

Also keep in mind you cannot extend a raidz vdev. That functionality is being worked on, and I wouldn’t hold my breath for anything before the 2020/2021 timeframe. That means if you want 6x8TB, that’s definitely a new vdev/pool and then copying data over.
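
If you do go the new-pool route, a rough sketch of the copy, assuming your HBA has ports free for all the new disks at once (pool and disk names are placeholders, and in practice you'd create the pool in the FreeNAS GUI so it gets the standard partition and swap layout):
Code:
# new 6-wide raidz2 pool from the 8TB disks
zpool create newtank raidz2 da5 da6 da7 da8 da9 da10
# snapshot everything and replicate it, preserving datasets and properties
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -F newtank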

The CPU will do just fine for general file serving. What will impact your performance is how much of your metadata and data you can keep in ARC: More RAM is more better, within reason. I have 32GB, 5x8TB half filled, and the ARC takes up almost but not quite all of it, the bulk of it in metadata. For my use case, I would not benefit from more RAM. I use one jail and one lightweight VM. If I didn’t, I’d be comfortable with 16GB.
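
If you're curious how much of your ARC is metadata on the current box, these sysctls will show you (values in bytes; exact stat names can shift a little between versions):
Code:
sysctl kstat.zfs.misc.arcstats.size            # total ARC size
sysctl kstat.zfs.misc.arcstats.arc_meta_used   # the portion holding metadata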

Security: TLS 1.2, various security fixes: A good case can be made for keeping FreeNAS, or any system, up to date just for peace of mind.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
If you are using raidz2 now, you can replace disks one by one, wait for the resilver after each, and my understanding is your pool will have 5x8TB capacity once the last disk has been replaced.
I have done it going from 1TB to 2TB drives and again going from 2TB to 4TB drives. It's perfect, in my experience.

Very good points
 

Trianian

Explorer
Joined
Feb 10, 2012
Messages
60
Thanks everyone.

I recall considering ECC RAM when the system was built. I seem to remember that the CPU / motherboard combination would probably have supported ECC, but not officially. It's run for 6 years with no issues and weekly scrubs.

One worry is scrub time with 8TB drives. Scrubs with 5 x 2TB currently take 24 hours and tend to produce quite a bit of latency in the system while the scrub is ongoing. Scrubs took substantially fewer hours when the drives were less full.

Will scrubs with 8TB drives take 96 hours? Will the newer versions of FreeNAS handle this better, with less system degradation while the scrub is ongoing? Is it a function of RAM? I have some DDR3 around, so I can up the RAM to 16GB.

Yes, I will probably not perform a direct upgrade. Instead I'll start over with a newer version of FreeNAS, then copy data from backup.

 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
I recall considering ECC RAM when the system was built. I seem to remember that the CPU / motherboard combination would probably have supported ECC, but not officially. It's run for 6 years with no issues and weekly scrubs.

And that's what you'd expect. Memory errors aren't all that common, until you have a LOT of servers, then they happen all the time. Yay statistics.

It really comes down to how devastating "catastrophic loss of data" is. If these files are backed up elsewhere, then the concern might not be that great.

You could keep your case, drives, HBA, PSU, and replace the motherboard, CPU and RAM as per @Chris Moore 's "eBay scrounger" suggestion. Just for giggles - it looks like this:


Motherboard: Supermicro X9SRL-F. That's an ATX board. $180-$200
CPU: Intel Xeon E5-2650 v2, an octa-core with a Passmark of just above 13k. $60
Memory: 32GB of Samsung 16GB PC3L-12800R - at these prices, might as well do 32GB. $60 or $30 for 16GB
==
US $320 gets you a refreshed system that will not succumb to memory errors

Whether that expense is worth it is very much an individual decision.
 

southwow

Contributor
Joined
Jan 18, 2018
Messages
114
I've done exactly what you're doing. I started with drives much smaller than yours and a system a few years older. I backed up my data to a single drive, which I suggest you do in case you need to recreate your pool / volumes.

I went to the latest version of FreeNAS at the time FIRST by upgrading incrementally (not all at once), replacing my board, CPU, and RAM (moved to ECC), and then replaced the drives in each vdev one at a time (I had 6 drives, 3 per vdev).

I've done the drive upgrade/replace process many times over the years, starting with spare desktop drives, then buying drives, then what I'll only refer to as the 'shucked Seagate incident', and finally some 8TB Reds.
 

Trianian

Explorer
Joined
Feb 10, 2012
Messages
60
What is the scrub / rebuild time on a system like that?

My current scrub time / drive replace and rebuild time is about 24 hours. Would it be 4 times that with 8TB drives?

What's the best way to shorten that, and diminish the latency when scrubs are ongoing?

Thanks


 

southwow

Contributor
Joined
Jan 18, 2018
Messages
114
I've done the drive upgrade/replace process many times over the years, starting with spare desktop drives, then buying drives, then what I'll only refer to as the 'shucked Seagate incident', and finally some 8TB Reds.
That makes me curious. Will you share details?

@Chris Moore , I posted on it quite a while ago. I ended up picking up lots of 5TB (if I remember correctly) Seagate Backup Plus drives. They were a mix of labels on the drives, all were brand new retail and all from the same big box retailer.

They failed quickly, most not even lasting a year (average was about 9 months). Even the two that I kept for backup of offline files from laptops failed with just a few hours of time plugged in and spinning. At one point, I had to check several times a day because I was getting 2-3 failures a week and was paranoid about data loss.

At the time, I was also picking up the 8TB versions to begin eliminating older, smaller drive vdevs. Those were all desktop drives inside of the USB cases. Some of them had hubs built in, some were just the square case with the blue top like the 5TB. Some from Newegg, Best Buy, etc. All the same model of drive. All of them lived precisely 2 years (I just replaced the last one this past weekend). It's like they have a built-in timer and are set to detonate the instant it expires. I ended up replacing all of the 'bad' versions with the 8TB ones. I suspect they were actively using refurbished drives, factory seconds, or just leftovers for the USB backup drives for a while.

The ACTUAL problem was that my storage needs had grown and I added 5 new 3-drive vdevs when I bought these... then the failures began... and so did my hair loss.
 
Last edited:

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
What is the scrub / rebuild time on a system like that?
That is entirely dependent on the drives and the kind of pool. On my system with six drives per vdev, using RAIDz2, I complete a scrub (or resilver) in about four hours. Faster drives cause that to go faster, but the amount of data in the pool also makes a big difference. To illustrate that, I will tell you this:
I have a server at work with 303TB of data in the pool and it completes a scrub in 17 hours.
I have another server at work with 244TB of data in the pool and it takes 99 hours and 45 minutes.
The difference is partly the speed of the drives, but also the vdev layout.
The slower system has 15 drives in each RAIDz2 vdev. More drives in a vdev make it a bit slower.

Between the FreeNAS systems I have (or have had) at home and the servers I manage for work, I have a good bit of experience to draw on.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
They failed quickly, most not even lasting a year (average was about 9 months). Even the two that I kept for backup of offline files from laptops failed with just a few hours of time plugged in and spinning. At one point, I had to check several times a day because I was getting 2-3 failures a week and was paranoid about data loss.
I had some Seagate drives that disappointed me before. The ones I am running now have not hurt my feelings yet.
The WD Red drives you have now, are they shucked drives?
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
What is the scrub / rebuild time on a system like that?

What's the best way to shorten that, and diminish the latency when scrubs are ongoing?

On my system, scrubs take 9 and a half hours, with the 5x8TB vdev half full. If that file store isn't in use 24/7, you can schedule the scrub so users are asleep during the bulk of it.

I'm not sure what you mean by "latency", but my guess is slow response when browsing directories. The best way to solve that is more RAM. Metadata will live in ARC and you'll be less reliant on disk IOPS.
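
Scrub scheduling is per pool in the GUI (Scrub Tasks), so set the start time for the middle of the night. From the shell you can always kick one off or check progress by hand ("tank" is a placeholder for your pool name):
Code:
zpool scrub tank     # start a scrub now
zpool status tank    # shows percent done, an estimate, and how long the last scrub took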
 

Trianian

Explorer
Joined
Feb 10, 2012
Messages
60
Thinking of using a bunch of the WD Easystore 8TB. Currently my 5 x 2TB is RAID Z1 (yes, I know, dangerous), but it's lasted 6 years. Next build will definitely be Z2, especially with 8TB drives.
 

southwow

Contributor
Joined
Jan 18, 2018
Messages
114
I had some Seagate drives that disappointed me before. The ones I am running now have not hurt my feelings yet.
The WD Red drives you have now, are they shucked drives?
Nope, all brown-box Reds from Newegg and Best Buy (the provider of the abysmal Seagate drives). I'd always had the worst luck with WD, so I'm not going to curse myself by complimenting these... but they've lasted longer than the desktop drives, so I'm pleased with them.

I don't think I'll trust shucked drives again.
 

Trianian

Explorer
Joined
Feb 10, 2012
Messages
60
Ethernet - yes, newer versions support active LACP. If you have a managed switch, know your way around LAGs, have two Ethernet ports, and enough connections that go above 1Gb in aggregate, then an update can be worth it.

I've heard there are a lot of issues with LACP.

What about 10 Gigabit ethernet? Any barrier to a machine this old supporting it?

With 10G NICs under $50 and switches as cheap as $149, it seems a better solution. https://mikrotik.com/product/crs305_1g_4s_in
 
Joined
Dec 29, 2014
Messages
1,135
I've heard there are a lot of issues with LACP.
You can take this with a grain of salt (spoken as a network guy), but it works great if you configure it correctly. The biggest issue (IMHO) is that any layer 2 ethernet aggregation is load balancing, NOT bonding. That means that any conversation/connection will get at most the bandwidth of a single link. If you pick the right load balancing (if your switch/device lets you choose), that works great as well. Frequently people don't understand why their 4 port LAGG doesn't allow a single transfer to hit 4G throughput. It never will because that isn't what it is designed to do.
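
For reference, the FreeBSD side of an LACP lagg is only this much; FreeNAS builds the same thing from the Link Aggregation page, and the switch ports have to be configured as a matching LACP group (interface names and the address below are just examples):
Code:
ifconfig lagg0 create
ifconfig lagg0 up laggproto lacp laggport igb0 laggport igb1 192.168.1.10/24
ifconfig lagg0    # shows which ports went ACTIVE; each flow still hashes to a single port
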
What about 10 Gigabit ethernet? Any barrier to a machine this old supporting it?
I have used 10G in systems as old or older than that. You may not be able to fill a 10G link, but you should get more than 1G. Your system doesn't have a ton of memory, so that could slow it down too. You are only as fast as the slowest component, and that could easily end up being something besides the NIC. FWIW, I just got 9.1G reading from primary FreeNAS (specs in sig) to a local disk in an ESXi host doing a Storage vMotion, which is a new record. I have put a bunch into all these systems lately, and I am very happy to see it paying off!
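
If you want to see what the network path itself can do, independent of the disks, iperf3 is handy (it's available on recent FreeNAS versions; the hostname below is just an example):
Code:
# on the FreeNAS box
iperf3 -s
# on a client with a 10G NIC: 4 parallel streams for 30 seconds
iperf3 -c freenas.local -P 4 -t 30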
 