Upgrading FreeNAS hardware

AMiGAmann

Contributor
Joined
Jun 4, 2015
Messages
106
Hi there,

I have a FreeNAS build that is already 5 years old (except the HDDs), so I am currently thinking about upgrading it before parts of the hardware die.

My current build is a Supermicro X10SL7-F board with a Xeon E3-1231v3, 32GB RAM and eight 12TB IronWolf HDDs.

I am planning to upgrade to the following:
- Supermicro X11SCH-F board
- Intel Xeon E-2126G (6 cores, 3.3GHz)
- 2x 32GB Samsung RAM (M391A4G43MB1-CTD)
- 2x Toshiba XG6 256GB NVMe M.2 SSDs (KXG60ZNV256G) to boot from
- Seasonic SS-500L2U power supply

I don't want to upgrade the IronWolf HDDs yet; I am planning to do that later.

The system is used in a home environment to store data, especially movies, and is accessed by ~5 clients.

Currently I am using the onboard LSI controller of the X10SL7-F. The X11SCH-F no longer has one. Will this have any disadvantages?

What do you think about the planned build?

Best regards,
AMiGAmann
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Power supply might be slightly weedy. Please do take a look at the power supply sizing guidance resource if you have any plans to go beyond the 8 drives.
 

AMiGAmann

Contributor
Joined
Jun 4, 2015
Messages
106
OK. Thanks for the information.

I will never go beyond 8 drives, because that's the maximum the (19") chassis can take. Following the calculation suggestions in your link, I estimate the maximum power needed (during disk spinup) for the planned system at approximately 400W. So I guess I'm fine with 500W?
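For reference, my back-of-envelope numbers, roughly in the spirit of that resource (the per-component wattages below are my own assumptions, not measured values for these exact parts):

```python
# Rough spinup peak estimate; all wattages are assumptions, not
# measured values for these exact parts.
DRIVES = 8
SPINUP_W_PER_DRIVE = 30   # 12V surge while a 3.5" HDD spins up
CPU_W = 80                # Xeon E-2126G TDP
BOARD_RAM_MISC_W = 60     # board, RAM, NVMe SSDs, fans, margin

peak_w = DRIVES * SPINUP_W_PER_DRIVE + CPU_W + BOARD_RAM_MISC_W
print(f"Estimated peak draw during spinup: {peak_w} W")  # ~380 W
```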

What do you think about the X11SCH-F? Will there be any disadvantages to using the Intel C246 controller instead of an LSI controller for the drives?

Is 64GB of RAM enough for a pool with one big RAIDZ1 vdev? I know RAIDZ1 is not recommended, because resilvering a failed 18TB drive will take very long and because RAIDZ1 performance will be worse than that of four mirror vdevs.
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
so I am currently thinking about upgrading it before parts of the hardware die.
If that is the only reason for you to upgrade, then I don't see the need. Hardware doesn't die just because it hits the 5-year mark, especially if you have been careful to enable the correct notifications for every problem that might arise.
The system is used in a home environment to store data, especially movies, and is accessed by ~5 clients.
More importantly, the use case that you have defined doesn't need an upgrade. Even your X10SL7-F build is capable of far more than being accessed by 5 clients and storing some data.

Basically I am failing to understand the need for the upgrade.

In any case, as for your questions...
What do you think about the X11SCH-F? Will there be any disadvantages to using the Intel C246 controller instead of an LSI controller for the drives?
Not really sure what you are getting at. If you feel the need, you can always hook up your drives using an LSI HBA card. It wouldn't change anything, though.

I know RAIDZ1 is not recommended, because resilvering a failed 18TB drive will take very long and because RAIDZ1 performance will be worse than that of four mirror vdevs.
Then why would you want to use RAIDZ1?

You have 8 drives -- you can use all 8 to create a RAIDZ2 pool and still get the space of about 6 drives.
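Ignoring ZFS metadata, padding and the usual keep-some-space-free guideline, the raw math for your drives works out roughly like this (a quick sketch, not exact ZFS accounting; I plugged in 12 TB per drive):

```python
# Rough usable capacity for 8 x 12 TB drives; ignores ZFS metadata,
# padding and the usual "keep some space free" guideline.
n, size_tb = 8, 12

layouts = {
    "RAIDZ1 (1 parity drive)": (n - 1) * size_tb,   # 84 TB, survives 1 failure
    "RAIDZ2 (2 parity drives)": (n - 2) * size_tb,  # 72 TB, survives 2 failures
    "4 x 2-way mirrors": (n // 2) * size_tb,        # 48 TB, 1 failure per mirror
}
for layout, tb in layouts.items():
    print(f"{layout}: ~{tb} TB usable")
```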
 

AMiGAmann

Contributor
Joined
Jun 4, 2015
Messages
106
If that is the only reason for you to upgrade, then I don't see the need. Hardware doesn't die just because it hits the 5-year mark.
I work in IT, and we replace most of our hardware after it has run for 5 years. The chance that hardware will die keeps rising after that mark. In the worst case, if the CPU/mainboard in my FreeNAS build dies, it will take me several days to find the defective component, some more days to get a replacement (probably a newer generation), and then I may have to reinstall FreeNAS unplanned because the mainboard changed. Therefore I planned to upgrade now, but I know it might seem unnecessary.

More importantly, the use case that you have defined doesn't need an upgrade. Even your X10SL7-F build is capable of far more than being accessed by 5 clients and storing some data.
You are right, it is not for performance reasons; I am okay with my current build. It's about the chance that the hardware might fail, and about avoiding an unplanned reinstallation.

Not really sure what you are getting at. If you feel the need, you can always hook up your drives using an LSI HBA card. It wouldn't change anything, though.
I had in mind that onboard (RAID) controllers should be avoided and that an LSI controller (flashed to IT mode) is the most common and recommended way to access drives.

Then why would you want to use RAIDZ1?
To get the maximum space while still allowing one drive to fail. Of course, I know I need an additional backup of the data.
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
I work in IT, and we replace most of our hardware after it has run for 5 years. The chance that hardware will die keeps rising after that mark. In the worst case, if the CPU/mainboard in my FreeNAS build dies, it will take me several days to find the defective component, some more days to get a replacement (probably a newer generation), and then I may have to reinstall FreeNAS unplanned because the mainboard changed. Therefore I planned to upgrade now, but I know it might seem unnecessary.
There's so much wrong in this post.
I work in IT, and we replace most of our hardware after it has run for 5 years.
Yes, that is usually what happens in large enterprise environments, but typically only for the laptops etc. that they hand out to their employees, and that's because the warranty usually runs about that long. Do you really think these enterprises clean house every 5 years in their server rooms? No, they don't. I know of Fortune 500 companies that still have AS/400 machines and even mainframes that have been chugging along for decades.
The chance that hardware will die keeps rising after that mark.
The chance that your hardware will die starts rising the minute you purchase it. I have seen brand-new hardware fail too; of course you can RMA new hardware, but that's still a failure. Just because hardware is X years old does NOT mean it will die on you randomly. All of these devices have built-in warning systems that will alert you to impending failure.
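On Supermicro -F boards like yours, that warning system is the BMC, and polling it is trivial. A minimal sketch, assuming ipmitool is installed and the script runs on the host itself:

```python
# Sketch: read the board's IPMI sensors (temps, fans, voltages) and
# flag anything the BMC does not report as "ok". Assumes ipmitool
# is installed and the script runs on the host itself.
import subprocess

out = subprocess.run(["ipmitool", "sdr", "list"],
                     capture_output=True, text=True).stdout

for line in out.splitlines():
    # Typical line: "CPU Temp         | 42 degrees C      | ok"
    fields = [f.strip() for f in line.split("|")]
    if len(fields) == 3 and fields[2] not in ("ok", "ns"):
        print(f"IPMI alert: {fields[0]} reads {fields[1]} ({fields[2]})")
```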
if the CPU/mainboard in my FreeNAS build dies, it will take me several days to find the defective component,
Again, if you set up the right kind of monitoring scripts, you wouldn't get to that point. You would have advance warning of impending failure and could take the appropriate steps to mitigate the problem.
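For the drives, something as small as this, run from cron, is already useful (assumes smartmontools is installed; the device names are examples for FreeBSD/FreeNAS):

```python
# Minimal SMART health check, meant to run periodically from cron.
# Assumes smartmontools is installed; device names are examples.
import subprocess

drives = [f"/dev/ada{i}" for i in range(8)]  # FreeBSD-style names

for dev in drives:
    result = subprocess.run(["smartctl", "-H", dev],
                            capture_output=True, text=True)
    if "PASSED" not in result.stdout:
        # Hook your email/alerting of choice in here.
        print(f"WARNING: SMART health check did not pass for {dev}")
```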
some more days to get a replacement (probably a newer generation)
That depends on your location and the general availability of server grade components in your area, but again see my point about monitoring scripts and advance warning.
and then I may have to reinstall FreeNAS unplanned because the mainboard changed.
Search the forums. Many users have changed hardware without a problem and without needing to re-install FreeNAS. Now, if the failure is in the boot drive, then yes, you have to re-install FreeNAS -- but that takes all of 15 minutes, plus uploading your previously backed-up FreeNAS config.
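If you want that config backup to happen automatically, a sketch like the following is enough. The /data/freenas-v1.db path is where FreeNAS keeps its config database (verify it on your version), and the destination path is just an example:

```python
# Sketch: copy the FreeNAS config DB to the pool with a date stamp.
# /data/freenas-v1.db is the standard FreeNAS config location
# (verify on your version); the destination is just an example.
import shutil
from datetime import date

src = "/data/freenas-v1.db"
dst = f"/mnt/tank/backups/freenas-config-{date.today()}.db"

shutil.copy2(src, dst)
print(f"Config backed up to {dst}")
```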
I had in mind that onboard (RAID) controllers should be avoided and that an LSI controller (flashed to IT mode) is the most common and recommended way to access drives.
That is still correct -- but you can use the onboard SATA ports for your drives, if your motherboard has enough of them. If not, use an add-on PCIe HBA card. Your original question about the Intel C246 chipset has nothing to do with RAID; in AHCI mode its SATA ports present the drives directly to the OS, which is exactly what ZFS wants.
To get the maximum space while still allowing one drive to fail. Of course, I know I need an additional backup of the data.
Don't do this, especially with drives larger than 1 TB. I had a drive failure last week on my Proxmox server, and it took 3.5 hours to rebuild a 500 GB drive. That was not ZFS but hardware RAID6; still, the point stands: larger drives take much, much longer, and if any other drive fails during the rebuild (the chances of which are high because of the heavy load on the disks during that process), all your data will be lost.
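Scaling naively from that rebuild rate gives a feel for what larger drives mean (a real ZFS resilver depends on pool fill, fragmentation and load, so treat these as optimistic numbers):

```python
# Naive scaling of rebuild time with drive capacity, based on the
# observed 500 GB in 3.5 hours. Real resilvers depend on pool fill,
# fragmentation and load, so these are optimistic estimates.
observed_gb, observed_hours = 500, 3.5
rate_gb_per_hour = observed_gb / observed_hours   # ~143 GB/hour

for size_tb in (12, 18):
    hours = size_tb * 1000 / rate_gb_per_hour
    print(f"{size_tb} TB drive: ~{hours:.0f} hours (~{hours / 24:.1f} days)")
# 12 TB -> ~84 hours (~3.5 days), 18 TB -> ~126 hours (~5.2 days)
```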

Oh, and by the way, I ran SMART on the failed drive just to check: it had 62,489 power-on hours. That's 7.13 years. Think about that. Even HDDs last more than 5 years, and motherboards, CPUs and RAM are built for much more than that. My Proxmox server has been running 24/7 for the last 6+ years, my FreeNAS backup for 7+ years, and the new main FreeNAS for 6-7 months. I also have a Dell desktop that I bought way back in 2009; it runs Arch Linux, I use it every day, and I am typing this post on it right now.

Bottom line even desktop grade hardware is built to last much more than 5 years.

PS: If you are going to upgrade nonetheless, can you just send me the old hardware since it's just trash for you anyway? I can take it from there ;-)
 

AMiGAmann

Contributor
Joined
Jun 4, 2015
Messages
106
Yes, that is usually what happens in large enterprise environments, but typically only for the laptops etc. that they hand out to their employees, and that's because the warranty usually runs about that long.
The warranty for the (enterprise) laptops we hand out to our employees is much shorter than 5 years. We replace the laptops after 5 years because, in the past, we had many laptops fail beyond that mark.

Do you really think these enterprises clean house every 5 years in their server rooms? No, they don't.
Probably not all enterprises do; I can only speak from my own experience. We have >10 VMware hosts, and yes, we replace them after 5 years.

I know of Fortune 500 companies that still have AS/400 machines and even mainframes that have been chugging along for decades.
I guess AS/400s are mostly not replaced because you would have to invest a lot in migrating the software.

All of these devices have built-in warning systems that will alert you to impending failure.
So how can I be warned about a failing motherboard, CPU, RAM or PSU?

Again, if you set up the right kind of monitoring scripts, you wouldn't get to that point. You would have advance warning of impending failure and could take the appropriate steps to mitigate the problem.
I don't know that kind of monitoring in FreeNAS yet; I guess I have to find some information about it. I do know the detailed information that HPE servers, VMware etc. give me, but I haven't seen anything similar in my build yet.

Search the forums. Many users have changed hardware without a problem and without needing to re-install FreeNAS.
I would have assumed that I would have to reinstall if, for example, I changed the mainboard.

Don't do this, especially with drives larger than 1 TB.
I know there is a risk of losing the complete pool if a second drive fails while the first is being replaced. I started my FreeNAS build with eight 6TB drives (RAIDZ2) and ran the burn-in tests recommended in the forums. This took several weeks, but afterwards I was quite sure the drives were okay. I replaced those 6TB drives after 3-4 years of running 24/7, mainly to increase the total size of the pool.

So the current plan is to replace the drives every 3-4 years. I don't see a big risk of losing two (burned-in) drives back to back in that timespan. Of course, I run scrub jobs, SMART tests etc. regularly.

It is actually hard to accept that much "unused" space when using mirrors. In the enterprise we do use RAID10 most of the time, but mainly because it is much faster than RAID5 when accessed by many clients at the same time.

PS: If you are going to upgrade nonetheless, can you just send me the old hardware since it's just trash for you anyway? I can take it from there ;-)
I am still unsure whether or not to rebuild the system. You shouldn't have tried to convince me :smile:
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I work in IT, and we replace most of our hardware after it has run for 5 years.

That's generally because the hardware is leased, or has become inefficient, and has little to do with failure rates. I do a lot of refurb work here in the shop, and it's mostly consumables like HDDs and supercaps/batteries that die.

I've got a quartet of HP DL365s (that's "G1") that one of my companies bought new, ran for some time, retired, and donated to one of my clients; they are still running strong. Not bad for 2008-era servers. Unfortunately, they max out at 8 cores and 32GB of RAM, their RAID controller has fallen off the VMware support matrix, and the things blast out 250-300 watts, so they're being looked at for eventual replacement. It's just hard to replace gear when you're a 501(c)(3) and you don't have a big budget.

Enterprise environments tend to lease their gear and force upgrades that way, but anyone who works in data centers knows there's a lot of old stuff hanging around out there because it can be less costly to avoid messing with something that works.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
I have to repeat what others have already written: in my view it makes absolutely no sense to replace this machine, and it does seem that some of the "reasons" are not fully understood. I have just finished my new FreeNAS build (it is currently in the burn-in phase for the new HDDs), and the board is actually an X9-generation Supermicro that I bought off eBay.

As already written, there are more reasons for hardware replacement than failure risk; actually, I would think the latter is rather low on the list. Support costs (support is what handles HW failures) and the duration of leasing contracts (few enterprises buy their servers these days) are usually the two dominating factors, plus probably power efficiency.

And I consider myself to be really paranoid w.r.t. data safety. My background is in mission-critical systems, where outages (which are far less critical than loss of data) easily cost millions per day. So I take this very seriously, even for my private data.
 

JaimieV

Guru
Joined
Oct 12, 2012
Messages
742
I *bought* five-year-old enterprise hardware to create my FreeNAS boxes (per sig, plus an R510). They're fine. And very, very cheap.
 

jayecin

Explorer
Joined
Oct 12, 2020
Messages
79
The guy wants to upgrade his NAS; he did not ask for your financial advice. Either answer his question or move on; your financial views are pointless and petty.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The guy wants to upgrade his NAS; he did not ask for your financial advice. Either answer his question or move on; your financial views are pointless and petty.

Thanks for the thought, but the moderation team is capable of keeping discussions on track if they go too far off the rails. Discussions are allowed to touch on tangential issues to a reasonable amount.
 