RAIDZ recommendation?

Joined
May 23, 2023
Messages
4
I am entirely new to TrueNAS. I have chosen Core over Scale because I have no need for apps; a stand-alone NAS system will fit my needs.

My build is as follows:

  • ASUS C621 Sage Motherboard
  • 2x Xeon Gold 5140 14 Cores
  • 784 GB SK hynix DDR4 ECC LRDIMM
  • 22 x 16 GB Seagate Exos

It will be one user only. The primary use will be a video game archive/preservation collection, ranging from DOS/Commodore 64 games to Win3x titles. Legacy ISO images (dating back to Windows XP, up to modern games) may be included as well.

What is this build for?

It will be offline, semi-cold storage for long-term preservation, with a weekly MD5 checksum check against the redump.org database. I would like to treat this as a 'video game museum', since many legacy video games are under threat of going extinct as time goes on (floppy disks and CDs decay and stop working after long periods of time), hence the reason I am doing this digital preservation.
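
Something along these lines is what I have in mind for the weekly check (just a rough sketch; the paths are placeholders, and the matching against redump.org's .dat files is a separate step not shown here):

Code:
# hash everything in the archive and diff against a saved baseline
find /mnt/tank/archive -type f -exec md5 -r {} + | sort -k 2 > /tmp/current.md5
diff /mnt/tank/checksums/baseline.md5 /tmp/current.md5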

My question: if I set it up as two arrays of RAIDZ3 x 11 disks, will the odd number of disks (11 per array) impact parity? And which block/sector size should I go for?

I am open to hearing your recommendations if you disagree with two RAIDZ3 x 11-disk arrays. I would like to have a lot of redundancy if possible.

Thank you!
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Your RAM and CPU specs really sound like gross overkill for your stated needs, even assuming you mean 16 TB disks rather than 16 GB. But your pool layout sounds highly redundant. You always trade off redundancy against available storage, but this should give you roughly 250 TB of storage (accounting for the 80% rule, around 200 TB usable) that would survive just about any reasonably-foreseeable harm short of destruction of the entire server.
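(The arithmetic: 2 vdevs × (11 − 3) data disks × 16 TB = 256 TB raw; times 0.8 for the 80% rule gives about 205 TB.)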
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
You won't have any parity issues with 11 disks in RAIDZ3, just do note that the suggested maximum vdev width is 12 drives. If you want even more safety you can use hotspares.
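
For reference, the shape of that layout at the command line would be something like the sketch below (pool and device names are illustrative; in TrueNAS you would build the pool through the GUI instead):

Code:
zpool create tank \
  raidz3 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 \
  raidz3 da11 da12 da13 da14 da15 da16 da17 da18 da19 da20 da21
# a hot spare ("spare da22") would need a 23rd drive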

Also it looks like you will need an HBA.

Anyway, that's what I call a hardcore gamer. Nerd badge earned :D
 
Last edited:
Joined
May 23, 2023
Messages
4
Yes, my apologies. Thank you for pointing it out.

Yes, it is 22 x 16 TB Exos.

Thank you, it is good to know that I am under the maximum of 12. What is the recommended block/sector size? The files will range from a lot of small games (DOS, Amiga, Commodore 64) up to 4.7 GB ISO images. Not sure which number would be the sweet spot.
 
Joined
May 23, 2023
Messages
4
You won't have any parity issues with 11 disks in RAIDZ3, just do note that the suggested maximum vdev width is 12 drives. If you want even more safety you can use hotspares.

Also it looks like you will need an HBA.

Anyway, that's what I call a hardcore gamer. Nerd badge earned :D

Yes, very hardcore ha. Thank you.

I have ordered an Adaptec 72405 24-port via eBay. It should arrive in about a week. Hopefully this device will be a good solution for my needs.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I have ordered an Adaptec 72405 24-port via eBay. It should arrive in about a week. Hopefully this device will be a good solution for my needs.
Most definitely not. The last time anyone put serious work into getting Adaptec hardware to behave properly, they gave up after much frustration and useless responsibility-dodging when driver bugs were pointed out.

There's a reason only LSI HBAs are recommended, and it's not love for LSI/Broadcom.
 
Joined
May 23, 2023
Messages
4
Most definitely not. The last time anyone put serious work into getting Adaptec hardware to behave properly, they gave up after much frustration and useless responsibility-dodging when driver bugs were pointed out.

There's a reason only LSI HBAs are recommended, and it's not love for LSI/Broadcom.

Ouch... I got the Adaptec 72405 at the lowest reasonable price I could find on eBay. That really explains it. I guess I can't go cheap and gamble away the redundancy. Thanks for shedding light on the Adaptec issues... good to know.

What are the best and most robust LSI-based 24-port HBA recommendations?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I got the Adaptec 72405 at the lowest reasonable price I could find on eBay.

Which was probably about a hundred bucks. And I can tell you why that is.

Lots of these adapters ended up in ESXi hypervisors that "run the world", but a few years ago VMware discontinued its Linux driver compatibility layer, and lots of devices that are otherwise pretty decent went on a massive fire sale as it forced many companies to "refresh" their hardware outside the normal leasing cycle. You can see for yourself:


Not supported by ESXi 7 or 8. And ESXi 6.7's end of support was late last year, so these devices, which probably retailed for north of $1K back in the day, are only of value to Windows hobbyists now. No demand. Prices do the capitalist thing and fall.

And the problem is that Adaptec was the hot controller of choice back in the day, when Adaptec provided documentation and technical assistance to driver authors to make drivers for their gear. But that all went away over time, and when Adaptec got bought out by PMC, they were relatively hostile to the open source community. This makes it hard to produce the sort of rock solid reliability that ZFS requires. It would be GREAT(!!!!!) to have a reliable second option for add-on HBA cards, but Adaptec isn't it.

There's a reason only LSI HBAs are recommended, and it's not love for LSI/Broadcom.

We have no love for Broadcom here. It's just that we hate everything else more. ZFS is cruel to you if your hardware isn't up to snuff. LSI HBA's are -- and even that requires you to have the specific right firmware.
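
Checking that is straightforward on the SAS2-generation cards (the newer SAS3 cards use sas3flash instead):

Code:
sas2flash -list
# you want "IT" firmware, and for SAS2 cards the version usually
# recommended around here is P20.00.07.00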
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
What are the best and most robust LSI-based 24-port HBA recommendations?

Why do you need a 24-port HBA? Wouldn't you be better off with an 8-port HBA and a SAS expander? Certainly much cheaper, and totally sufficient for HDDs.
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
This is a pretty beefy system we're talking about here for the workload you are describing. You could sell one of the CPUs and half your RAM, buy a good, solid, well-supported HBA, and be in a better position with money in your pocket.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Thank you, it is good to know that I am under the maximum of 12. What is the recommended block/sector size? The files will range from a lot of small games (DOS, Amiga, Commodore 64) up to 4.7 GB ISO images. Not sure which number would be the sweet spot.
If you mean the HDDs' sector size, I would say it's pretty indifferent, as both 512e and 4Kn work with TrueNAS. Since you are going to use an HBA or a backplane, though, I would say you need to go for the model with the SAS interface instead of the SATA one. It's just a matter of interface compatibility between the disks and the thing you plug them into.

If you mean record size as the dataset property, it depends on the files; I would say you could have a dataset for ISOs and another for games, with respectively 4k and a lower number more compatible with the small files you were mentioning. It's again a matter of analyzing your needs and setting up the software to fit those needs.
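
As a sketch of the mechanism only (dataset names and sizes are placeholders, to be picked per the reasoning above):

Code:
zfs create -o recordsize=1M tank/isos    # placeholder value for large sequential files
zfs create -o recordsize=16K tank/games  # placeholder value for small files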

I suggest you look into my signature to get a better understanding of ZFS, since it looks like you are completely new to it and there are a few key points that would be useful to understand.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
What is the best and robust LSi-based 24 ports HBA recommendations?
You do not need a 22+ port controller for your 22 drives.
The usual way to do it would be with backplane(s) and SAS expanders, connected to an LSI 2308 / 3008 -8i, or even -4i. You have not yet mentioned the case or chassis for your build.

Alternatively, the 8 SATA ports from the C621 are perfectly suitable for ZFS—unless the drives are SAS. Possibly even the further 2 ports from the ASMedia controller, although to keep with the oversized and over-engineered theme you may want to keep these for a redundant boot. So, even connecting all drives directly, you'd only need a -16i HBA, or a pair of -8i HBAs, as you have PCIe slots to spare.
 

Xlot

Dabbler
Joined
Jan 3, 2014
Messages
14
And the problem is that Adaptec was the hot controller of choice back in the day, when Adaptec provided documentation and technical assistance to driver authors to make drivers for their gear. But that all went away over time, and when Adaptec got bought out by PMC, they were relatively hostile to the open source community. This makes it hard to produce the sort of rock solid reliability that ZFS requires. It would be GREAT(!!!!!) to have a reliable second option for add-on HBA cards, but Adaptec isn't it.

The 7-series Adaptec devices work just fine as an HBA. I have had one passed through to a virtual instance of TrueNAS for about 3 years now. (I replaced a couple of 2308s as I needed a PCIe slot for a GPU to transcode. There are also normally 6x 512 GB SSDs attached, but they're sitting on my desk currently, pending time to upgrade to 6x 2 TB MX500 drives.)

Here's the camcontrol devlist output:

[attached screenshot: camcontrol devlist output]


I have direct native access to my drives through the 71605. Smartctl etc. all work just fine.
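
For anyone wanting to reproduce the check, it's just the standard commands (device names will vary):

Code:
camcontrol devlist      # every drive shows up as its own da(4) device
smartctl -a /dev/da0    # full SMART output, no vendor passthrough tricks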

Here are the BIOS settings (unlike the LSI devices, it's just a BIOS setting, not a reflash):

[attached screenshot: Adaptec BIOS settings]


Before passthrough of the controller to the TrueNAS instance, the drives all appear natively under Rocky Linux too as part of initialisation.

edit: The one issue with the Adaptec cards is they run HOT. I had to attach a fan directly to the card to keep the heatsink cool enough, despite it being installed in a Supermicro 16-bay 3RU case in my home rack, as I have the fans in the case set to run slower for noise reasons.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680

Please read points 4), 4a), 5), and 10) at a minimum.

Anyways, you've been warned against it. The Adaptec controllers have turned out to be a problem for users in the past in a variety of ways; there have been several reported pool corruptions and other undesirable problems. These problems do not always show up right away, and it is common for them to bite you down the road, at which point we may not be able to help you once the damage is done. ZFS does not have repair or recovery tools.

That said, I lean little-l non-batshit-crazy libertarian and respect your right to make your own mistakes. You've been made aware that this is not an acceptable controller by several folks here, so I feel like any ethical imperative to warn you has been sufficiently met.
 

Xlot

Dabbler
Joined
Jan 3, 2014
Messages
14
I did read the whole article - I just wanted to point out that the 71605 seems completely reasonable after 3 years of uptime and zero issues with the configuration I have, which includes the 2018 BIOS update for the card with a fairly extensive change log.

I'm also aware that you have a vested interest in being conservative from the controller standpoint given that you likely have a duty of care of sorts in terms of what you recommend, so I'm not going to wage war on the topic, although characterising my choices as mistakes seems a little off? (amusingly your article's point 9 contains a link about experimental cross flashing - my blog post from 2017 is the number 1 credit on that link).

I've always chosen the bleeding edge; you'll see from my minimal post history here that I'm a bit experimental - but I'm also a somewhat informed user with a cautious backup process and I don't jump in lightly to say "it works".

Here's a smartctl run through the same card:

(Yes, that is 8.5 years. Two disks in that RAIDZ2 array have been going that long; the new disks I added to the array when I switched to the 71605 have 2.78 years of uptime, so that's when I installed and configured it.)

[attached screenshot: smartctl output]


Hooroo!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I've always chosen the bleeding edge; you'll see from my minimal post history here that I'm a bit experimental - but I'm also a somewhat informed user with a cautious backup process and I don't jump in lightly to say "it works".

My article doesn't have sufficient artificial intelligence to analyze you and determine what your expertise level is, or what your risk tolerance level is. We haven't quite gotten to that level of forumware software just yet. And I tire of arguing with argumentative people who are just cocksure that they are right about their opinion about their awesome controller, or that RAID controllers are fine, or any of the other things that pop up on nearly a daily basis. There's a limited list of RAID and HBA cards out there. They've all been tried. With the exception of the LSI HBA's running very specific firmware versions, they all seem to have various issues.

What I can tell you about Adaptec is that people show up with Adaptecs and there have been instances of pool damage and/or loss. The reason we recommend the LSI HBA's is because iXsystems sells them with their TrueNAS hardware platform and has extensively tested them on many thousands of systems. The freebie crowd has tested them on many more than that; there's several billion problem-free aggregate run hours behind the LSI controllers. There just aren't enough Adaptec controllers out there to get that sort of reliability testing in any case. I'm not collecting stats on which specific Adaptec cards, firmware versions, etc., have problems. There's really little point, as LSI HBA's are easy to come by for bargain basement prices.
 