daddysworkisneverdone
Cadet
- Joined
- Feb 27, 2019
- Messages
- 5
So, new user on the forums. I've trawled through over the years and even toyed with virtualized FreeNAS a few times, but I thought I'd throw this out as a serious question for those with lots of FreeNAS experience. I'm not even sure this is the right place to ask, so you won't hurt my feelings if you need to move this to another spot.
I know the ruling principle is "don't use hardware RAID", but please hear me out first. I'm hoping some of the smart folks here can break the idea down and either destroy it entirely with an explanation, or discuss it, because I think it may be a cheaper solution for us smaller-scale users. And I'm stressing small scale. I realize hardware RAID doesn't scale, and that's probably the biggest advantage of software-defined storage; and yes, I realize ZFS was the original and likely still the best, with all the newer players following (some poorly) the same basic constructs. But SDS doesn't seem to scale down very well in either performance or price.
I'm currently running a used Dell-branded LSI RAID card, an H700: 512MB of battery-backed cache, breakout cables to 8x 4TB SATA drives. These cards are cheap now, $100 to $150 (or move to the newer H710 at that price); the cache is battery-protected (capacitor-protected on the higher-end models), and the battery is readily available and easy to replace. A low-power Core i3 with 16GB of ECC RAM is all the CPU and memory it needs. I could actually get by with less RAM since the hardware RAID card does all the heavy lifting, but I run some VMs on it as well.
I know hardware RAID can't "resilver" the data and you'll have bitrot issues, but for years now LSI has shipped a command-line tool that has the card do much the same thing. LSI calls it a consistency check. This is not comparing a given drive sector's data to its checksum; that's a patrol read, and it happens automatically. The consistency check actually compares the full RAID stripe data against the parity across the stripe set and corrects any errors found. It's not an automatic process; it must be invoked with the tool and scheduled via cron or Windows Task Scheduler. But it can be done, and it works very well.
The Dell cards seem to sell cheaper (I don't know why; volume, maybe?). With Dell's tool, the command is something like:
omconfig storage adapter=0 virtualdisk=0 action=checkconsistency
Any LSI card or rebrand of one supports this, provided it's an actual RAID card and not a plain HBA. The exact command varies by OEM rebrand; even Intel sells them!
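To make that consistency check a hands-off routine, the command above can just be dropped into a crontab entry. A sketch, assuming Dell's OpenManage `omconfig` is installed at the usual path; the path and the adapter/virtualdisk IDs are placeholders you'd need to confirm on your own box:

```shell
# Run a consistency check on virtual disk 0 every Sunday at 02:00.
# Path and IDs are assumptions -- verify the omconfig location and list
# your actual adapter/virtualdisk IDs before trusting this entry.
0 2 * * 0 /opt/dell/srvadmin/bin/omconfig storage adapter=0 virtualdisk=0 action=checkconsistency
```

The check runs in the background on the card itself, so scheduling it off-hours mostly matters for avoiding the I/O contention during the scan.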
With that out of the way, I can get pretty much the same data integrity at a much lower price, without needing a monster-priced Xeon and tons of RAM.
There are some advantages too. I'm not limited to a single vdev's worth of throughput: an 8-drive RAID 6 (I know that's not an ideal drive count) gives me 6 drives' worth of read IOPS and 6 drives' worth of write IOPS (minus the RAID 6 write penalty). What would I need to match that with ZFS or another SDS type? Six vdevs? Or enough ZIL, ARC, and L2ARC to survive the bursts? Not knocking it at all; it's just out of my price range to have that many drives, that much CPU, and that much RAM.
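The back-of-envelope math here can be made concrete. Assuming a made-up round figure of 150 random IOPS per 7200rpm SATA drive, and the classic RAID 6 write penalty of 6 (each random write costs three reads and three writes: old data plus both parities in, new data plus both parities out), a sketch:

```shell
# Back-of-envelope IOPS for an 8-drive RAID 6 array.
# per_drive=150 is an assumed round number for a 7200rpm SATA drive.
per_drive=150
drives=8
parity=2
write_penalty=6      # RAID 6: 3 reads + 3 writes per random write

read_iops=$(( (drives - parity) * per_drive ))
write_iops=$(( drives * per_drive / write_penalty ))
echo "approx read IOPS:  $read_iops"
echo "approx write IOPS: $write_iops"
```

With these assumptions that works out to roughly 900 read IOPS and 200 write IOPS, which shows how steep the RAID 6 write penalty really is.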
What if I run FreeNAS and ZFS on top of that hardware RAID card? Present a single virtual disk to FreeNAS and create the ZFS equivalent of a RAID 0 on it. Let the LSI chip handle the RAID (and there's a BSD toolset to call consistency checks regularly to handle bitrot). Now I can run a lower-cost Core i3 and less RAM.
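From ZFS's point of view, that layering is just a single-disk pool. A sketch of what it would look like, assuming the H700 exposes its virtual disk as `/dev/mfid0` under FreeBSD (the device name will vary; check `camcontrol devlist` or the FreeNAS UI for the real one):

```shell
# Build a pool on the one virtual disk the RAID card presents.
# /dev/mfid0 is an assumed device name for an LSI MegaRAID volume on FreeBSD.
zpool create tank /dev/mfid0

# ZFS still checksums every block, so a scrub will *detect* corruption,
# but with no redundancy at the ZFS layer it cannot *repair* it --
# repair would be entirely up to the card's consistency check.
zpool scrub tank
zpool status -v tank
```

That detect-but-not-repair asymmetry is the crux of the question: ZFS would flag bad data as unrecoverable errors rather than silently healing them.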
Why bother with FreeNAS then? The point-in-time snapshotting is awesome; no other way to put it. MS isn't quite there yet, and then I'd also need a Windows license. Win10 would be OK for this type of small NAS, but it'd only be a NAS; there's no iSCSI target on the desktop OS. The StarWind iSCSI target software stinks IMO, so I don't like that as a solution.
Linux might be OK, and the price is right, but I still don't know of anything there that approaches ZFS snapshotting. Maybe there is and I'm just not aware of it.
So my general thought here is that I could have my cake and eat it too: get the great point-in-time snapshot function of ZFS and the small-scale performance of hardware RAID. Having a decent iSCSI target built in is a nice bonus; it's handy for toying with ESXi. NFS works too, I know, and FreeNAS has that as well.
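For anyone unfamiliar, the snapshot workflow being praised here is only a couple of commands (the dataset name `tank/vms` below is hypothetical):

```shell
# Take a point-in-time snapshot of a dataset and list what exists.
# "tank/vms" is a hypothetical dataset name for illustration.
zfs snapshot tank/vms@before-upgrade
zfs list -t snapshot -r tank/vms

# Roll the dataset back to the snapshot (discards changes made since):
zfs rollback tank/vms@before-upgrade

# Or stream it to another box for a cheap backup:
zfs send tank/vms@before-upgrade | ssh backuphost zfs receive backup/vms
```

Snapshots are copy-on-write, so taking one is near-instant and costs space only as the live data diverges from it.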
Please help me see the flaw(s) in my idea and understand why it's a no-go, or share any insight if you've seen it tried before.