Poor man's raidz expansion - also backup and test system

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
TL;DR: SAS drives are inexpensive on eBay

--

My 5-wide raidz2 was at 75% used, and rapidly getting to the point where a single 14TB disk wouldn't hold the data I cared about - everything with the exception of the PC backup dataset. I pondered moving it all to a 14TB external, then blowing away the pool and redoing it 8-wide.

That'd have been USD 260 for the external, and returning the external is a) a little iffy ethically and b) maybe not that easy once the packaging has been opened, depending on store policy.

Which is when I realized that I had already built my solution, or near enough. I had set up a raidz expansion test system in my desktop - could I expand that further and use it as a replication target so I could redo my main raidz2? Of course I could.

Assumption: There's a tower system, no matter how old, that has space for 8 drives in some way, shape, or form. It'll be used as a temporary replication target. Depending on its daily use, it may also become a permanent backup target, or a test system. Or it can be torn down again afterwards, to reduce the cost.

To stay under USD 260 and still have a working system - again, assumption is CPU/board/memory/PSU/case is already in place - I went for eBay SAS drives. That's the secret sauce, 2TB and 3TB SAS drives are being removed from server systems en masse and are quite affordable as a result.

HBA IT mode IBM M1015 or similar: USD 36
Intel NIC i210t: USD 23 (did not arrive in time, Realtek worked, but, probably a good idea just in case)
Cables SAS-to-SATA or SAS-to-SAS: Between 7.50 and 12, per cable; two needed. Let's call it 24 total on the high end.

Without drives: USD 83

Drives: 2TB SAS in 8-wide raidz1 gives me 11.8 TiB (just enough for me), 3TB SAS in 8-wide raidz1 would be 17.7 TiB
2TB goes for 17 to 20 bucks per drive, depending. I got six for 100 and then two more for 19 each. USD 138 total.
3TB goes for 24 to 26 bucks per drive, depending. Assuming 25 it's 200 for the drives. Slightly above the external, also more space.

As built with 2TB: USD 221, including shipping. Not bad.
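The usable-space numbers above can be sanity-checked with a one-liner. A rough sketch - the 0.927 overhead factor is my assumption, back-figured from the 11.8/17.7 TiB figures quoted above, not an exact ZFS constant:

```shell
# Estimate usable space for an N-wide raidz1 of equal-size drives.
# (n - p) data drives, decimal TB converted to TiB, then a ~7% haircut
# for ZFS metadata, padding, and slop (assumed factor, not exact).
drives=8; parity=1; size_tb=2
awk -v n="$drives" -v p="$parity" -v s="$size_tb" 'BEGIN {
    raw = (n - p) * s * 1e12 / 2^40
    printf "raw data capacity: %.1f TiB\n", raw
    printf "est. usable:       %.1f TiB\n", raw * 0.927
}'
```

With size_tb=2 this lands on 11.8 TiB, and with size_tb=3 on 17.7 TiB, matching the figures above.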

A 4x hot-swap cage that fits in 3x 5.25" bays, aluminum, I found for USD 85. Totally optional and a luxury, but it worked well for my case and allowed a "cleaner" drive arrangement. I could also have removed the GPU for the duration of the transfer, which would have let me use the second internal drive cage and avoid that cost.

I may keep the 2TB drives for raidz expansion testing, still thinking about that. Or put them up on eBay again to recoup USD 100 or so.

Steps taken to make it all work:
- Flash HBA in EFI shell or DOS, this may not be necessary depending on how it arrives
- Cable the drives. If they are just mounted in the case, use a SAS-to-SAS breakout (SFF-8087 to SFF-8482) with SATA power connectors; if they are mounted in a hot-swap cage, a SAS-to-SATA breakout. Yes, these hot-swap cages support both SAS and SATA drives, and yes, you can access a SAS drive as a SAS drive over a SAS-to-SATA breakout cable that connects to the hot-swap cage. It's all just connectors; the protocol doesn't get touched by anything but the drive and the HBA.
- Use a Linux boot to change the SAS drives from 520-byte to 512-byte sectors. This was necessary on 6 of them, not on the other 2; depends on where you get them.
- badblocks in Linux for all SAS drives to make sure they are okay. Some had a "grown defect list", some didn't. Good enough for my use case here. Could also have run badblocks in TrueNAS instead.
- Boot into TrueNAS, make a raidz1 pool, set up replication, and move data over to temporary system
- While all of the above happens, on source TrueNAS, badblocks the additional 3 drives.
- When replication is complete, verify that all data came over, verify again; verify a third time.
- Blow away the source raidz2, redo as 8-wide
- Replicate "the other way" - back to my main TrueNAS system

Between badblocks (a week) and replication both ways (6 days, so roughly another week), the whole thing took two weeks. It's not as neat as actual raidz expansion, but it got the job done, and I could do it now, before my data grows to the point where 2TB drives wouldn't do the trick any more.

And I have a full-blown test system. That could become a backup system, but it's actually my daily driver desktop, so it won't become a backup system until I allow myself an SFF-PC upgrade sometime 2022 :).

Edited to add: This basic recipe could also be a main "starter" TrueNAS. Sure no ECC, but, where cost is the main factor, that's a choice people make anyway. Between 8 and 12 drives, as raidz2 or raidz3. If over 8 drives, an inexpensive SAS expander can be stuck into one of the PCIe slots, making it possible to address 12 (or more) drives easily. Any tower system with lots of 5.25" space should be able to handle 12 drives easily, by adding hot-swap cages as desired. Maybe even 15 or 16, depending.
 
Last edited:

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
OK, so where are the photos? Kidding apart, I'm curious to know just how old those SAS drives were. Your post got me thinking about whether it was worth rebuilding the old AMD ECC PC that I first used FreeNAS on. All the parts are present except drives, and I have a tested but unused 9211-8i HBA. In the UK, SAS drives on fleabay go for around £100 for 8TB (4x 2TB, 2x 4TB, 3x 3TB). These are typically Seagate Constellation or HGST Ultrastar, but they are old - 2013-2015. A SAS-to-SAS breakout (SFF-8087 to SFF-8482) with SATA power connectors is around £15.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
These are old drives, 2013 or so. They seem to be EMC pulls, hence the 520-byte format.

For a backup or test system raidz1 is fine. For production with something like this, I think raidz2, maybe even 3, or 3-wide mirrors, but mirrors are unlikely for home use.

Drives are around 40°C in my case, with a single 120mm fan blowing over the bottom four, and a single fan in the 5.25" cage. A little hot, but not crazy by any means for 7200rpm.

Pics or it didn't happen, I get it. Not that there's much to see. 5.25" hotswap cage, and the mess that's the inside of the desktop. 4 drives in the bottom cage, and an SSD that's the Windows/Linux drive just kinda hanging out there. That DVD drive has lost power; I'll need to hunt down my molex-to-SATA adapter to power it again. Or hit Corsair up for another SATAx4 power cable for my modular PSU in there.

Guts are a 3rd generation Intel i5, with 32GiB non-ECC DDR3. The GPU hardly sees any use, it's a GTX1080. There was gaming going on at some point. Full-size ATX board, no mATX frippery in this case :). I think I have six SATA ports, so if it became dedicated to TrueNAS, more than 8 drives could be a thing, though the mounting for those gets a little tricky eventually - a different case that's built for tons of 5.25" bays would be a better choice there.

Attached: TrueNAS-Cage.jpg, TrueNAS-Inside.jpg
 
Last edited:

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
Don't know what brand of SAS drive you're now using, but the HGST HUS723020ALS640 drives on UK fleabay are nearly all marked with EMC part numbers, so they would be 520b format. Either that or they are ex-HP server kit with who knows what firmware on the drives. What kind of power-on hours did you end up with? I'd hope for 40k or less. I've not used SAS drives before, but there are plenty of references on how to do the 520b-to-512b change.

I've just noticed a UK fleabay sale a few mins ago of "10 x Seagate 2TB 7.2k SAS 6G 3.5 Hard Drive HDD ST2000NM0001" for £83 inc P&P. Hmm, I wonder if that's a bargain or junk.

I can see you had a bit of a case challenge. Stupid me put my last desktop build (2012!) in a Fractal R3 case, leaving the backup PC in an Antec case. Not sure I can now summon up the energy to swap the desktop to the old Antec case and re-use the Fractal R3 for a FreeNAS backup/test rig. Anyway, thanks to your first post I have some more ideas for this kind of project.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
I am using:
- Seagate ST2000NMCLAR2000 (3 of): no power-on hours reported, no grown defect list on any of them, was 520b
- Seagate ST32000444SS (3 of): no power-on hours reported, grown defect list on all of them, was 520b
- HGST HUS72302CLAR2000 (2 of): 31k hours, no grown defect list on any of them, came as 512b

For your Seagate 2TB: Probably a little bit of both. If it's described as tested and working, you can badblocks them and return any that fail. Might be some have a grown defect list, so they'll likely fail sooner rather than later. Depending on what you want to use them for, that may be acceptable at that price.

Changing to 512b is dead easy on Linux. Check whether the drive reports 512-byte logical blocks, and reformat it if it doesn't:

Code:
sudo sg_readcap /dev/sgX                     # reports current block count and logical block size
sudo sg_format --format --size=512 /dev/sgX  # low-level reformat to 512b; wipes the drive and takes hours


Those are from sg3-utils, which brings in libsgutils2. I installed those as .deb files; I'm not sure whether there's an Ubuntu repo for it.
 
Last edited:

elorimer

Contributor
Joined
Aug 26, 2019
Messages
194
- Boot into TrueNAS, make a raidz1 pool, set up replication, and move data over to temporary system
- While all of the above happens, on source TrueNAS, badblocks the additional 3 drives.
- When replication is complete, verify that all data came over, verify again; verify a third time.
- Blow away the source raidz2, redo as 8-wide
- Replicate "the other way" - back to my main TrueNAS system
I have two 8TB drives on the way that I was planning on substituting in for 2 of the 4TB drives in one of the vdevs in my mirror pool, with two resilverings, but I'm thinking I might do it this way instead, as I have something weird going on with permissions in my pool. I can do it within the same case. Do you know, when the replications over and back are done, is it just the data that transfers, or are the permissions associated with the dataset and folders within the dataset also transferred? I'm looking for a fresh start.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
I think it'll replicate that over, since replication is block-level. There is an "include dataset properties" option, which can be unset, but I don't think that'll blow away ACLs.

For that, either recursively strip them, or rsync back and forth.
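For the "recursively strip them" option, a minimal sketch, assuming a SCALE/Linux-side system with POSIX ACLs and a hypothetical /mnt/tank/dataset path (FreeBSD's setfacl on CORE uses different flags for its NFSv4 ACLs):

```shell
# Remove all extended ACL entries recursively, leaving plain Unix
# owner/group/other permissions behind.
#   -R : recurse into the dataset
#   -b : remove all extended ACL entries
setfacl -R -b /mnt/tank/dataset

# Spot-check: with --skip-base, getfacl only lists files that still
# carry extended ACL entries, so ideally this prints nothing.
getfacl -R --skip-base /mnt/tank/dataset
```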
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Thank you! I had WCE=0 on all. That would have sped up badblocks no doubt :).
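For reference, checking and flipping that write-cache bit can be done with sdparm. A sketch, with /dev/sgX as a placeholder; --save writes the setting to the saved mode page so it survives power cycles:

```shell
sudo sdparm --get WCE /dev/sgX            # WCE 0 means write cache disabled
sudo sdparm --set WCE=1 --save /dev/sgX   # enable write cache, persistently
```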
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Turns out this is rather timely--I'm wanting to build a pool for VM storage, calling for striped mirrors, the more the better. And I figured old used drives might be a bit of a drawback, but it's cheap enough to buy a couple of spares and have them burned in, tested, and sitting on the shelf (or even in a slot in the chassis, just not plugged in) for the inevitable failure. Found a vendor, made an offer for 10 x 3 TB disks, and they arrived the next day. Nicely packaged, clean, and already in Supermicro drive trays (which was convenient, as that's where I was going to install them). Great. I'm going to guess that all of them came from the same system, because they all have exactly the same runtime--and that's over 61k hours, so they're fully depreciated.

Problem is three of them are DOA. Mechanically so--you can hear the noise when they try to spin up. The others (less one) have passed long SMART tests and are running badblocks now. Let's see how the vendor deals with this.
 