Will upgrading my SAS controller automagically let FreeNAS "see" the extra space on my drives?

keboose

Explorer
Joined
Mar 5, 2016
Messages
92
My current setup is eleven 1TB drives in a RAIDZ3 array, with one hot spare. I am slowly replacing each drive with a 4TB IronWolf drive to increase my capacity. I didn't realize that my SAS card (an LSI SAS3081E) is old enough that it cannot address more than 2TB per drive. Luckily, it is new enough that the drives appear as 2TB, and not as 512 bytes the way even older cards may report them.

Once I realized this, I immediately went and bought a SAS 9211-8i card on eBay to replace my old card, but I have a question: once I replace the card and upgrade its firmware, will FreeNAS (11.2) immediately recognize the drives as 4TB and use all the space?

Right now I'm still putting in the larger drives at a rate of about one per day (it takes a while to resilver each time). The system sees them as 2TB, so presumably once I replace ALL the drives, the pool will expand to roughly double the capacity it had with the 1TB drives. When I install the new card, will FreeNAS see all the extra space and expand the pool again?
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
I notice that your card is a MegaRAID, and that you say it's running IT firmware; with the exception of autoexpand=off, yes, it should just work.
It would, however, be HIGHLY recommended to ensure you have a working backup before doing so.
 

keboose

Explorer
Joined
Mar 5, 2016
Messages
92
I didn't realize that 'autoexpand' was even a flag that could be changed. I ran the command zpool get autoexpand [volume] and verified that it is set to "on", so it looks like I should be good to go, as long as I can successfully flash the IT firmware to my new SAS card. My main worry was that FreeNAS would freak out because the drives would have the same hardware IDs but different capacities.

As for backups, I replicate all my datasets to another FreeNAS box I set up at a relative's house, so I should be good on that front. Thanks for helping me clear this up!
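For reference, a minimal sketch of checking and, if needed, enabling the property (substitute your own pool name for "tank", which is assumed here):

```shell
zpool get autoexpand tank      # shows the current on/off setting
zpool set autoexpand=on tank   # expand automatically once every device in a vdev has grown
```

Note that autoexpand only takes effect once ZFS can actually see the larger device, so it doesn't help while the old controller caps the drives at 2TB.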
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
I don't imagine anything would "freak out", but as with anything, it's possible something could go wrong. The only thing I can think of is that the partitions might need to be changed. You might want to see if you can find anyone else who has done this before taking the plunge with live data.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
My main worry was that FreeNAS would freak out because the drives would have the same hardware IDs but different capacities.
If you have to expand the partitions on the drives that were only recognized as 2TB, follow this walkthrough: https://www.ixsystems.com/community/threads/expand-resize-single-disk.1660/post-7178
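Before resizing anything, it's worth confirming whether the partitions themselves are undersized. A quick inspection sketch, assuming the disks show up as da0, da1, and so on on your system:

```shell
gpart show da0       # partition table: compare the data partition size to the disk size
zpool list -v        # per-vdev sizes as ZFS currently sees them
camcontrol devlist   # confirm the controller now reports the drives' full capacity
```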
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
You might be better off putting in the new controller *before* replacing the remaining disks; the new disks would then be sized correctly from the start and wouldn't require any of that pool-nuking-potential list of commands.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
What makes you think that expanding a partition would be a pool-nuking procedure?
I usually see the worst possible outcome and try to work backwards from there to prevent it. Running a bunch of command-line operations, which are fully capable of all kinds of damage if mistyped, *particularly* for a relative terminal newb, introduces risks that things like the GUI are intended to avoid.
I was also exaggerating for effect.
 

keboose

Explorer
Joined
Mar 5, 2016
Messages
92
I'm glad everyone here is as cautious as I am; I would like to avoid re-making my pool if I can. But as I said, I have replicated all my data to another PC, and I am also going to copy all my "big" data (media/documents) to a USB HDD attached to my desktop PC before I replace the card, so if I DO nuke something by accident, all I will have to replicate from the remote backup is my iocage jails.

However, the eBay seller I bought from is on form, and the new card is due in sometime next week. I won't have all the 4TB drives in by then, which brings up an oddity that I don't know what to make of. Replacing one of the disks takes about 12 hours, which I would accept if I took the old drive out first. As it stands, when I replace a drive, both the old and new drives are in the machine, so I would think the system could just mirror the old drive's contents to the new one, but it takes so long that I believe the system is actually rebuilding from the other drives.

Could that be the case? I know my current SAS card is old, but it still offers 3Gbit/s, so assuming both drives are attached to it, that would be 1.5Gbit/s for each drive, which (after encoding overhead) comes out to about 150MB/s, or under two hours to transfer 1TB of data. Even accounting for overhead from other processes, the resilver time is several times longer than I would expect.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
The kind of direct clone you are talking about might only apply to mirrors (and I'm not even 100% sure about that; ZFS's algorithms are complex). A raidz has to recalculate parity to resilver, and resilver speed is affected by more than just controller bandwidth. Mainly, the disk write speed is the primary bottleneck, with any overhead that exists elsewhere able to slow it beyond that. ZFS also writes a ton of metadata, mainly the checksums, and all of that takes time on platter disks.
 

keboose

Explorer
Joined
Mar 5, 2016
Messages
92
Thank you for clearing that up for me. Since my SAS card is fully populated with 8 drives, it would make sense that resilvering takes so long.

When I install my new card, I'll make sure to fully populate all the SATA ports on the board first, then put the remainder of the drives on the SAS card, to mitigate this bottleneck in the future.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
Since my SAS card is fully populated with 8 drives, it would make sense that resilvering takes so long.
That's not correct. The primary limit is the write speed of each drive itself, of the physical platters and head mechanism, not the SATA/SAS link.
The old card has 3Gb/s per channel; that's per hard drive when direct-attached (no expanders). 8 drives would be 24Gb/s max theoretical.
The 9211 (SAS2) is 6Gb/s per channel. The controller's connection to the rest of the system is PCIe 2.0 x8, which is 40Gb/s, or 5Gb/s per drive with every drive running absolutely pinned 100% of the time. 8 drives would be 48Gb/s max theoretical.

You realistically can't max the SAS2 bandwidth with anything but SSDs; platters are just too slow, relatively. Your listed motherboard only has 2 SATA 6Gb/s ports, so you will get the most speed from the SAS card. The number of ports, however, means you will have at least one drive on the 3Gb/s ports, so your pool will be limited to 3Gb/s overall anyway. You could get an expander to bypass that, but your platters will likely have trouble saturating even 3Gb/s, so YMMV.
 

keboose

Explorer
Joined
Mar 5, 2016
Messages
92
That's fascinating! I had no idea I was buying such an incredibly high-throughput interface for less than 50 USD. I won't bother re-organizing my drives when I replace the card, then.

According to The Internet™, average sustained read/write speeds for my new drives are both about 150MB/s, so they wouldn't even saturate a 1.5Gbit/s first-gen SATA interface. When the new card goes in, I will make sure none of the extra drives are on the SATA3 ports on the motherboard (I'm upgrading my boot storage to SSDs; they need those ports more.)
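Rather than trusting The Internet™, those sustained speeds can be measured directly. On FreeBSD/FreeNAS, diskinfo has a built-in transfer-rate test (the device name da0 here is an assumption; it reads the raw device, so run it when the pool is idle):

```shell
diskinfo -t /dev/da0   # benchmarks transfer rate at the start, middle, and end of the disk
```

Outer tracks are noticeably faster than inner ones on platter drives, which is why the test samples several zones.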
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
my boot storage to ssd's, they need it more.
Your boot drives' link speed won't matter; they don't get written or read anywhere remotely close to enough for speed to matter. USB 2.0 speed is more than sufficient for boot drives, after all. The main advantage of SSDs as boot drives is reliability: USB drives are notorious for dying with no warning, not doing proper wear leveling, having no SMART, etc.
Not sure which ports you mean, but realistically it doesn't matter much. Ideally, use the SAS2/SATA3 (6Gb/s) and SATA3 (6Gb/s) ports for data first, the SATA2 (3Gb/s) ports for boot, and whatever is left for whatever, but as you noticed the drives can barely use even SATA1 (1.5Gb/s).
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,449
Lots of misconceptions about interface speed here.
3Gbit/s or 6Gbit/s is only the data rate on the SATA port; it says nothing about platter throughput, which is what actually matters.
3Gbit/s is equivalent to about 300MB/s once you account for 8b/10b encoding (and ignore latency).
What makes HDDs slow is their inherently high latency, which is primarily caused by the mechanical properties of the drive itself.
Resilvering is highly CPU- and RAM-intensive. 1 disk versus 8 or more disks only increases the workload on the CPU; the SATA port couldn't care less, as everything is most likely DMA-based.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
A safe way to go would perhaps be to install both cards and migrate a single drive to the new card.

If your pool comes up with no issues, then you can migrate the remaining drives.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
Resilvering is highly CPU and RAM.
Per @Chris Moore, this is only true on the lowest-performing servers, and is mostly negligible; the disk write speed is the bottleneck for resilvers unless you are running a potato. Getting the data onto disk is the slowest HDD operation, particularly with raidz vdevs: the read speed is a partial aggregate of all the disks being read from (which is all of them), but the data can only be written as fast as the new disk can write it.

Interface speed, as you remark, is still pretty meaningless though.
 

keboose

Explorer
Joined
Mar 5, 2016
Messages
92
So I got the new SAS card early in the week, but I held off installing it until all the drives were replaced, to see what would happen. I made sure to flash the latest IT firmware before attaching the drives.

As I feared, the pool did NOT automatically expand to use all the space on my drives. I only have ~15TB available when I should have about twice that. Is there any way to force it to re-configure without destroying the pool and re-creating it?
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,449
If your 1TB drives are unused, I would create that array again and replicate to it. After that, break your 4TB disk array, create it fresh, then replicate the entire contents back. It will be much faster than replacing one disk at a time.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
Offline the pool and expand the 2TB partitions that were created because of the old card. Just follow the commands in the link I posted earlier.
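For orientation, a hedged sketch of that per-disk procedure, assuming a pool named tank, a disk da0 whose ZFS data partition is index 2, and gptid labels (all placeholders; verify your actual device names, partition indices, and labels with gpart show and zpool status first, since these commands modify partition tables):

```shell
# Repeat per disk, ONE at a time, with the pool healthy and a backup in hand.
zpool offline tank gptid/<id-of-da0p2>    # take just this disk offline
gpart recover da0                         # move the backup GPT to the disk's new end
gpart resize -i 2 da0                     # grow partition 2 into the freed space
zpool online -e tank gptid/<id-of-da0p2>  # rejoin and expand to the new partition size
```

The -e flag on zpool online is what tells ZFS to use the enlarged device; wait for the pool to resilver and show ONLINE before moving to the next disk.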
 