Caveats with an HBA upgrade... Before I cause chaos...

paulinventome

Explorer
Joined
May 18, 2015
Messages
62
Hi All,

I've been through the forum but not quite found the info to relax me.

I'm running out of SATA ports and currently have a 4-port SAS -> SATA HBA; the sensible thing is to swap it for an 8-port version.

I have a pool running 9x16TB drives; 4 of them are on the current HBA card and the rest are on the motherboard.

Ideally I would like to have 8 of these drives on that card, but that would mean unplugging them from the motherboard - and I have no idea how TrueNAS keeps track of drives. Would I basically destroy the RAIDZ2 pool by doing this?

Thanks for any pointers
Paul
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
No, ZFS does not care too much which disks are on which ports.

There is an exception to this: some USB enclosures hide disk serial numbers and may cause other problems.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
the sensible thing is to swap it for an 8-port version.

You could also use an SAS expander. We were just talking about these in another thread today. Since you've already got a 4 lane HBA, you could attach that to a 24-port SAS expander such as this one and gain a total of 20 lanes (4 get burned to attach to the HBA).

Ideally I would like to have 8 of these drives on that card, but that would mean unplugging them from the motherboard - and I have no idea how TrueNAS keeps track of drives. Would I basically destroy the RAIDZ2 pool by doing this?

If you've done everything via the TrueNAS GUI, you'll be fine. If you've followed dumbass instructions from random YouTube or webpages to do operations at the shell prompt, you might not be.

If you can go to the shell prompt, type "zpool status", and look for drives that do NOT have a gptid/ label, those will be a problem. Any gptid/ based drives can be moved around just fine because the system identifies them under /dev/ as /dev/gptid/${thegptid}. If you have any drives that are listed as a raw device name like da0 or ada0, that's the bad thing that MUST be avoided.
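For anyone who wants to make that check mechanical, here is a sketch that filters `zpool status` output for raw device names. The sample output (and the pool name "tank") is made up for illustration; on a real system you would pipe the actual command output into the awk filter instead:

```shell
# Sketch: flag any vdev lines in "zpool status" output that use raw
# device names (da0, ada1, ...) instead of gptid/ labels.
# Sample output is inlined here for illustration only; on a real
# system, replace the printf with:  zpool status | awk ...
zpool_status_sample='  pool: tank
 state: ONLINE
config:
        NAME            STATE   READ WRITE CKSUM
        tank            ONLINE     0     0     0
          raidz2-0      ONLINE     0     0     0
            gptid/abc1  ONLINE     0     0     0
            da3         ONLINE     0     0     0'

# Print only the device lines that are NOT gptid/-based.
printf '%s\n' "$zpool_status_sample" \
  | awk '$1 ~ /^(da|ada)[0-9]+$/ { print "raw device:", $1 }'
```

If this prints nothing, every vdev is gptid-labeled and you can rearrange cables freely.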
 

paulinventome

Explorer
Joined
May 18, 2015
Messages
62
You could also use an SAS expander. We were just talking about these in another thread today. Since you've already got a 4 lane HBA, you could attach that to a 24-port SAS expander such as this one and gain a total of 20 lanes (4 get burned to attach to the HBA).

If you've done everything via the TrueNAS GUI, you'll be fine. If you've followed dumbass instructions from random YouTube or webpages to do operations at the shell prompt, you might not be.

If you can go to the shell prompt, type "zpool status", and look for drives that do NOT have a gptid/ label, those will be a problem. Any gptid/ based drives can be moved around just fine because the system identifies them under /dev/ as /dev/gptid/${thegptid}. If you have any drives that are listed as a raw device name like da0 or ada0, that's the bad thing that MUST be avoided.

Everything has a gptid, so that looks good. Thanks for confirming this, it's a huge help.

I hadn't considered an expander. I'd actually ordered an 8i version of what I have, so I'll go with that to start with. But I was considering a 16i - 24 opens up possibilities!

Is there an overall bandwidth issue using an expander at 24 ports? I was even wondering whether there would be an issue with 8 'proper' ones, or 16... (It's PCIe 3.0 x8, I believe.)

Which also leads me to a follow-up question: is it better to move all the drives onto cards, or is a combination of motherboard SATA connections and PCIe SAS cards okay? Or should all drives in a pool be on the same card, for example?

Kindest
Paul
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Everything has a gptid, so that looks good. Thanks for confirming this, it's a huge help.

<tips hat>My pleasure.</>

24 opens up possibilities!

Yes

Is there an overall bandwidth issue using an expander at 24 ports? I was even wondering whether there would be an issue with 8 'proper' ones, or 16... (It's PCIe 3.0 x8, I believe.)

So PCIe is not generally where the bandwidth problem is. The LSI 2008 is very old tech and performs slowly, so that is its own particular kind of issue. Remember that an HBA is actually a small computer all on its own and has a finite capacity. That matters more than PCIe 3.0 x8 or any of that, in most cases.

But there's also a limited amount of bandwidth between the HBA and expander. If you use a 4 lane SFF8087 connection, that's 24Gbps (4 x 6Gbps) between the HBA and expander. This means you cannot be pushing more than that between the drives and the HBA. The single SFF8087 gets "crowded".

However, conventional hard drives top out at about 250MBytes/sec (or about 3Gbps), and that's only under the unusual condition where you're doing massively sequential access. If you get 12 drives going and connected via a 4-lane SFF8087 to a HBA, you could in theory be demanding up to about 36Gbps over a channel that can really only do 24Gbps. In practice this isn't a problem on a ZFS system, where you're likely to have some fragmentation that throws the brakes on things. It becomes more of a consideration when you go up to 24 drives.

But this being SAS, you can also up the expander's bandwidth to 8-lane wideport (2 x SFF8087 connecting to the HBA). So in practice, the use of SAS expanders isn't a problem with HDD's. I leave as an exercise for the reader why SSD's might still be problematic.

Which also leads me to a follow-up question: is it better to move all the drives onto cards, or is a combination of motherboard SATA connections and PCIe SAS cards okay? Or should all drives in a pool be on the same card, for example?

No such rules exist. A port is pretty much a port is a port, from the FreeBSD or Linux point of view. As long as it can talk to the drive reliably, you're good.

I will note that an HBA typically runs some percentage (not big usually) slower than a mainboard SATA port, so if you have stuff like SSD's, use mainboard SATA if available.
 