H330 or HBA330 for FreeNAS in ESXi with 4k sector disks

Status
Not open for further replies.

John_n4s

Dabbler
Joined
May 15, 2017
Messages
16
Fellow FreeNAS(ers)

My plan is to build an all-in-one ESXi system in which the latest stable FreeNAS will be run to manage several disks with the magic of ZFS.
I originally wanted to go for the famous Dell PERC H200 flashed to IT mode, but that one does not support 4K-sector disks (which are the future, and already a reality for most 8TB+ disks, if I understood correctly).
I then started looking at the H330 and HBA330. There are some posts out there claiming an H330 can be cross-flashed, but that all kinds of issues will arise.
Some say it is wiser to go for an HBA330, since it supports the type of passthrough that I am looking for.
I see some forum posts here and there, but none of them fully answer my question.
Could someone enlighten me as to which card(s) give guaranteed good passthrough for proper ZFS use with FreeNAS on my future ESXi build?
Thank you in advance.
Kind regards,

John
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Why are you looking at 12 Gb/s cards? Perhaps you should go over your concept in more detail, because I see no reason to spend extra for a 12 Gb/s card if you are connecting it to spinning rust.
 


John_n4s

Dabbler
Joined
May 15, 2017
Messages
16

Dear Chris,
Thank you for your reply. I should have added more information on why I was looking at 12 Gb/s cards.
I chose the 12 Gb/s card for its speed and capabilities, considering I want a future-proof setup, possibly with SAS disks in it instead of SATA (the rust?).
I can imagine the 12 Gb/s card is overkill at this point.
Personally, I am not experienced with flashing the firmware of these types of cards, and I read that the HBA330 is automatically in IT mode already, hence my original pick.

Question 1:
If you still suggest this SAS2308-chipset card is sufficient for 4× high-speed SATA disks (8 TB+), then I will gladly go with your pick.
Question 1a:
If so, could you tell me whether a mini-SAS to 4× SATA breakout cable will be sufficient?

Question 2:
I went through the guide from Stux, where he describes using an M.2 SSD with a 14 GB virtual disk and a USB mirror. I currently have a FreeNAS system running with two mirrored SSDs as the FreeNAS OS drive, which also gets scrubbed via a cron job. Is it a problem to do it the same way Stux does, considering the ZFS scrubs for the boot drives?

Thanks for your feedback!
Kind regards,

John
 


kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Question 1:
If you still suggest this SAS2308-chipset card is sufficient for 4× high-speed SATA disks (8 TB+), then I will gladly go with your pick.
Depending on how many disks you plan to use (with SAS expanders), this is SEVERAL times faster than any spinning hard drive, SATA or SAS.
Question 1a:
If so, could you tell me whether a mini-SAS to 4× SATA breakout cable will be sufficient?
If you only need 8 drives, yes. Back to the first question: 200 MB/s × 8 = 1,600 MB/s, and that's well under the roughly 6,000 MB/s the controller can handle. So again, unless you add a bunch of SSDs, don't worry about it.
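That arithmetic can be sketched out in a few lines. Note the throughput figures here are rough assumptions for illustration, not measurements, and the usable figure comes in a bit under the raw 6,000 MB/s once 8b/10b encoding overhead is accounted for:

```python
# Aggregate disk throughput vs. what a SAS2308 HBA can move.
# All figures are ballpark assumptions, not measurements.
drive_mbps = 200                       # sustained throughput of one spinning disk, MB/s
num_drives = 8
lane_mbps = 6 * 1000 * 8 // 10 // 8    # 6 Gb/s lane, 8b/10b encoding -> 600 MB/s usable
num_lanes = 8                          # the SAS2308 exposes 8 PHYs

disks_total = drive_mbps * num_drives  # 1,600 MB/s
hba_total = lane_mbps * num_lanes      # 4,800 MB/s usable

print(f"disks: {disks_total} MB/s, HBA: {hba_total} MB/s, "
      f"spare: {hba_total - disks_total} MB/s")
```

Even with every spindle streaming flat out, the controller has roughly 3 GB/s of headroom left over, which is the point being made.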
Question 2:
I went through the guide from Stux, where he describes using an M.2 SSD with a 14 GB virtual disk and a USB mirror. I currently have a FreeNAS system running with two mirrored SSDs as the FreeNAS OS drive, which also gets scrubbed via a cron job. Is it a problem to do it the same way Stux does, considering the ZFS scrubs for the boot drives?
I'm not sure I understand the question. One, can you link to the guide so I can understand what you're referring to? Two, a pool is a pool and can be scrubbed.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I'm not sure I understand the question. One, can you link to the guide so I can understand what you're referring to? Two, a pool is a pool and can be scrubbed.
I think he is talking about this:

Build Report: Node 304 + X10SDV-TLN4F [ESXi/FreeNAS AIO]
https://forums.freenas.org/index.ph...node-304-x10sdv-tln4f-esxi-freenas-aio.57116/
Thank you for your reply. I should have added more information on why I was looking at 12 Gb/s cards.
I chose the 12 Gb/s card for its speed and capabilities, considering I want a future-proof setup, possibly with SAS disks in it instead of SATA (the rust?).
Mechanical disks, 'spinning rust', can only go so fast because they are mechanical. The fastest internal transfer rate I have seen specs for, SATA or SAS, is 270 MB/s. This means that no matter how the disk connects to the server, it can't go faster than 270 MB/s, so disk drives could get twice as fast as they are now and still be slow enough for a 6 Gb/s controller to handle. Using 12 Gb/s controllers to run slow mechanical drives only makes sense if you have a whole lot of disks. If you are only ever using eight disks, there is no reason, not even future-proofing, to buy a 12 Gb/s controller. Here is why: think of how long you will be using the system before you replace or upgrade it. By the time you need a 12 Gb/s controller, the price will be half or less of what it is now, if you ever need one at all. They might come up with a whole different technology for interfacing disks by then, so why spend now on something you don't need?
I believe in buying what you reasonably expect to use, but there is no purpose in 12 Gb/s controllers for mechanical drives, not in small installations. They do make 12 Gb/s SAS SSDs that are horribly expensive and fast enough to make a difference, but that is a whole other question.
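The headroom argument above can be checked with quick arithmetic. The 270 MB/s figure is from the post; the 8b/10b encoding overhead on 6 Gb/s links is a standard assumption:

```python
# Usable bandwidth of one 6 Gb/s SAS/SATA lane vs. the fastest mechanical disk.
lane_raw_gbps = 6
lane_usable_mbps = lane_raw_gbps * 1000 * 8 // 10 // 8  # 8b/10b encoding -> 600 MB/s
fastest_disk_mbps = 270  # fastest internal transfer rate cited in the thread

# Even a drive twice as fast as today's fastest still fits in a single lane.
assert 2 * fastest_disk_mbps <= lane_usable_mbps
print(f"lane: {lane_usable_mbps} MB/s, "
      f"headroom: {lane_usable_mbps / fastest_disk_mbps:.1f}x")
```

So a single 6 Gb/s lane already has more than 2× headroom over the fastest spinning disk, which is why the 12 Gb/s generation buys nothing for small mechanical-disk builds.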
 

John_n4s

Dabbler
Joined
May 15, 2017
Messages
16
Depending on how many disks you plan to use (with SAS expanders), this is SEVERAL times faster than any spinning hard drive, SATA or SAS.

If you only need 8 drives, yes. Back to the first question: 200 MB/s × 8 = 1,600 MB/s, and that's well under the roughly 6,000 MB/s the controller can handle. So again, unless you add a bunch of SSDs, don't worry about it.

I'm not sure I understand the question. One, can you link to the guide so I can understand what you're referring to? Two, a pool is a pool and can be scrubbed.
Thanks for the reply and the answers so far!

About my last question: it is exactly the thread Chris referred to. In that thread you see how the author first creates a 14 GB virtual disk in ESXi as the disk FreeNAS is installed on, and then proceeds to add another (mirror) USB disk for bare-metal booting. My question is: considering the issues with ZFS and scrubbing, shouldn't all these disks be connected to the HBA to prevent errors with ZFS?
 

John_n4s

Dabbler
Joined
May 15, 2017
Messages
16
I think he is talking about this:

Build Report: Node 304 + X10SDV-TLN4F [ESXi/FreeNAS AIO]
https://forums.freenas.org/index.ph...node-304-x10sdv-tln4f-esxi-freenas-aio.57116/
Mechanical disks, 'spinning rust', can only go so fast because they are mechanical. The fastest internal transfer rate I have seen specs for, SATA or SAS, is 270 MB/s. This means that no matter how the disk connects to the server, it can't go faster than 270 MB/s, so disk drives could get twice as fast as they are now and still be slow enough for a 6 Gb/s controller to handle. Using 12 Gb/s controllers to run slow mechanical drives only makes sense if you have a whole lot of disks. If you are only ever using eight disks, there is no reason, not even future-proofing, to buy a 12 Gb/s controller. Here is why: think of how long you will be using the system before you replace or upgrade it. By the time you need a 12 Gb/s controller, the price will be half or less of what it is now, if you ever need one at all. They might come up with a whole different technology for interfacing disks by then, so why spend now on something you don't need?
I believe in buying what you reasonably expect to use, but there is no purpose in 12 Gb/s controllers for mechanical drives, not in small installations. They do make 12 Gb/s SAS SSDs that are horribly expensive and fast enough to make a difference, but that is a whole other question.
Thanks, Chris. Your tips are most useful. I will go with your advice concerning the HBA. I also added another question below in my reply to kdragon.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
My question is: considering the issues with ZFS and scrubbing, shouldn't all these disks be connected to the HBA to prevent errors with ZFS?
All your data disks should be connected to the HBA so the HBA can be passed through to the VM for direct access by FreeNAS. I am not sure why @Stux did the boot drives the way he did. Perhaps we will be lucky and he will chime in with answers for us.
 

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626
shouldn't all these disks be connected to the
Compared to M.2, or compared to USB?

Some or many here complain about the reliability of USB-attached storage.

I don't know about M.2 reliability; maybe someone else can answer, if that is your question...
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
My question is: considering the issues with ZFS and scrubbing, shouldn't all these disks be connected to the HBA to prevent errors with ZFS?
Nope. That's one of the amazing things about ZFS: it doesn't care. As long as it has "raw" access to the block device (SSD, SATA, NVMe, etc.), it doesn't care. Without reading his thread, I'm guessing he passed the USB controller to the FreeNAS VM or used an RDM (raw device mapping). The preferred way would be to use passthrough for the USB controller.
USB disk for bare-metal booting
I'll have to look at how he has that set up, because there's not much point in mirroring a mechanical or SSD drive to a USB drive. You would still need a datastore for the .vmx, .nvram, and other files.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I'll have to look at how he has that set up, because there's not much point in mirroring a mechanical or SSD drive to a USB drive. You would still need a datastore for the .vmx, .nvram, and other files.
It has been a long time since I read that thread, and I don't remember the details, but I think he set up a virtual disk in the ESXi datastore to boot FreeNAS from, and used FreeNAS to mirror that disk to a physical USB drive so he would still be able to boot FreeNAS 'bare metal' if something went wrong with ESXi.
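For anyone wanting to reproduce that layout, the underlying mechanism is an ordinary `zpool attach` on the boot pool. This is only a sketch: the device names below (ada0p2 for the virtual boot disk's ZFS partition, da0p2 for the USB stick) are placeholders for illustration, so check `zpool status` for your real ones before running anything:

```shell
# Inspect the current boot pool and the device backing it.
# FreeNAS names its boot pool "freenas-boot" by default.
zpool status freenas-boot

# Attach the USB stick's partition to the existing boot device,
# turning the single disk into a two-way mirror.
# Device names here are placeholders, not real ones.
zpool attach freenas-boot ada0p2 da0p2

# Watch the resilver; once it finishes, either half of the mirror
# holds a full copy of the boot pool.
zpool status freenas-boot
```

Note that a raw `zpool attach` only mirrors the ZFS data; for the USB stick to actually be bootable, it also needs the partition layout and boot code, which the FreeNAS web UI's boot-device mirror option takes care of for you.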
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
It has been a long time since I read that thread, and I don't remember the details, but I think he set up a virtual disk in the ESXi datastore to boot FreeNAS from, and used FreeNAS to mirror that disk to a physical USB drive so he would still be able to boot FreeNAS 'bare metal' if something went wrong with ESXi.
Yeah, OK, that makes sense. Now that you say it, it does sound familiar... It's hard to remember everything I read here. :confused:
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
All your data disks should be connected to the HBA so the HBA can be passed through to the VM for direct access by FreeNAS. I am not sure why @Stux did the boot drives the way he did. Perhaps we will be lucky and he will chime in with answers for us.

I had a few reasons:

1) Only six SATA ports.
2) Needed a datastore on the ESXi M.2 to boot FreeNAS from.
3) Wanted that mirrored so as to not get locked out.
4) Wanted to be able to bare-metal boot FreeNAS.
5) Kicks and giggles.

If you don't care about 3 and onwards, then there is no point; just go with the single ESXi datastore.

Ideally you'd use hardware RAID for your ESXi datastore. My system is too small for that.
 