BUILD 40GbE Questions

Joined: Mar 22, 2016 · Messages: 217
Hello all! Hopefully someone can help with a configuration for a 40GbE FreeNAS build.

Server:
SuperServer 2028U-TN24R4T+
CPU: 2x E5-2623 v3 or 2x E5-2689 v4
RAM: ???
Drives: 4x 1.2TB P3520 NVMe drives
HBA: LSI 9207-8e
NIC: ???

Right now I have FreeNAS virtualized on an ESXi host running 2x E5-2689 v4 with 256GB of RAM. I have all 4 of the PCIe switches passed through to the FreeNAS VM and can pull the SMART stats for the NVMe drives, and the LSI 9207 is passed through as well. The VM has 4 cores and 32GB of RAM. All 4 of the drives are striped (I know) and it absolutely crushes 10GbE right now with iSCSI; transfers are pegged at 1.2GB/s. SMB shares are rather lackluster at 900MB/s reads and 650MB/s writes.
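For reference, a rough back-of-the-envelope check (a Python sketch with illustrative numbers) shows the iSCSI transfers are essentially at the raw 10GbE line rate, while SMB is leaving some on the table:

# Rough sanity check: observed transfer rates vs. raw 10GbE line rate.
# Real ceilings are somewhat lower once protocol overhead is accounted for.
line_rate_mb_s = 10 * 1000 / 8          # 10GbE ~= 1250 MB/s before overhead

observed = {"iSCSI": 1200, "SMB read": 900, "SMB write": 650}   # MB/s, as above

for name, rate in observed.items():
    print(f"{name}: {rate} MB/s ({rate / line_rate_mb_s:.0%} of raw line rate)")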

So the question is, as I move to 40GbE: I've read that virtualizing FreeNAS for 40GbE is not going to happen, so I decided to take the chassis and make it a bare metal install. I'd rather not give it 20 cores and 256GB of RAM, as that seems excessive considering the NVMe drives are screaming fast, but would the system need that? I have a feeling 2x E5-2623 v3 (quad cores clocked at 3.0GHz) should be fine along with 32GB of RAM. Looking for suggestions on what to go with here. The system will eventually have 24x 1.2TB P3520s in 6 RAID-Z1 vdevs. It will be a read-heavy application, entirely for VM storage.
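For what it's worth, a rough sketch of what that layout works out to (assuming 4-wide RAID-Z1 vdevs and ignoring ZFS overhead, so the real usable figure will be somewhat lower):

# Planned pool: 24x 1.2TB P3520 in 6x RAID-Z1 vdevs (assumed 4 drives per vdev).
drive_tb = 1.2
vdevs = 6
drives_per_vdev = 24 // vdevs               # 4
data_per_vdev = drives_per_vdev - 1         # RAID-Z1: one drive's worth of parity per vdev

usable_tb = vdevs * data_per_vdev * drive_tb
print(f"Usable before ZFS overhead: ~{usable_tb:.1f} TB")   # ~21.6 TB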

The other question is what NIC to go with? The Chelsio T580-SO-CR seems to be the likely choice, but I'm open to suggestions as this is completely new territory for me. If anyone has any tuning suggestions as well, I'm totally open.
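On the tuning side, before changing anything I'd probably just look at the stock socket-buffer ceilings on the box. A small sketch (these are standard FreeBSD sysctl names; whether and how far to raise them for 40GbE is exactly the kind of tuning advice I'm after):

# Read (not change) the FreeBSD/FreeNAS network tunables most often discussed for fast links.
import subprocess

tunables = [
    "kern.ipc.maxsockbuf",        # maximum socket buffer size
    "net.inet.tcp.sendbuf_max",   # ceiling for TCP send buffer auto-tuning
    "net.inet.tcp.recvbuf_max",   # ceiling for TCP receive buffer auto-tuning
]

for name in tunables:
    value = subprocess.run(["sysctl", "-n", name],
                           capture_output=True, text=True).stdout.strip()
    print(f"{name} = {value}")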

Thank you in advance!
 
Joined: May 3, 2017 · Messages: 2
First... you suck...
Second... see First.

Just kidding. I bought an old EMC S200 off eBay with a dual-port 10Gb card that I am running in an LACP port channel. Perhaps consider running an LACP-balanced port channel before going to 40Gb? Especially if you want to run virtualized hardware. Just my $0.02 as a third poster.
 

joeschmuck
Old Man · Moderator · Joined: May 28, 2011 · Messages: 10,994
I've read that virtualizing FreeNAS for 40GbE is not going to happen.
While I'm not up to speed on all things FreeNAS and ESXi, what says you cannot go 40GbE? Wouldn't this be a limitation of ESXi?

I decided to take the chassis and make it a bare metal install. I'd rather not give it 20 cores and 256GB of RAM, as that seems excessive considering the NVMe drives are screaming fast, but would the system need that?
As for CPU cores and RAM, that depends on how you use your system. I don't think you need 20 cores either; assuming you remain on ESXi, try 4 cores. As for RAM, you may say that your drives are screaming, but if you are running VMs, then giving FreeNAS a lot of RAM will allow frequently used data to be cached in RAM and thus be significantly faster. This is something you will need to play around with, but if you can keep all the RAM, I'd do it. You suggested using 32GB of RAM, but if you are hosting multiple VMs, then you will want all the RAM you can throw at it.
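One way to see whether the extra RAM is actually paying off is to watch the ARC hit ratio once your real VM workload is running. A quick sketch that reads the standard FreeBSD ZFS counters (run it on the FreeNAS box):

# Rough ARC effectiveness check: hit ratio since boot, from the arcstats kstat counters.
import subprocess

def arcstat(name):
    out = subprocess.run(["sysctl", "-n", f"kstat.zfs.misc.arcstats.{name}"],
                         capture_output=True, text=True).stdout.strip()
    return int(out)

hits, misses = arcstat("hits"), arcstat("misses")
total = hits + misses
if total:
    print(f"ARC hits: {hits}, misses: {misses}, hit ratio: {hits / total:.1%}")
else:
    print("No ARC activity recorded yet.")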

The system will eventually have 24x 1.2TB P3520s in 6 RAID-Z1 vdevs.
That is a lot of SSDs. Are you stating that you plan to have 6 different volumes, or one volume made of 6 vdevs? I suspect the latter.

Also, you are asking about the NIC, but I guess my question is: do you have the rest of the infrastructure to support 40GbE? And what is the justification for using 40GbE? Just curious is all.

Cheers,
Joe
 
Joined: Mar 22, 2016 · Messages: 217
While I'm not up to speed on all things FreeNAS and ESXi, what says you cannot go 40GbE? Wouldn't this be a limitation of ESXi?


Likely an ESXi limitation. I was reading around on STH and it was mentioned a couple of times.

joeschmuck said:
As for CPU cores and RAM, that depends on how you use your system. I don't think you need 20 cores either; assuming you remain on ESXi, try 4 cores. As for RAM, you may say that your drives are screaming, but if you are running VMs, then giving FreeNAS a lot of RAM will allow frequently used data to be cached in RAM and thus be significantly faster. This is something you will need to play around with, but if you can keep all the RAM, I'd do it. You suggested using 32GB of RAM, but if you are hosting multiple VMs, then you will want all the RAM you can throw at it.

Well, here's the thing: the drives alone can saturate the 40Gb connection, or at least should be able to, without a problem. They can each read at 1700MB/s; even if I was only getting a third of those read speeds, 2 vdevs should be able to saturate a 40Gb link easily. RAM would likely have better latency, but it would still be limited by the 40Gb link. Which is really what my question is: would I need a massive amount of RAM if these NVMe drives are stupid fast? Their latency is quoted at 20 µs, but that will change with the r/w load.
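Putting rough numbers on it (spec-sheet sequential reads, ignoring RAID-Z parity and protocol overhead):

# How many P3520s does it take to cover a 40Gb link, on paper?
import math

link_mb_s = 40 * 1000 / 8        # ~5000 MB/s raw 40GbE
drive_read = 1700                # MB/s, per-drive spec-sheet sequential read
derated = drive_read / 3         # pessimistic one-third-of-spec figure

print("Drives needed at spec:    ", math.ceil(link_mb_s / drive_read))   # 3
print("Drives needed at 1/3 spec:", math.ceil(link_mb_s / derated))      # 9

So even with heavy derating, a 24-drive pool should have plenty of headroom; the link ought to be the bottleneck.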

joeschmuck said:
That is a lot of SSDs. Are you stating that you plan to have 6 different volumes, or one volume made of 6 vdevs? I suspect the latter.

Also, you are asking about the NIC, but I guess my question is: do you have the rest of the infrastructure to support 40GbE? And what is the justification for using 40GbE? Just curious is all.

Cheers,
Joe

Sorry for the confusion, it will be 6 vdevs in one volume.

Currently I have an Arista 7050QX-32 that will be the core of the network. I was going to use Chelsio NICs for my ESXi servers, and I know they are recommended for FreeNAS, but I was seeing if there were any other options out there.

Justification: none. I've never seen this before and really wanted to experiment to see what happens. I'm sure it would help with my web servers and the other services I run, but it's really not needed.
 

Ralf_Klein
Cadet · Joined: Apr 24, 2017 · Messages: 4
As long as you have a VMware ESXi driver for the Ethernet card, it should work. There is no limitation on 40Gbit cards within ESXi. After a correct installation you can present the card to the FreeNAS VM as a VMXNET3 network card. That's all.
 
Joined: Mar 22, 2016 · Messages: 217
If I decide to run it on ESXi, the plan was to pass one of the ports on a Chelsio T580-SO-CR directly to the VM. I still haven't decided, as I don't have the NICs yet. Once I get those I can play around with it and see how it performs. Right now it's on ESXi with 4 cores and 32GB of RAM.

The other aspect of it is I don't know how passing the PCIe switches through to the VM will affect performance or stability over the long run. Supermicro has no idea about stability because it's not something they have tested or really want to test. VMware has nothing about it either.

In the end it might work on ESXi, since so much of the hardware is directly passed through to the VM; the downside is that the stability is highly questionable.
 

joeschmuck
Old Man · Moderator · Joined: May 28, 2011 · Messages: 10,994
In the end it might work on ESXi, since so much of the hardware is directly passed through to the VM; the downside is that the stability is highly questionable.
If ESXi supports it then I'd think stability wouldn't be an issue. The only real issue I see with people using ESXi is making sure all the VMs are shut down before shutting down ESXi, and any dependencies between the VMs. So if you have an iSCSI or NFS share on FreeNAS serving other VMs, you need to shut down the other VMs first and FreeNAS last. All hell breaks loose if you shut down or reboot FreeNAS first.

When/if you set up 40GbE, please post your results. A nice writeup could benefit many other people considering this path. Also touch on the pros and cons.

Good Luck, hope it all works out.
 

Ralf_Klein
Cadet · Joined: Apr 24, 2017 · Messages: 4
Yes, you don't have the same possibilities as in VMware vCenter. In ESXi you can only control the start and stop behavior to a limited extent.

[Screenshot: ESXi VM autostart/shutdown settings]
 

joeschmuck
Old Man · Moderator · Joined: May 28, 2011 · Messages: 10,994
Yes, you don't have the same possibilities as in VMware vCenter. In ESXi you can only control the start and stop behavior to a limited extent.
You know, I never looked at the autostart settings in the GUI; I've always used vSphere since it still works for many things. Wish I had vCenter, but I don't feel it's worth the cost for a home user.
 
Joined: Mar 22, 2016 · Messages: 217
Maybe a stupid question, but does anyone know if there is a list of what NVMe drivers FreeNAS runs with at any given time? I haven't been able to dig it up yet.
 

skyyxy
Contributor · Joined: Jul 16, 2016 · Messages: 136
If ESXi supports it then I'd think stability wouldn't be an issue. The only real issue I see with people using ESXi is making sure all the VMs are shut down before shutting down ESXi, and any dependencies between the VMs. So if you have an iSCSI or NFS share on FreeNAS serving other VMs, you need to shut down the other VMs first and FreeNAS last. All hell breaks loose if you shut down or reboot FreeNAS first.

When/if you set up 40GbE, please post your results. A nice writeup could benefit many other people considering this path. Also touch on the pros and cons.

Good Luck, hope it all works out.

Sorry for my English. I just built a 40GbE server with the official FreeNAS 9.10.2-U3 release, and the performance is not that good; maybe I have something wrong in the tunable settings? The performance is shown in the attached picture. (Maybe I should use FreeNAS 11? Does it have deeper optimization for SAS 3.0?)
My hardware is:
Intel i7-3930K
32GB RAM
Gigabyte X79-UD3
LSI 9300-8i in IT mode
Intel XL710-QDA1 QSFP+
18x Seagate 3TB 7200rpm 64MB cache desktop HDDs

The client is:
E5-2620 v4
Gigabyte X99-UD4
16GB DDR4 RAM
Intel XL710-QDA1, linked directly to the server

With SAS 3.0 it should be able to run at around 3.x GB/s, not just 30% of that or even less.
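To put a rough number on what the disks themselves should manage (a sketch assuming ~150-180 MB/s sequential per desktop drive, ignoring RAID-Z parity and fragmentation):

# Rough aggregate sequential throughput for 18 desktop HDDs.
drives = 18
low, high = 150, 180      # MB/s per 3TB 7200rpm drive, rough figures

print(f"Aggregate on paper: ~{drives * low / 1000:.1f} to {drives * high / 1000:.1f} GB/s")
# ~2.7 to 3.2 GB/s on paper, versus the roughly 30% of that I'm actually seeing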

Maybe somebody can help me. Thanks in advance.
 

Attachments
222222.png (360.5 KB)