Disable Jumbo frames on a switch

Status
Not open for further replies.
Joined
Nov 11, 2014
Messages
1,174
Not sure what you mean, because it takes just 2 SSDs in RAID 0 to have a storage system capable of delivering a little over 1000MB/s sequential read?
 
Joined
Nov 11, 2014
Messages
1,174
My info might be outdated, but when I read whitepapers from enterprise equipment vendors, they kind of expect you to use jumbo frames in a 10Gb SAN network. In the same way, they expect it if you have iSCSI on an isolated storage network.

P.S. I don't claim to be right, I'd rather learn. :smile:
 

acp

Explorer
Joined
Mar 25, 2013
Messages
71
My info might be outdated, but when I read whitepapers from enterprise equipment vendors, they kind of expect you to use jumbo frames in a 10Gb SAN network. In the same way, they expect it if you have iSCSI on an isolated storage network.

P.S. I don't claim to be right, I'd rather learn. :smile:
We out here live on a budget. The HBAs that I use at work cost more than my entire FreeNAS system, and they are connected to a SAN that costs more than my house. As to the speed, I was assuming spinning disk, not SSD. But if 10gig isn't fast enough, you can always go 40gig.

It all depends on what you are trying to do. My NAS is spinning disk, so tweaking for more speed isn't worth the risk. I do plan to add SSDs, but they will be based on consumer-grade stuff, so I'm not expecting much faster speeds. It is fast enough for what I'm doing. Doing iperf testing across my switch I got 9 gigabits per second, and I only paid $120 for it. It gives me bragging rights that I have 10gig at home while my servers at work are still stuck on 1gig.

All I can tell you is to try it and see what happens. Maybe the extra ~5% in efficiency is what you are looking for.
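
For what it's worth, that ~5% figure roughly matches a back-of-the-envelope payload-efficiency calculation. A minimal sketch in Python, assuming standard 40-byte TCP/IP headers plus 38 bytes of Ethernet framing per frame (header, FCS, preamble and inter-frame gap), and ignoring TCP options:

    # Rough fraction of line rate that is actual TCP payload at a given MTU.
    ETH_FRAMING = 38      # Ethernet header + FCS + preamble + inter-frame gap
    IP_TCP_HEADERS = 40   # IPv4 + TCP headers carried inside the MTU

    def payload_efficiency(mtu):
        wire_bytes = mtu + ETH_FRAMING
        payload_bytes = mtu - IP_TCP_HEADERS
        return payload_bytes / wire_bytes

    for mtu in (1500, 9000):
        print(f"MTU {mtu}: {payload_efficiency(mtu):.1%} of line rate is payload")
    # Prints roughly 94.9% for MTU 1500 and 99.1% for MTU 9000 -- about a
    # 4-5% difference, which is the gain jumbo frames can buy you.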

Sent from my Nexus 5X using Tapatalk
 
Joined
Nov 11, 2014
Messages
1,174
Oh, we are on a budget too, but not to the point of getting the cheapest consumer stuff. 10Gb is fast enough if I am able to utilize not less than 90% of it. Perhaps replacing the older 10Gb Intel NIC with a Chelsio T520 will push the speed higher even without jumbo frames, but that's just a theory that will cost me something to verify.


Even if you don't use SSDs, it's still easy to build a pool that saturates 10Gb. Considering an average HDD can transfer, let's say, 175MB/s, you need what? Just 5-6 spinning disks to reach that speed. Some HDDs are able to reach 250MB/s, so you only need 4 of them. My NAS has 16x slow HDDs with a 140MB/s max transfer speed (the machine in my signature), so I should be able to go over 10Gb transfer speed, but I can only do half of that. Even testing with iperf 2, it's still maxing out at 5.x Gb. So I am thinking either use jumbo frames, or try the latest and greatest Chelsio NIC in the hope of better results on non-jumbo frames.
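
Written out as a quick sketch (my own rough numbers: I'm assuming about 1000MB/s of usable payload on a 10Gb link, which is on the optimistic side once protocol overhead is counted):

    # How many spinning disks does it take to fill a 10GbE link, roughly?
    USABLE_10GBE_MBPS = 1000  # assumed usable payload throughput, MB/s

    for disk_mbps in (140, 175, 250):
        disks = USABLE_10GBE_MBPS / disk_mbps
        print(f"{disk_mbps} MB/s per disk -> about {disks:.1f} disks to saturate 10GbE")
    # 140 MB/s -> ~7.1 disks, 175 MB/s -> ~5.7 disks, 250 MB/s -> 4.0 disks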

Your 9Gb/s sounds good. What is the hardware? And the NICs?
 
Joined
Nov 11, 2014
Messages
1,174
P.S. I'd be OK with 9Gb/s on a 10Gb network, but with 5Gb/s, not so much.
 

acp

Explorer
Joined
Mar 25, 2013
Messages
71
Your 9Gb/s sounds good. What is the hardware? And the NICs?

FreeNAS NIC: Mellanox MCX311A
ASRock E3224-4L, E3-1220 v3, 32GB

VM server NIC: Mellanox MCX312A
Supermicro X10SLL-F, E3-1220 v3, 16GB

Switch: CSS326

2m DAC cable

Connected using Ubuntu 16 in a VM on XenServer 7.3.


Sent from my Nexus 5X using Tapatalk
 
Joined
Nov 11, 2014
Messages
1,174
What about the FreeNAS version you are using?
 
Joined
Nov 11, 2014
Messages
1,174
Thanks.
Mine is much older - 9.3 - so I can't tell if that contributes to the problem, but I guess a lot of testing will be needed to find out.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I have no device (at the moment) that is configured to use jumbo frames, so nobody is sending jumbo frames from the clients. But the question is: if jumbo is enabled or disabled on the switch, how will that affect anything?!

It shouldn't affect anything on the network. It is possible that the switch treats traffic differently if configured for jumbo, because switches that do not do cut-through switching inevitably need to store and then forward packets. This has implications for the internal design, but really most of the crappy designs are five to ten years behind us now.
 
Joined
Nov 11, 2014
Messages
1,174
I did leave my XG2000 in cut-through mode (the default) instead of store-and-forward, and jumbo is disabled on the switch (not the default). I really need to get a pair of Chelsio T520s and test them out; I have heard so much about them (including from you). What I am really hoping is to be able to push 10Gb to over 90% utilization without jumbo frames, because then I would never have to deal with jumbos.
 
Joined
Nov 11, 2014
Messages
1,174
And in case the Chelsio T520 doesn't do for me what I am hoping for, then I would never use anything else but my good old Intel EXPX9502AFXSR 10GbE XF SR 2.

They have served me so well: so stable on Windows and FreeBSD, never got really hot, so very easy to keep cool, and never needed drivers.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Store and forward has higher latency than cut-through, but cut-through has to store frames if the destination port is busy, and if there's trash on the LAN, cut-through often passes it. In some cases it isn't worth the trouble, but having frames switched in a dozen microseconds is nice.
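
To put rough numbers on the latency side of that: a store-and-forward switch has to receive the whole frame before it can start sending it out, so it adds at least one frame serialization time on top of whatever the cut-through path would take. A quick sketch (simple arithmetic only, no lookup or queuing time included):

    # Minimum extra delay store-and-forward adds: the time to clock the whole
    # frame in before forwarding can begin (lookup and queuing time ignored).
    def serialization_us(frame_bytes, link_gbps):
        return frame_bytes * 8 / (link_gbps * 1e9) * 1e6  # microseconds

    for frame_bytes in (1500, 9000):
        for link_gbps in (1, 10):
            print(f"{frame_bytes}-byte frame at {link_gbps}Gb/s: "
                  f"{serialization_us(frame_bytes, link_gbps):.1f} us")
    # 1500B: ~12 us at 1Gb/s, ~1.2 us at 10Gb/s; 9000B: ~7.2 us at 10Gb/s.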
 
Joined
Nov 11, 2014
Messages
1,174
From what I know: in cut-through, the switch reads only the beginning of the frame (the destination, so no checksum is verified) and forwards it right away to cut latency, while store-and-forward reads in the whole frame (and verifies the checksum) but adds latency.

So unless you have too many bad frames that need re-sending, cut-through should be a good thing and the way to go, right?


P.S. My 10Gb switch has both options, but my Dell 5548 switch only supports store-and-forward.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Cut-through has some other issues, but what they are and if they affect your setup is probably beyond the amount of time I have right now to discuss.
 
Joined
Nov 11, 2014
Messages
1,174
I usually respond much faster, but in this case I wasn't sure what to say.

I am always curious to hear what you have to say, and I am always eager to learn when the source is good, but... I mean, if you are busy, you are busy.

I guess I'll just come back tomorrow, or next week. :)
 
Joined
Nov 11, 2014
Messages
1,174
FreeNAS NIC: Mellanox MCX311A
ASRock E3224-4L, E3-1220 v3, 32GB

VM server NIC: Mellanox MCX312A
Supermicro X10SLL-F, E3-1220 v3, 16GB

Switch: CSS326

2m DAC cable

Connected using Ubuntu 16 in a VM on XenServer 7.3.


Sent from my Nexus 5X using Tapatalk

You really got all the juice out of the Mellanox cards :smile:

I am just curious: why do you have these two connected like that?

One is a file server, that's fine, but the other is not a workstation?! It's an Ubuntu VM on a Xen hypervisor?

I am just wondering what kind of setup that is and what purpose it serves?
 

acp

Explorer
Joined
Mar 25, 2013
Messages
71
You really got all the juice out of the Mellanox cards :)

Yup!

I am just curious: why do you have these two connected like that?

I'm testing right now. This is my first venture into 10gig networking.


One is a file server, that's fine, but the other is not a workstation?! It's an Ubuntu VM on a Xen hypervisor?

I am just wondering what kind of setup that is and what purpose it serves?

I booted into XenServer, built an Ubuntu VM, created a virtual NIC that had access to the Mellanox card, and ran iperf.
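
(In case anyone hasn't used it: iperf basically opens a TCP connection and pushes data as fast as it can while counting bytes. A bare-bones Python sketch of the same idea - the port number and runtime below are just made up for illustration - looks something like this:)

    # Crude single-stream TCP throughput check, roughly what iperf measures.
    # Run "server" on one box and "client <server-ip>" on the other.
    import socket, sys, time

    PORT = 5001        # arbitrary test port chosen for this sketch
    CHUNK = 1 << 20    # 1 MiB buffer per send/receive
    DURATION = 10      # seconds the client keeps sending

    def server():
        with socket.create_server(("", PORT)) as srv:
            conn, _addr = srv.accept()
            total, start = 0, time.time()
            with conn:
                while True:
                    data = conn.recv(CHUNK)
                    if not data:
                        break
                    total += len(data)
            elapsed = time.time() - start
            print(f"{total * 8 / elapsed / 1e9:.2f} Gb/s received")

    def client(host):
        payload = b"\0" * CHUNK
        deadline = time.time() + DURATION
        with socket.create_connection((host, PORT)) as conn:
            while time.time() < deadline:
                conn.sendall(payload)

    if __name__ == "__main__":
        server() if sys.argv[1] == "server" else client(sys.argv[1])

A single Python stream won't actually hit 10Gb/s - that's what iperf itself is for - but it shows the shape of the test.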

My end goal is to have the VM disks on FreeNAS over 10gig; 1G was too slow. Currently I have a 120GB SSD for booting XenServer 7.3. I got the Supermicro board to boot off the Intel NIC using iSCSI from FreeNAS, but I haven't had much luck with the Mellanox cards - it could be that the Intel NIC has BIOS support for iBFT. However, given how XenServer wants to bind the NIC to dedicated iSCSI, I'm not leaning toward using that method anymore, so I haven't really tried much further. Using the SSD works fine. Then I can use the Mellanox cards for iSCSI and NFS, which is more in line with a true storage network. The ISO library is using that method, hosted of course on FreeNAS.

Since I have a 10gig switch I tried it as well, and it did solve one issue I was having with FreeNAS. While testing iSCSI booting I was directly connected between the FreeNAS server and the VM server, so the NIC kept going up and down. FreeNAS sometimes wouldn't see the NIC come up, so I had to reset the FreeNAS Mellanox NIC to get it to see that the link was up.

I'm liking what I'm seeing, so maybe there's another 10gig switch in the future. :)
 
Joined
Nov 11, 2014
Messages
1,174
Do you have to have external storage for the VMs? Can you just put SSDs inside the hypervisor, or RAID with SSDs for redundancy still inside the hypervisor, and call it a day?
 