Pimp my FreeNAS!

Status
Not open for further replies.

elementalwindx

Dabbler
Joined
Sep 20, 2012
Messages
20
Alright, I'll bite!

Just finished my (re)build of my system.
Only thing I am missing now is a good UPS, and I'll be "done" (for now :p).


What's the general average transfer rate for large files between systems with that setup?


I ran something very similar, except all 1GbE; 10GbE is too expensive for my taste at the moment (I want it external as well as internal). Eventually I got fed up with the setup and turned that box into a W2012R2 server running pfSense as a Hyper-V guest and using JBOD Storage Spaces.

The next project is two W2012R2 servers running Scale-Out File Server with CSV, plus two VM hosts. This setup will use ONLY dual 10GbE NICs, two per server, tied into a 24-port 10GbE Netgear switch. (As much as I hate Netgear, they're the cheapest I can find at the moment.)

I also used all Supermicro boards and enclosures.

I should also mention that I received two of their newest, hot-off-the-press SAS3 enclosures, and apparently the manufacturing process wasn't perfected yet (as of 2-3 months ago): I had to spend hours working with them to determine that the backplanes (on both units) were shorting out against part of the enclosure, causing the drives to act erratically or not power on at all. Their tech support was clueless about this particular problem and left me to figure out the cause and the fix myself, but I'll still say their support is good overall: they don't keep you on hold long, and they actually have some Americans working in that department.

I would still buy from them again even after that experience.
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
elementalwindx said:
What's the general average transfer rate for large files between systems with that setup?

Unfortunately, I couldn't tell you actual speeds for hardware 10GbE.

As I mentioned in my post, these are virtual VMXNET3 adapters between the different guests. VMware calls them 10Gbit, but how fast they actually operate is a mystery to me; iperf suggests ~30Gbit/s on my setup.
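For reference, that number came from a plain iperf2 run between two guests, something along these lines (the address is just a placeholder for the FreeNAS guest's IP; -t is the test length in seconds and -P the number of parallel streams):

    # on the FreeNAS guest (server side)
    iperf -s

    # on the other guest (client side), aimed at the FreeNAS guest
    iperf -c 192.168.1.10 -t 30 -P 4

With multiple streams, the [SUM] line at the end is the figure I'm going by.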

I can try copying a largish file and let you know, if you're interested.
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280

So I just did a quick and dirty test: copying a large (~13GB) file from my pool to /dev/null on one of my other guests, connected via the VMXNET3 interface.

This resulted in ~200MB/s, quite a bit lower than I was expecting (local reads are ~950MB/s and local writes ~675MB/s using the dd-to-/dev/null method, with compression disabled).

This was using NFS.
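In case anyone wants to reproduce it, the test was basically just dd, roughly like this (the "tank" pool name and the /mnt/nfs mount point are placeholders for my actual paths; FreeBSD's dd takes lowercase size suffixes, while GNU dd on a Linux guest wants the uppercase M):

    # on the FreeNAS box: local read and write tests against a dataset with compression off
    dd if=/mnt/tank/testfile of=/dev/null bs=1m
    dd if=/dev/zero of=/mnt/tank/testfile bs=1m count=13000

    # on the other guest: read the same file back over the NFS mount
    dd if=/mnt/nfs/testfile of=/dev/null bs=1M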

It's probably a limitation of the virtual interface.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I don't believe there's any attempt in the vmxnet stuff to artificially limit network speeds. Two guests on the same host should basically communicate as fast as available resources allow.
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280

I didn't mean that it was artificially limited, but rather that something technology-specific was limiting it. I have no idea how much overhead is involved in emulating a high-bitrate network adapter.

Or my setup could be misconfigured somehow, who knows. It's just a stark contrast between local speeds and the speed over an interface that iperf suggests can do 30+ Gbit/s. I'm learning this stuff as I go :)
 