I tried searching the forums, but didn't come up with a solution to my problem. Apologies if this has been addressed before, but I missed it through my search.
Had a FreeNAS system for a while, and have decided to upgrade it to a larger system. I have been purchasing miscellaneous parts and gradually adding them to my system before transitioning to a larger server - probably going to order a Dell T640 in the near future. For now, the specs of my current system are below:
Lenovo TS140 chassis - CPU: E3-1225 V3
32GB ECC DDR3
Large Storage Array: 3x8TB WD drives (Red and Gold) in RaidZ1
Fast Storage Array: 1x Intel P3605 1.6TB NVMe SSD
FreeNAS installed on a spare 500GB 2.5" drive
Network adapter: Intel XXV710
This post only concerns the "Fast Storage Array" listed above.
This is a private system, so only 1-3 users are using it at a time. I have the NVMe set up as a sort of fast scratch space - mostly for file transfers between systems and active photo/video editing.
The issue I am having is that my transfer speeds are a bit slower than I was expecting. It's still fast and workable, but with 10Gb I think I should be able to get better speeds. For all these tests, my client is a MacBook Pro, connected via Thunderbolt 3 to a Sonnet Echo PCIe chassis containing a Mellanox card (I can get the exact card number later, if it makes a difference). The Mellanox card is connected via SFP+ DAC to a Netgear GC728X, and the other 10Gb port on the switch is connected via SFP+ DAC to the FreeNAS box.
When I set the system up, the first thing I did was run an iperf. Results came out to about 7Gbit/sec. I then switched on jumbo frames on both systems and the switch, at which point I saw about 9.5Gbit/sec:
Code:
./iperf -c 192.168.1.167
------------------------------------------------------------
Client connecting to 192.168.1.167, TCP port 5001
TCP window size:  131 KByte (default)
------------------------------------------------------------
[  4] local 192.168.1.168 port 55058 connected with 192.168.1.167 port 5001
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  11.1 GBytes  9.54 Gbits/sec
Results are the same when I reverse client and server as well.
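For reference, this is roughly how I bumped the MTU to 9000 on both ends - interface names here are from memory and may not match your hardware, and the switch MTU was changed through the GC728X web UI rather than from a shell:
Code:
# FreeNAS/FreeBSD side - ixl0 is my guess at the XXV710 interface name
ifconfig ixl0 mtu 9000

# macOS side - en7 is a placeholder for the Thunderbolt/Mellanox interface
sudo ifconfig en7 mtu 9000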
So, I think the network is working as it should. I then created my storage volume with the NVMe drive. Disabled compression for these tests. First test I did was verify the local disk speed using dd:
Code:
sudo dd if=/dev/zero of=testfile bs=1m count=100000
100000+0 records in
100000+0 records out
104857600000 bytes transferred in 63.211588 secs (1658835082 bytes/sec)
The 1.5GB/sec seems within specs for the drive, so I was happy with that.
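For completeness, the pool setup amounted to roughly the following (I actually did it through the GUI, so the pool name and device node below are just placeholders):
Code:
# create a single-disk pool on the NVMe drive - nvd0 is a guess at the device node
zpool create fast nvd0

# turn off compression so the /dev/zero tests aren't skewed
zfs set compression=off fast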
Next, I shared the drive out via AFP, NFS, and CIFS, then tested my write speeds from the client using both file transfers and dd. The results were very similar, so I'm only posting the dd results from CIFS here. CIFS was, surprisingly, the fastest; however, it still topped out around 450MB/sec. NFS and AFP were about 350MB/sec or so. CIFS results:
Code:
dd if=/dev/zero of=testfile bs=1m count=100000
100000+0 records in
100000+0 records out
104857600000 bytes transferred in 226.087431 secs (463792258 bytes/sec)
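In case it matters, the CIFS share was mounted on the Mac something like this - the server address, share name, and mount point are just what I use here, not anything special:
Code:
# mount the FreeNAS CIFS share on macOS, then run dd against it
mkdir -p /Volumes/fast
mount_smbfs //user@192.168.1.167/fast /Volumes/fast
dd if=/dev/zero of=/Volumes/fast/testfile bs=1m count=100000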
My initial thought was that the transfer was banging the CPU too hard. However, during the transfer the smbd process was using only about 33% CPU, according to top.
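I was watching it roughly like this, so I could also see individual threads and per-CPU load (flags from memory):
Code:
# -S include system processes, -H show threads, -P show per-CPU stats
top -SHP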
I then tried running multiple concurrent dd transfers, between 2 and 4 at a time. At 2 concurrent transfers, each one ran at nearly the same speed as the single-stream test (about 800MB/sec total). Scaling up to 4 concurrent transfers, my speed dropped to about 210MB/sec each. The smbd process never went past 65% usage in any of these cases.
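The concurrent tests were just background dd jobs against the mounted share, roughly like this (the mount path and file sizes are just what I happened to use):
Code:
# kick off 4 writes in parallel against the SMB mount and wait for them all to finish
for i in 1 2 3 4; do
  dd if=/dev/zero of=/Volumes/fast/testfile_$i bs=1m count=25000 &
done
wait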
So, it seems I'm able to hit a wall of about 800MB/sec total transfer speed, but only if I have multiple concurrent transfers. I would greatly appreciate any suggestions on how I can hit that with a single stream instead of having to force multiple concurrent transfers.
In the course of my troubleshooting, I have also tried adding the sysctl tunables mentioned by jgreco in this thread:
https://forums.freenas.org/index.ph...rdware-smb-cifs-bottleneck.42641/#post-277350
Those tunables didn't seem to have any significant effect - write speed was within about 10MB/sec before and after.
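For reference, these are the kind of network buffer tunables I added under System -> Tunables (values are from memory, so they may not match the linked post exactly):
Code:
# sysctl-type tunables for larger TCP socket buffers
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendbuf_inc=32768
net.inet.tcp.recvbuf_inc=65536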
I have also tried disabling jumbo frames on both systems and the switch, but that too resulted in close to the same transfer speed (about 60MB/sec slower).
I'm not sure what to test next, so any guidance would be appreciated. Sorry for all the text - wanted to preemptively put my system specs and previous testing up here to save time for anyone kind enough to help.