I’m trying to diagnose the bottleneck in my SMB file transfers. I’m running TrueNAS 13.0-U5.3 on a Dell R830 with 4 × Xeon E5-4660 v4 CPUs (64 cores / 128 threads total) and 1 TB of RAM. For testing purposes I have 15 × 512 GB SSDs running in a stripe. The pool also has a 2 TB metadata vdev, a 2 TB L2ARC, and a 256 GB SLOG, all on Gen 3 NVMe drives on their own PCIe card.
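To rule out the pool itself, my plan is to benchmark it locally on the TrueNAS box with fio (which ships with TrueNAS CORE). This is just a sketch; the dataset path /mnt/tank/fio-test is a placeholder for wherever a test dataset on the striped pool lives, and read numbers will be inflated by the ARC unless the file size is much larger than RAM:

```
# Run on the TrueNAS host against a dataset on the striped pool.
# /mnt/tank/fio-test is a placeholder path; adjust to the real pool layout.

# Sequential write test, 1M blocks, 4 jobs:
fio --name=seqwrite --directory=/mnt/tank/fio-test --rw=write \
    --bs=1M --size=20G --numjobs=4 --ioengine=posixaio --iodepth=16 \
    --group_reporting

# Sequential read test (with 1 TB of RAM the ARC will cache most of this,
# so this is a best-case number rather than raw disk speed):
fio --name=seqread --directory=/mnt/tank/fio-test --rw=read \
    --bs=1M --size=20G --numjobs=4 --ioengine=posixaio --iodepth=16 \
    --group_reporting
```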
I’m connecting to it from two different PCs and getting the same results on both. The more powerful one runs a Threadripper PRO 5995WX (64 cores / 128 threads), also with 1 TB of RAM, on Windows 11 Pro for Workstations. I’ve tested to and from Gen 3 and Gen 4 NVMe drives capable of roughly 7 GB/s.
Each device has 2 × dual-port 25 Gb Mellanox ConnectX-4 NICs. The logic here was that the Dell server only has PCIe 3.0, so I knew there would be a bottleneck there for 100 Gb (12.5 GB/s) throughput.
I’ve tried aggregating the 4 × 25 Gb connections into a 100 Gb link on both Windows and TrueNAS through a 25 Gb switch, and I’ve also tried running a single 25 Gb connection directly from machine to machine.
Each test has given me the same results: about 1 GB/s writes and only around 400 MB/s reads.
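To separate the raw network path from SMB itself, the next thing I’m planning to try is an iperf3 run between the Windows box and the TrueNAS server over the same link (iperf3 is included in TrueNAS CORE; on Windows it has to be downloaded separately). The address below is just an example:

```
# On the TrueNAS host:
iperf3 -s

# On the Windows client (example address; use the TrueNAS interface IP):
iperf3 -c 192.168.10.2 -P 4 -t 30
# -P 4 runs four parallel streams, -t 30 runs the test for 30 seconds.

# Then reverse direction to test the other way:
iperf3 -c 192.168.10.2 -P 4 -t 30 -R
```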
Any input would be most welcome.