I read a ton of threads on this, and almost every one came down to SMB or general networking problems. I've ruled those out and still have no luck.
I have an older i7-2600K CPU and 16 GB of RAM.
The boot drive is some random 250 GB SSD, and the pool is three Silicon Power 2 TB SSDs in RAIDZ.
Using dd I get:
Code:
dd if=/dev/zero of=/mnt/DupPool/SMB/testfile bs=1G count=100
100+0 records in
100+0 records out
107374182400 bytes (107 GB, 100 GiB) copied, 33.1105 s, 3.2 GB/s
I specifically went for something large to try to get past things like the write cache.
I get the same ~3 GB/s with every size of dd test I try.
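One thing I want to try next: /dev/zero compresses down to almost nothing if the dataset has compression on (TrueNAS enables lz4 by default), so a run with incompressible data should be a fairer check. Something like this sketch — the /tmp path and tiny size are just for illustration; for a real test I'd scale the size up past RAM and point it at the pool:

```shell
# Build an incompressible source file first, so the timed copy isn't
# just measuring how fast lz4 can squash zeros.
dd if=/dev/urandom of=/tmp/random.bin bs=1M count=64
# Time the actual write, forcing data to disk before dd reports a speed.
dd if=/tmp/random.bin of=/tmp/testfile bs=1M count=64 conv=fsync
```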
Using iperf to the server I get 940 Mbit/s, which seems perfectly normal for 1 Gb Ethernet.
When I try to transfer a bunch of video files to the NAS, write speeds hover around 30-40 MB/s.
After a while the transfer stalls completely, then picks back up to around 20 MB/s for a few seconds at a time.
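To put numbers on the gap: 940 Mbit/s works out to roughly 117 MB/s of theoretical ceiling, so 30-40 MB/s is only around 30% of what the link should manage. Quick sanity arithmetic (35 is just the midpoint of my observed range):

```python
# Quick unit check on the numbers above (all figures from my own tests).
iperf_mbit = 940                  # iperf result, megabits per second
ceiling_mb_s = iperf_mbit / 8     # 8 bits per byte -> ~117.5 MB/s max
observed_mb_s = 35                # midpoint of the 30-40 MB/s SMB writes
print(ceiling_mb_s)               # 117.5
print(round(observed_mb_s / ceiling_mb_s, 2))  # 0.3 of the link used
```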
I tried SFTP as a comparison and it was around the same speed, so it's not just an SMB thing.
CPU is nowhere near 100% and RAM usage looks fine too.
If dd shows good speeds, does that mean the drives are fine and the problem is elsewhere, or is dd not a true test of drive performance?
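The other thing I plan to watch is per-disk activity while a transfer is running, to see whether one drive is stalling the whole vdev. A sketch using my pool name (guarded so the snippet also runs on a box without ZFS tools):

```shell
# Watch per-vdev/per-disk throughput every 2 seconds (5 samples) while
# a copy runs; a stalling transfer should show up as near-zero write
# columns, or one disk lagging far behind the other two.
if command -v zpool >/dev/null 2>&1; then
    zpool iostat -v DupPool 2 5
else
    echo "zpool not available here"
fi
```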