pclausen (Patron, joined Apr 19, 2015, 267 messages)
I added the tunable and it unfortunately didn't make a difference. I ran dd (after turning off compression) and got the following results:
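For anyone repeating the test: compression has to be off, because dd reads from /dev/zero and ZFS compresses zeros to almost nothing, which would make the numbers meaningless. Assuming the dataset is the v1 visible in the /mnt/v1 mount path below (an inference, not confirmed by the post), the property change looks something like:

```shell
# Disable compression so dd's zero-fill measures real pool write bandwidth.
# "v1" is inferred from the /mnt/v1 mount point in this post.
zfs set compression=off v1

# Verify the property actually took effect before running dd
zfs get compression v1
```

Remember to turn compression back on afterwards if the dataset normally uses it.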
Code:
[root@argon] ~# dd if=/dev/zero of=/mnt/v1/misc/foofile bs=1048576
load: 5.68  cmd: dd 26184 [running] 11.02r 0.01u 8.02s 39% 2624k
17339252736 bytes transferred in 10.982776 secs (1578767743 bytes/sec)
load: 4.23  cmd: dd 26184 [dmu_tx_delay] 95.26r 0.14u 47.34s 25% 2624k
133439684608 bytes transferred in 95.222746 secs (1401342537 bytes/sec)
load: 3.66  cmd: dd 26184 [dmu_tx_delay] 515.73r 0.63u 232.75s 23% 2624k
694745563136 bytes transferred in 515.693642 secs (1347205989 bytes/sec)
load: 3.84  cmd: dd 26184 [dmu_tx_delay] 976.99r 1.12u 428.79s 17% 2624k
1277145645056 bytes transferred in 976.949585 secs (1307278968 bytes/sec)
load: 6.31  cmd: dd 26184 [running] 1498.13r 1.69u 644.36s 23% 2624k
1914715504640 bytes transferred in 1498.091036 secs (1278103572 bytes/sec)
6055083900928 bytes transferred in 5115.036750 secs (1183781114 bytes/sec)
So I start out at around 1.58 GB/s, and after about 1.5 hours performance drops to about 1.18 GB/s. So it does seem that my system is not able to write to the pool any faster than what my real-life copy tests are showing.
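Note that dd's status lines report a cumulative average (total bytes over total seconds), which smooths out the decline. The rate between two status lines shows the slowdown more directly; a quick awk sketch using the byte counts and timestamps copied from the run above:

```shell
# dd reports bytes/sec; convert the cumulative figures to decimal GB/s
awk 'BEGIN{printf "%.2f GB/s initial\n", 1578767743/1e9}'
awk 'BEGIN{printf "%.2f GB/s overall\n", 1183781114/1e9}'

# Interval rate between two status lines: (bytes2-bytes1)/(secs2-secs1).
# Mid-run, between the 95 s and 515 s samples:
awk 'BEGIN{printf "%.2f GB/s mid-run\n", (694745563136-133439684608)/(515.693642-95.222746)/1e9}'
# Late-run, between the 977 s and 1498 s samples:
awk 'BEGIN{printf "%.2f GB/s late-run\n", (1914715504640-1277145645056)/(1498.091036-976.949585)/1e9}'
```

The interval rates come out around 1.33 GB/s mid-run and 1.22 GB/s late-run, and the overall average of 1.18 GB/s is lower still, so the steady-state rate keeps sliding well after the fast start at 1.58 GB/s.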
While dd was running, I looked at the first disk in vdev0 and the last disk in vdev4, and both showed about the same I/O (as did the other 48 disks in the pool):
CPU and Load stats: