Hi all,
I'm seeing a bit of weirdness and I'm not sure whether it's causing me performance problems.
Problem: two of the SSDs (da0, da1) report much higher "disk busy" numbers than the other six (da2 through da7) under the same load. Overall performance isn't where I expect it to be, and I can't seem to get maximum performance out of this configuration - during a "dd if=/dev/zero of=/mnt/DATASSD/NAS-VMD/freenas/outfile bs=1M count=262144", gstat shows da0 and da1 at 80%+ busy while the rest of the drives sit at around 30%.
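For reference, here's roughly how the load is generated and watched (the gstat flags are just one way to do it; the dd writes 256 GiB of zeroes into the dataset):
Code:
# sequential write test into the pool (256 GiB of zeroes), run in the background
dd if=/dev/zero of=/mnt/DATASSD/NAS-VMD/freenas/outfile bs=1M count=262144 &
# watch per-disk busy%, ops/s and latency while it runs (physical providers only)
gstat -p -I 1s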
Dell T320, Running 11.0-U3
32 GB RAM, 8x Samsung 850 EVO 1TB SSDs on a Dell PERC controller flashed to "IT" (passthrough) mode
ZFS pool set up as 4 two-disk mirror vdevs (rough layout sketched below, after the mount details)
10GbE twinax to the switch, 10GbE twinax to the compute nodes
NFS shares mounted as:
a.b.c.d:/mnt/DATASSD/NAS-VMD on /mnt/pve/NAS-VMD type nfs (rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=a.b.c.d,mountvers=3,mountport=x,mountproto=udp,local_lock=none,addr=a.b.c.d)
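For clarity, the pool is four two-way mirror vdevs striped together. Something along these lines would build an equivalent layout -- the exact disk pairing and the use of the .eli partition names are my guess at how the pool is arranged, not a command taken from this system:
Code:
zpool create DATASSD \
    mirror da0p1.eli da1p1.eli \
    mirror da2p1.eli da3p1.eli \
    mirror da4p1.eli da5p1.eli \
    mirror da6p1.eli da7p1.eli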
Boot-time drive info for da0 and da2:
Code:
da0 at mps0 bus 0 scbus0 target 4 lun 0
da0: <ATA Samsung SSD 850 1B6Q> Fixed Direct Access SPC-3 SCSI device
da0: Serial Number S246NXAG602332H
da0: 600.000MB/s transfers
da0: Command Queueing enabled
da0: 953869MB (1953525168 512 byte sectors)
da0: quirks=0x8<4K>
GEOM_ELI: Device da0p1.eli created.

da2 at mps0 bus 0 scbus0 target 6 lun 0
da2: <ATA Samsung SSD 850 1B6Q> Fixed Direct Access SPC-3 SCSI device
da2: Serial Number S33FNCAH502074V
da2: 600.000MB/s transfers
da2: Command Queueing enabled
da2: 953869MB (1953525168 512 byte sectors)
da2: quirks=0x8<4K>
GEOM_ELI: Device da2p1.eli created.
The only difference I can find is that da0 and da1 may come from a different production lot than the others, despite being the same model and firmware:
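For anyone who wants to reproduce that comparison, the per-drive model, firmware revision, and serial can be pulled with something like the following (a sketch; my actual output isn't repeated here, and smartctl assumes smartmontools is present):
Code:
# ATA identify data: model, firmware revision, serial number
camcontrol identify da0 | head -n 12
camcontrol identify da1 | head -n 12
# or the same info via smartmontools, if installed
smartctl -i /dev/da0
smartctl -i /dev/da1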
During a data migration over NFS from a couple of nodes, here's what the performance metrics look like -- first the "disk busy" percentages:
But as you can see, the number of operations and the amount of data transferred are the same across all of the disks:
Any thoughts? Would this higher "disk busy" percentage be holding me up? And if so, any ideas as to why da0 and da1 would be the only two drives showing double or triple the busy% for the same transactions?
Thanks,
-c