poor performance on zvol

grisu

Cadet
Joined
Aug 13, 2017
Messages
3
Hi,

I am a little bit frustrated ... here is my setup:

Dell R710 with 288GB RAM and 16 cores (a former ESXi host ... it would be a waste to discard it)
H200 HBA in IT-mode
D2600 shelf with 6x10TB HGST SAS drives
The purpose is a backup repository for Veeam.

Situation:

The backup server is a Win2016 VM with the zvol LUN connected as RDM.
The disk at the client is formatted with ReFS (64k block size).
phoron61 is the name of the FreeNAS box ... so the tests are done locally on the machine.

Backups run quite well (write throughput is about 400MB/s and more ... the bottleneck is the source SAN),
but reading from the zvol is terribly slow (except cached reads, of course).

root@phoron61v:~ # pv < /dev/zvol/backup2/bkp2_zvol1 > /dev/null
^C.2GiB 0:00:14 [51.7MiB/s] [ <=> ]

so reading from the zvol is about 40-100MB/s ...

BTW:
is this a sequential read?
or is this - because of COW - random reads?
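One way to check is to watch the pool and its member disks from a second shell while the pv read is running; large kBps with few ops/s per disk would mean sequential reads, lots of small ops the opposite (pool and filter names as used on this box):

# per-vdev ops/s and bandwidth, refreshed every second
zpool iostat -v backup2 1

# per-disk ops/s and kBps (average I/O size = kBps divided by r/s)
gstat -f da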


dd with different block sizes shows the same ...
Reading/writing a local test file (1TB, so no cache hit) works fine ... at about 600MB/s.

dd if=/dev/daX of=/dev/null with different block sizes also utilizes the disk.
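For reference, the dd runs were roughly along these lines (the block sizes and the da11 device are just example values):

# read the zvol directly with different block sizes
dd if=/dev/zvol/backup2/bkp2_zvol1 of=/dev/null bs=64k count=100000
dd if=/dev/zvol/backup2/bkp2_zvol1 of=/dev/null bs=1M count=10000

# same idea against one raw member disk
dd if=/dev/da11 of=/dev/null bs=1M count=10000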


Any ideas?
Any hints?
I will try some tests with NFS instead of FC/iSCSI, so no zvol is involved ... but that's just a fallback.




FreeNAS data
Build FreeNAS-11.2-U5
Platform Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
Memory 294859MB


zpool

root@phoron61v:~ # zpool list
NAME          SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
backup2        54T  10.5T  43.5T        -         -     0%    19%  1.00x  ONLINE  /mnt
freenas-boot   68G  1.52G  66.5G        -         -      -     2%  1.00x  ONLINE  -

root@phoron61v:~ # zfs get recordsize backup2
NAME     PROPERTY    VALUE    SOURCE
backup2  recordsize  128K     default

root@phoron61v:~ # zfs get volblocksize backup2/bkp2_zvol1
NAME                PROPERTY      VALUE     SOURCE
backup2/bkp2_zvol1  volblocksize  64K       -


root@phoron61v:~ # zpool status backup2
  pool: backup2
 state: ONLINE
  scan: resilvered 36K in 0 days 00:00:01 with 0 errors on Thu Sep 12 10:38:41 2019
config:

        NAME                                            STATE     READ WRITE CKSUM
        backup2                                         ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/57525669-d547-11e9-8492-d067e5edcba6  ONLINE       0     0     0
            gptid/5855889e-d547-11e9-8492-d067e5edcba6  ONLINE       0     0     0
            gptid/58d986cb-d547-11e9-8492-d067e5edcba6  ONLINE       0     0     0
            gptid/59c9e8f0-d547-11e9-8492-d067e5edcba6  ONLINE       0     0     0
            gptid/5abd4c62-d547-11e9-8492-d067e5edcba6  ONLINE       0     0     0
            gptid/5b3222a7-d547-11e9-8492-d067e5edcba6  ONLINE       0     0     0

errors: No known data errors



Thanks for any hints,
Chris
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
RAIDZ1 is not a good idea for block storage since you effectively get the performance of a single disk (which is what you see). If you want more IOPS, you need more VDEVs, which usually means either mirrored VDEVs or multiple RAIDZ2 VDEVs (RAIDZ1 is not really a good idea for data you care about anyway).
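A minimal sketch of what the mirrored alternative could look like, purely as an illustration (it assumes the six drives seen in the gstat output later in this thread, and recreating the pool of course destroys everything on it):

# three 2-way mirror VDEVs instead of one 6-disk RAIDZ1
zpool create backup2 \
  mirror da3  da5  \
  mirror da7  da11 \
  mirror da12 da13

That trades capacity (3 x 10T usable instead of 5 x 10T) for roughly three times the read IOPS of a single RAIDZ1 VDEV.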
 

grisu

Cadet
Joined
Aug 13, 2017
Messages
3
Hi,

thanks for the quick response ...

but perhaps I misunderstand all the statistics ...
I don't know if I need more IOPS ... because the write performance is really good.

gstat from one disk:

root@phoron61v:~ # dd if=/dev/da11 of=/dev/null bs=4k
dT: 1.064s  w: 1.000s  filter: da11
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    1  14186  14106  56423    0.1     78    906    3.6   90.5| da11


root@phoron61v:~ # dd if=/dev/da11 of=/dev/null bs=64k
dT: 1.063s  w: 1.000s  filter: da11
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    1   3834   3834 245407    0.2      0      0    0.0   95.8| da11


IOPS vs. ops/s is still not clear to me ... on-disk cache or whatever ...
but this is what one disk can handle, IMHO:

root@phoron61v:~ # pv < /dev/da11 > /dev/null
^Croot@phoron61v:~ # pv < /dev/da11 > /dev/null
^C09GiB 0:00:09 [ 239MiB/s] [ <=>

dT: 1.064s  w: 1.000s  filter: da11
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    1   1913   1913 244811    0.5      0      0    0.0   94.4| da11
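For a second opinion on the raw drive, FreeBSD's diskinfo has a simple built-in seek/transfer benchmark (just a sanity check; run it while the disk is otherwise idle):

diskinfo -t /dev/da11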


On a zvol the stats are far away from that ... these are sequential reads of course ... but why are there no sequential reads on the zvol?
A block size of 64k and a sector size of 4k on the disk means that at least 14 reads are in sequence.
And there is (at the moment) no fragmentation ...
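A rough back-of-the-envelope calculation, assuming ashift=12 (i.e. 4K sectors) and ignoring compression:

# one 64K zvol block = 16 x 4K data sectors
# RAIDZ1 spreads those across the 5 data disks -> roughly 3-4 sectors
# (~12-16K) per disk, plus a parity strip on the remaining disk
# so every 64K logical read costs each drive a small I/O, and throughput
# is limited by per-disk ops/s rather than by sequential bandwidth

The gstat output just below shows exactly that pattern: every disk doing roughly 500 smallish reads per second.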

root@phoron61v:~ # pv < /dev/zvol/backup2/bkp2_zvol1 > /dev/null
^C.2GiB 0:00:22 [88.2MiB/s] [

root@phoron61v:~ # gstat -f da
dT: 1.004s  w: 1.000s  filter: da
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    3    551    551  19674    4.5      0      0    0.0   86.2| da3
    5    486    486  19905    6.5      0      0    0.0   93.1| da5
    0    526    526  20558    5.4      0      0    0.0   90.6| da7
    3    525    525  20618    5.3      0      0    0.0   88.6| da11
    4    515    515  19120    5.2      0      0    0.0   87.3| da12
    2    554    554  19952    4.3      0      0    0.0   84.3| da13


why are the stats so different?


thanks

Chris
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
You'll probably find that your copious amounts of RAM are making it hard to see what's really happening on the disks... IOPS (input/output operations per second) and ops/s are the same thing; memory and disk each have their own, very different numbers for that.

Sequential reads only make sense when you're talking about a single physical disk... here you have 5 data disks (plus parity).

To read sequentially, all of the blocks need to be requested and read in the order they spin past the head. With 5 separate data disks, each logical block needs a piece from at least 2 (maybe more) of them, so getting all of that to line up sequentially isn't simple and won't happen very often without a lot of luck (which you can influence a bit if you want to be a mathematician and plan your block sizes to align across ZFS, the physical disks and the zvol).
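If you want to see how those sizes line up on this box, something like the following shows the relevant values (the zpool.cache path is the usual FreeNAS location, adjust if yours differs):

# physical sector / stripe size reported by a member disk
diskinfo -v /dev/da11

# pool alignment shift (ashift=12 means 4K sectors)
zdb -U /data/zfs/zpool.cache -C backup2 | grep ashift

# logical block sizes on the ZFS side
zfs get volblocksize backup2/bkp2_zvol1
zfs get recordsize backup2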
 

grisu

Cadet
Joined
Aug 13, 2017
Messages
3
Hi,

Thanks for your help and time.

I think I will go with a mirrored setup and do some tests.


Chris
 