24x Full Flash but Bad Performance

Janko

Dabbler
Joined
Nov 27, 2018
Messages
31
Hi Folks,

I have some serious problems with my FreeNAS storage.
The IOPS performance is not what we expected.

Here is a benchmark from a VM via NFS/Proxmox:
https://openbenchmarking.org/result/1907181-KH-SECONDTES96

I tested with sync on and off, but the results were the same.
I also tested RAIDZ, mirror-stripe (RAID10), RAIDZ2, and plain stripe layouts, always with similar results.

I suspect my CPU is the bottleneck, but I need help verifying that.

Hardware specs of the FreeNAS box:
- Board: Supermicro X11DPU
- 2x Intel(R) Xeon(R) Bronze 3104 CPU @ 1.70GHz (6 cores each)
- 24x INTEL SSDSC2KG019T8 https://ark.intel.com/content/www/d...0-series-1-92tb-2-5in-sata-6gb-s-3d2-tlc.html
- 256GB ECC RAM
- 4x 10Gbit LAGG

Code:
root@sl-nas-ssd[~]# diskinfo -wS /dev/da23
/dev/da23
        512             # sectorsize
        1920383410176   # mediasize in bytes (1.7T)
        3750748848      # mediasize in sectors
        4096            # stripesize
        0               # stripeoffset
        233473          # Cylinders according to firmware.
        255             # Heads according to firmware.
        63              # Sectors according to firmware.
        ATA INTEL SSDSC2KG01    # Disk descr.
        PHYG911300D41P9DGN      # Disk ident.
        id1,enc@n500304801ee25dfd/type@0/slot@18/elmdesc@Slot23 # Physical path
        Yes             # TRIM/UNMAP support
        0               # Rotation rate in RPM
        Not_Zoned       # Zone Mode

Synchronous random writes:
         0.5 kbytes:    143.9 usec/IO =      3.4 Mbytes/s
           1 kbytes:    141.6 usec/IO =      6.9 Mbytes/s
           2 kbytes:    148.2 usec/IO =     13.2 Mbytes/s
           4 kbytes:    163.0 usec/IO =     24.0 Mbytes/s
           8 kbytes:    167.8 usec/IO =     46.6 Mbytes/s
          16 kbytes:    181.7 usec/IO =     86.0 Mbytes/s
          32 kbytes:    234.7 usec/IO =    133.1 Mbytes/s
          64 kbytes:    318.8 usec/IO =    196.0 Mbytes/s
         128 kbytes:    498.5 usec/IO =    250.8 Mbytes/s
         256 kbytes:    809.8 usec/IO =    308.7 Mbytes/s
         512 kbytes:   1465.3 usec/IO =    341.2 Mbytes/s
        1024 kbytes:   2744.6 usec/IO =    364.4 Mbytes/s
        2048 kbytes:   5189.6 usec/IO =    385.4 Mbytes/s
        4096 kbytes:  10119.0 usec/IO =    395.3 Mbytes/s
        8192 kbytes:  19903.4 usec/IO =    401.9 Mbytes/s
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Can you share your exact pool layout for your tested scenarios? It's not clear from this:
I also tested RAIDZ, mirror-stripe (RAID10), RAIDZ2, and plain stripe layouts, always with similar results.
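For example, the output of something like this would do (zpool list -v also shows the per-vdev breakdown):

Code:
zpool status
zpool list -v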
 

Janko

Dabbler
Joined
Nov 27, 2018
Messages
31
I ran this test with "RAID10": https://openbenchmarking.org/result/1907181-KH-SECONDTES96
[Screenshot attached: pool.jpg showing the pool layout]
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
It seems to me that your pool should be capable of a very high number of IOPS in that layout.

I doubt a single testing client will even make a dent in that potential, so perhaps you need to reconsider what you're expecting from the test (what is the highest performance you can get from one Proxmox VM?).

Perhaps look at reporting on the FreeNAS box while the tests are going on and see if there's any kind of indication of a bottleneck.
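For example, something like this in a second shell while the benchmark runs (standard FreeBSD tools; top's -P shows per-CPU load and gstat -p shows per-disk busy%):

Code:
# per-CPU and per-thread load on the FreeNAS box
top -SHP
# per-disk busy% and latency
gstat -p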

I also see that there was a resilver on the pool... could it be that your pool structure changed after you had already added some data? If there's a new VDEV, the system may direct most writes to that new VDEV until things even out (reports should show that).
 

Janko

Dabbler
Joined
Nov 27, 2018
Messages
31
Hi sretalla,

thanks for the reply!

The resilver was me: I removed a single disk to test its individual speed.

I expected far better IO speed than this:

Code:
root@sl-nas-ssd[/mnt/datapool]# fio --rw=randwrite --name=test --size=10G --direct=1 --numjob=10
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
...
fio-3.5
Starting 10 processes
Jobs: 10 (f=10): [w(10)][2.3%][r=0KiB/s,w=29.5MiB/s][r=0,w=7542 IOPS][eta 58m:38s]




Code:
root@sl-nas-ssd[~]# zpool iostat datapool 1
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
datapool    67.6G  20.7T     49  2.95K  4.74M   178M
datapool    67.6G  20.7T      0  42.3K      0   558M
datapool    67.6G  20.7T      0  42.4K      0   570M
datapool    67.6G  20.7T      0  43.7K      0   546M
datapool    67.6G  20.7T      0  42.6K      0   575M
datapool    67.5G  20.7T      0  44.2K      0   560M
datapool    67.6G  20.7T      0  43.1K      0   560M
datapool    67.6G  20.7T      0  43.0K      0   549M
datapool    67.6G  20.7T      0  42.7K      0   545M
datapool    67.6G  20.7T      0  42.2K      0   555M
datapool    67.6G  20.7T      0  42.8K      0   574M
datapool    67.6G  20.7T      0  44.3K      0   561M
datapool    67.6G  20.7T      0  42.0K      0   544M
datapool    67.5G  20.7T      0  42.5K      0   547M
datapool    67.6G  20.7T      0  44.0K      0   528M
datapool    67.6G  20.7T      0  43.2K      0   530M
datapool    67.6G  20.7T      0  42.8K      0   553M
datapool    67.6G  20.7T      0  42.7K      0   556M
datapool    67.5G  20.7T      0  43.2K      0   530M
datapool    67.6G  20.7T      0  42.6K      0   534M
datapool    67.6G  20.7T      0  41.8K      0   559M
datapool    67.6G  20.7T      0  42.5K      0   554M
datapool    67.5G  20.7T      0  44.7K      0   558M
datapool    67.5G  20.7T      0  42.0K      0   547M
datapool    67.6G  20.7T      0  42.5K      0   546M
datapool    67.6G  20.7T      0  42.3K      0   534M
datapool    67.6G  20.7T      0  43.3K      0   571M
datapool    67.6G  20.7T      0  42.4K      0   560M
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Well, according to the pool iostat you seem to be limited to roughly the speed of a single SATA3 port... That does seem odd. As requested above, we need exact hardware information: RAM, CPU, HBA, etc.

Edit: I see the details you posted, but we still need to know how the drives are connected.
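For example, the output of something like this would show how each disk is attached (camcontrol is part of the FreeBSD base system):

Code:
# list every disk with the bus/target it hangs off
camcontrol devlist -v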
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
CPU could be the culprit here; what was the purpose in choosing a dual CPU setup with low per-core speeds?

Make sure you're doing all work with mirrors. Do not use RAIDZ for VM storage.

Stop testing synchronous random writes. Turn off sync entirely and see what happens. Sync writes involve a huge subsystem traversal. Go check it out:

https://www.ixsystems.com/community/threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/

See especially the section on Laaaaaatency. If you're getting 400MBytes/sec of sync writes with no SLOG device, I'd say that's pretty awesome.
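To take sync out of the picture for a quick test, something along these lines (assuming the dataset is called datapool; set it back afterwards):

Code:
# disable sync writes on the dataset for the duration of the test
zfs set sync=disabled datapool
# ...run the benchmark...
# then restore the default behaviour
zfs inherit sync datapool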
 

Janko

Dabbler
Joined
Nov 27, 2018
Messages
31
Hi guys,

thanks for the replies!

This is the hardware in the server:

Basis: https://www.supermicro.com/en/products/system/2U/2029/SYS-2029U-E1CR4T.cfm (BIOS Version: 3.1) (Firmware Revision : 01.61)
HBA: https://www.supermicro.com/products/accessories/addon/AOC-S3008L-L8i.cfm (JBOD Mode)
Board: Supermicro X11DPU
CPU: 2x Intel(R) Xeon(R) Bronze 3104 @ 1.70GHz
RAM: 8x MEM-DR432L-SL03-ER26 (Samsung RDIMM 32GB, DDR4-2666, CL19-19-19, reg ECC (M393A4K40CB2-CTD)) (currently running at only 2133MHz because of CPU limitations)
PSU: 2x PWS-1K02A-1R 1000W Titanium
SataDOMs: 2x SSD-DM032-SMCMVN1 32GB for FreeNAS https://www.supermicro.com/products/nfo/SATADOM.cfm
SSD Storage: 24x INTEL SSDSC2KG019T8 https://ark.intel.com/content/www/d...0-series-1-92tb-2-5in-sata-6gb-s-3d2-tlc.html


If I've forgotten something, tell me what and I'll post it. :)
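If it helps, I can also post the HBA firmware details, e.g. from something along these lines (assuming the SAS3008 on the AOC-S3008L-L8i responds to sas3flash, which is bundled with FreeNAS as far as I know):

Code:
# list the SAS3 controller(s) with firmware and BIOS versions
sas3flash -list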


Sync Writes ALWAYS:

Code:
root@sl-nas-ssd[/mnt/datapool]# fio --rw=randwrite --name=test --size=10G --direct=1 --numjob=10
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
...
fio-3.5
Starting 10 processes
test: Laying out IO file (1 file / 10240MiB)
test: Laying out IO file (1 file / 10240MiB)
test: Laying out IO file (1 file / 10240MiB)
test: Laying out IO file (1 file / 10240MiB)
test: Laying out IO file (1 file / 10240MiB)
test: Laying out IO file (1 file / 10240MiB)
test: Laying out IO file (1 file / 10240MiB)
test: Laying out IO file (1 file / 10240MiB)
test: Laying out IO file (1 file / 10240MiB)
test: Laying out IO file (1 file / 10240MiB)
Jobs: 10 (f=10): [w(10)][1.2%][r=0KiB/s,w=32.5MiB/s][r=0,w=8324 IOPS][eta 51m:59s]


Code:
root@sl-nas-ssd[~]# zpool iostat datapool 1
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
datapool    32.0G  20.8T      7  1.50K   773K  41.2M
datapool    32.0G  20.8T      0  48.0K      0   321M
datapool    32.1G  20.8T      0  47.5K      0   295M
datapool    32.1G  20.8T      0  48.9K      0   315M
datapool    32.2G  20.8T      0  47.8K      0   322M
datapool    32.3G  20.8T      0  46.4K      0   320M
datapool    32.3G  20.8T      0  46.9K      0   321M
datapool    32.4G  20.8T      0  48.4K      0   332M
datapool    32.4G  20.8T      0  47.1K      0   333M
datapool    32.5G  20.8T      0  47.4K      0   333M
datapool    32.5G  20.8T      0  48.1K      0   288M
datapool    32.6G  20.8T      0  47.8K      0   297M
datapool    32.6G  20.8T      0  47.7K      0   315M
datapool    32.7G  20.8T      0  47.1K      0   343M
datapool    32.7G  20.8T      0  46.5K      0   307M
datapool    32.8G  20.8T      0  48.5K      0   299M
datapool    32.8G  20.8T      0  48.4K      0   342M
datapool    32.9G  20.8T      0  46.5K      0   320M
datapool    32.9G  20.8T      0  46.7K      0   310M
datapool    33.0G  20.8T      0  47.6K      0   332M
datapool    33.0G  20.8T      0  45.8K      0   341M
datapool    33.1G  20.8T      0  47.2K      0   360M
datapool    33.1G  20.8T      0  46.4K      0   357M
^C
root@sl-nas-ssd[~]#



Sync Writes Disabled:
Code:
root@sl-nas-ssd[/mnt/datapool]# fio --rw=randwrite --name=test --size=10G --direct=1 --numjob=10
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
...
fio-3.5
Starting 10 processes
test: Laying out IO file (1 file / 10240MiB)
test: Laying out IO file (1 file / 10240MiB)
test: Laying out IO file (1 file / 10240MiB)
test: Laying out IO file (1 file / 10240MiB)
test: Laying out IO file (1 file / 10240MiB)
test: Laying out IO file (1 file / 10240MiB)
test: Laying out IO file (1 file / 10240MiB)
test: Laying out IO file (1 file / 10240MiB)
test: Laying out IO file (1 file / 10240MiB)
test: Laying out IO file (1 file / 10240MiB)
Jobs: 10 (f=10): [w(10)][3.9%][r=0KiB/s,w=140MiB/s][r=0,w=35.0k IOPS][eta 12m:40s]

Code:
root@sl-nas-ssd[~]# zpool iostat datapool 1
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
datapool    32.9G  20.8T      7  1.51K   772K  41.2M
datapool    33.2G  20.8T      0  45.9K      0   334M
datapool    33.4G  20.8T      0  43.7K      0   350M
datapool    33.5G  20.8T      0  45.6K      0   382M
datapool    33.7G  20.8T      0  40.0K      0   335M
datapool    34.0G  20.8T      0  39.7K      0   333M
datapool    34.1G  20.8T      0  42.5K      0   351M
datapool    34.3G  20.8T      0  44.5K      0   370M
datapool    34.4G  20.8T      0  43.3K      0   360M
datapool    34.5G  20.8T      0  42.9K      0   354M
datapool    34.9G  20.8T      0  37.4K      0   311M
datapool    34.9G  20.8T      0  38.9K      0   324M
datapool    35.0G  20.8T      0  41.0K      0   346M
datapool    35.1G  20.8T      0  41.7K      0   361M
datapool    35.3G  20.8T      0  41.6K      0   353M
datapool    35.4G  20.8T      0  43.9K      0   380M
datapool    35.5G  20.8T      0  44.7K      0   384M
datapool    35.8G  20.8T      0  37.2K      0   326M
datapool    35.9G  20.8T      0  38.7K      0   341M
^C
root@sl-nas-ssd[~]#





Thank you for your help!
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Try the iostat with a -v; this will show each disk so you can see whether any of them are dragging down performance.
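Something like this (per-vdev and per-disk stats, refreshed every second):

Code:
zpool iostat -v datapool 1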
 