Block storage limits?

ehsab

Dabbler
Joined
Aug 2, 2020
Messages
45
Hi,
I'm testing a new system and have tried different NVMe disks to see what the performance difference is.
I'm not sure if I'm hitting any limits of any protocols; I would like some guidance from the community (beyond the resources/stickies I've already read).

The system specs:

Supermicro 1114-WN10RT
AMD Rome 7402P
512GB 3200MHz RAM
10Gbit NIC (iSCSI only)

With four PM983 disks (1.9TB NVMe U.2) as a striped mirror (two vdevs), exported over iSCSI as a 1TB zvol (64KiB volblocksize, sync=always), I get this performance when running the benchmark in a VM (the only one on the storage).
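For reference, the CLI equivalent of that zvol setup would be roughly the following (just a sketch; the pool and zvol names are placeholders):
Code:
# 1TB zvol with 64KiB blocks, forcing synchronous writes
zfs create -V 1T -o volblocksize=64K tank/vmw01
zfs set sync=always tank/vmw01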

[screenshot: 1608052513896.png]



And with ten PM1733 disks (1.9TB NVMe U.2) as a striped mirror (five vdevs), exported over iSCSI as a 5TB zvol (64KiB volblocksize, sync=always), I get roughly the same numbers.

[screenshot: tank-vmw01-sync-always-no-slog.jpg]


Is this expected? Or am I hitting some kind of limitation?
I wouldn't expect more on the first test, since the link is only 10Gbit at the moment.
These are just basic tests; I'm still waiting on a dual 40Gbit NIC so I can test multipathing and more. But I was a little puzzled that there was not more difference between the tests, when the disks are very different performance-wise.

I ran diskinfo on the disks also.

PM983:
Code:
# diskinfo -wS /dev/nvd3
/dev/nvd3
        512             # sectorsize
        1920383410176   # mediasize in bytes (1.7T)
        3750748848      # mediasize in sectors
        0               # stripesize
        0               # stripeoffset
        SAMSUNG MZQLB1T9HAJR-00007      # Disk descr.
        S439NXXX901635  # Disk ident.
        Yes             # TRIM/UNMAP support
        0               # Rotation rate in RPM

Synchronous random writes:
         0.5 kbytes:     13.1 usec/IO =     37.3 Mbytes/s
           1 kbytes:     13.3 usec/IO =     73.5 Mbytes/s
           2 kbytes:     13.7 usec/IO =    142.9 Mbytes/s
           4 kbytes:     14.3 usec/IO =    274.1 Mbytes/s
           8 kbytes:     15.6 usec/IO =    499.9 Mbytes/s
          16 kbytes:     18.5 usec/IO =    843.2 Mbytes/s
          32 kbytes:     24.0 usec/IO =   1304.4 Mbytes/s
          64 kbytes:     34.2 usec/IO =   1826.7 Mbytes/s
         128 kbytes:     60.0 usec/IO =   2084.3 Mbytes/s
         256 kbytes:    117.3 usec/IO =   2132.0 Mbytes/s
         512 kbytes:    234.5 usec/IO =   2132.4 Mbytes/s
        1024 kbytes:    465.9 usec/IO =   2146.2 Mbytes/s
        2048 kbytes:    923.8 usec/IO =   2164.9 Mbytes/s
        4096 kbytes:   1840.1 usec/IO =   2173.8 Mbytes/s
        8192 kbytes:   3696.7 usec/IO =   2164.1 Mbytes/s


PM1733:
Code:
# diskinfo -wS /dev/nvd0
/dev/nvd0
        512             # sectorsize
        1920383410176   # mediasize in bytes (1.7T)
        3750748848      # mediasize in sectors
        0               # stripesize
        0               # stripeoffset
        SAMSUNG MZWLJ1T9HBJR-00007      # Disk descr.
        XXXXXXXXXXXXXX  # Disk ident.
        Yes             # TRIM/UNMAP support
        0               # Rotation rate in RPM

Synchronous random writes:
         0.5 kbytes:     20.6 usec/IO =     23.7 Mbytes/s
           1 kbytes:     20.3 usec/IO =     48.2 Mbytes/s
           2 kbytes:     19.1 usec/IO =    102.2 Mbytes/s
           4 kbytes:     13.8 usec/IO =    283.3 Mbytes/s
           8 kbytes:     15.3 usec/IO =    510.1 Mbytes/s
          16 kbytes:     17.2 usec/IO =    907.4 Mbytes/s
          32 kbytes:     20.7 usec/IO =   1508.6 Mbytes/s
          64 kbytes:     27.3 usec/IO =   2292.8 Mbytes/s
         128 kbytes:     50.4 usec/IO =   2481.3 Mbytes/s
         256 kbytes:    101.2 usec/IO =   2470.3 Mbytes/s
         512 kbytes:    202.2 usec/IO =   2473.0 Mbytes/s
        1024 kbytes:    403.0 usec/IO =   2481.3 Mbytes/s
        2048 kbytes:    813.3 usec/IO =   2459.2 Mbytes/s
        4096 kbytes:   1620.7 usec/IO =   2468.1 Mbytes/s
        8192 kbytes:   3258.2 usec/IO =   2455.3 Mbytes/s



Is there anything more I can tweak for better performance, or is this as good as it gets?
I did try with a SLOG (RMS-300 8GB) but that gave slightly worse numbers than without.
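The SLOG test was simply attaching and later removing the log device, roughly like this (pool and device names are placeholders):
Code:
zpool add tank log nvd10
zpool remove tank nvd10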

Kind Regards
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hey @ehsab,

Reads and writes are in the range of 1,150 MB/s. 8 x 1,150 = 9,200 Mb/s = 9.2 Gb/s.

Does that point you in the direction you are looking for?

Did you configure jumbo frames to take you closer to the theoretical limit of 10G? With multiple NICs, you can configure round robin at every packet, so basically using all interfaces at once.
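Something like this on both ends would do it (a rough sketch only; the interface, vSwitch and vmkernel names are examples, and your switches need to allow an MTU of 9000 as well):
Code:
# TrueNAS / FreeBSD side (interface name is an example)
ifconfig ix0 mtu 9000

# ESXi side: the vSwitch and the iSCSI vmkernel port
esxcli network vswitch standard set -v vSwitch1 -m 9000
esxcli network ip interface set -i vmk1 -m 9000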
 

ehsab

Dabbler
Joined
Aug 2, 2020
Messages
45
Hey @ehsab,

Reads and writes are in the range of 1,150 MB/s. 8 x 1,150 = 9,200 Mb/s = 9.2 Gb/s.

Does that point you in the direction you are looking for?

Did you configure jumbo frames to take you closer to the theoretical limit of 10G? With multiple NICs, you can configure round robin at every packet, so basically using all interfaces at once.
I have not configured jumbo frames yet; that usually never gives more than a couple of percent extra.
I was more wondering why 1M Q1T1 and RND4K didn't differ more between the two sets of disks.
SEQ1M Q8T1 seems to saturate the 10Gbit link.
 

ehsab

Dabbler
Joined
Aug 2, 2020
Messages
45
There is something fishy with these new disks.
With the older PM983 disks I got about 3GB/s read.
With these PM1733 disks, which should be roughly double the performance of the PM983, I get 1.4GB/s read.

Code:
dT: 1.064s  w: 1.000s
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    8  11301  11301 1446499    0.6      0      0    0.0   97.7| nvd0
    1  11172  11172 1430018    0.6      0      0    0.0   97.7| nvd1
    8  13987  13987 1790319    0.5      0      0    0.0   97.7| nvd2
    8  14240  14240 1822680    0.5      0      0    0.0   97.7| nvd3
 

ehsab

Dabbler
Joined
Aug 2, 2020
Messages
45
@HoneyBadger @jgreco Would you mind taking a look at this? Is this to be expected or am I missing something?

I have been doing some more work/tests.
First I verified that the NVMe disks are actually mapped to the correct PCIe lanes.

Code:
  HostBridge
    PCIBridge
      PCI 01:00.0 (NVMExp)
        Block(Disk) "nvme8n1"
    PCIBridge
      PCI 02:00.0 (NVMExp)
        Block(Disk) "nvme9c9n1"
    PCIBridge
      PCI 03:00.0 (NVMExp)
        Block(Disk) "nvme10c10n1"
  HostBridge
    PCIBridge
      PCI 41:00.0 (NVMExp)
        Block(Disk) "nvme6c6n1"
    PCIBridge
      PCI 42:00.0 (NVMExp)
        Block(Disk) "nvme7c7n1"
    PCIBridge
      PCI 45:00.0 (Ethernet)
        Net "eno1np0"
      PCI 45:00.1 (Ethernet)
        Net "eno2np1"
    PCIBridge
      PCI 46:00.0 (SATA)
        Block(Disk) "sdb"
        Block(Disk) "sda"
    PCIBridge
      PCIBridge
        PCI 4a:00.0 (VGA)
    PCIBridge
      PCI 4d:00.0 (SATA)
    PCIBridge
      PCI 4e:00.0 (SATA)
  HostBridge
    PCIBridge
      PCI 81:00.0 (NVMExp)
        Block(Disk) "nvme4c4n1"
    PCIBridge
      PCI 82:00.0 (NVMExp)
        Block(Disk) "nvme5c5n1"
    PCIBridge
      PCI 87:00.0 (SATA)
    PCIBridge
      PCI 88:00.0 (SATA)
  HostBridge
    PCIBridge
      PCI c1:00.0 (NVMExp)
        Block(Disk) "nvme0c0n1"
    PCIBridge
      PCI c2:00.0 (NVMExp)
        Block(Disk) "nvme1c1n1"
    PCIBridge
      PCI c3:00.0 (NVMExp)
        Block(Disk) "nvme2c2n1"
    PCIBridge
      PCI c4:00.0 (NVMExp)
        Block(Disk) "nvme3c3n1"


Then I configured two pools with the same parameters (64K blocksize, no compression, sync=always).

Code:
  pool: pm1733
 state: ONLINE
config:

        NAME                                            STATE     READ WRITE CKSUM
        pm1733                                          ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/094b02a9-4086-11eb-b5e3-0007432cbe70  ONLINE       0     0     0
            gptid/09500c72-4086-11eb-b5e3-0007432cbe70  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/0930fb84-4086-11eb-b5e3-0007432cbe70  ONLINE       0     0     0
            gptid/094d9393-4086-11eb-b5e3-0007432cbe70  ONLINE       0     0     0

errors: No known data errors

  pool: pm983
 state: ONLINE
config:

        NAME                                            STATE     READ WRITE CKSUM
        pm983                                           ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/ee186b95-4085-11eb-b5e3-0007432cbe70  ONLINE       0     0     0
            gptid/ee324314-4085-11eb-b5e3-0007432cbe70  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/ee2cad08-4085-11eb-b5e3-0007432cbe70  ONLINE       0     0     0
            gptid/ee2ffdbb-4085-11eb-b5e3-0007432cbe70  ONLINE       0     0     0

errors: No known data errors
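
For completeness, the no-compression and sync settings correspond to roughly these commands:
Code:
zfs set compression=off pm983
zfs set sync=always pm983
zfs set compression=off pm1733
zfs set sync=always pm1733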


I received my Chelsio T580-LP-CR card, so I can now run MPIO to the hosts.
I configured iSCSI with two separate networks and enabled round robin with IOPS=1 on the hosts.
I created a VM with its OS disk local to the host, and then added two hard drives to it, one on each zvol; that is the only I/O the zvols are seeing, nothing else is on there.
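For reference, the round-robin tweak on the ESXi side was along these lines (the device identifier is a placeholder):
Code:
esxcli storage nmp device set --device=naa.XXXXXXXX --psp=VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set --device=naa.XXXXXXXX --type=iops --iops=1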

This gave me the following performance.

PM983

[screenshot: 4x983-2x80Gb-io1-mpio-synca.jpg]

[screenshot: 4x983-2x80Gb-io1-mpio-synca-nvme.jpg]



PM1733

[screenshot: 4x1733-2x80Gb-io1-mpio-synca.jpg]

[screenshot: 4x1733-2x80Gb-io1-mpio-synca-nvme.jpg]



Are these values "good" considering the kind of disks I have?
I have yet to tune the kernel variables, so there is more work to be done.
But I noticed that all reads are coming from the RAM cache; how do I use fio without involving the RAM cache? (A couple of untested ideas are sketched below the dd results.)
The only way I could get the reads to actually hit the disks was with dd if=/dev/nvd0 of=/dev/null bs=128k, which gave me the following performance.

PM983 (single disk)

Code:
# dd if=/dev/nvd0 of=/dev/null bs=128k
139091+0 records in
139091+0 records out
18230935552 bytes transferred in 6.702614 secs (2719973976 bytes/sec)



PM1733 (single disk)

Code:
# dd if=/dev/nvd4 of=/dev/null bs=128k
47322+0 records in
47322+0 records out
6202589184 bytes transferred in 6.095193 secs (1017619891 bytes/sec)
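
Maybe fio could bypass the cache either by reading the raw device directly, or by running against a scratch dataset with primarycache limited to metadata? Untested sketches; device, dataset, paths and sizes are placeholders:
Code:
# option 1: random reads straight from the raw device, bypassing ZFS/ARC entirely
fio --name=rawread --filename=/dev/nvd0 --rw=randread --bs=128k \
    --ioengine=posixaio --iodepth=8 --runtime=30 --time_based

# option 2: scratch dataset that only caches metadata, then read a file on it
zfs create -o primarycache=metadata pm1733/fiotest
fio --name=poolread --directory=/mnt/pm1733/fiotest --size=10g --rw=randread \
    --bs=128k --ioengine=posixaio --iodepth=8 --runtime=30 --time_based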


I also noticed that turning sync off (on the pool and zvol) gave me lower write speeds, so I'm not sure I can trust the numbers here.
So, any input/thoughts are very much appreciated.

Thank you!
 

ehsab

Dabbler
Joined
Aug 2, 2020
Messages
45
I discovered that the disks were not showing the expected performance during "nvmecontrol perftest".

Code:
root@tnas01[~]# nvmecontrol perftest -n 32 -o read -s 512 -t 30 nvme1ns1
Threads: 32 Size:    512  READ Time:  30 IO/s:   16858 MB/s:    8

root@tnas01[~]# nvmecontrol perftest -n 32 -o read -s 512 -t 30 nvme2ns1
Threads: 32 Size:    512  READ Time:  30 IO/s:   16859 MB/s:    8

root@tnas01[~]# nvmecontrol perftest -n 32 -o read -s 512 -t 30 nvme3ns1
Threads: 32 Size:    512  READ Time:  30 IO/s:   16797 MB/s:    8

root@tnas01[~]# nvmecontrol perftest -n 32 -o read -s 512 -t 30 nvme4ns1
Threads: 32 Size:    512  READ Time:  30 IO/s:  120516 MB/s:   58

root@tnas01[~]# nvmecontrol perftest -n 32 -o read -s 512 -t 30 nvme5ns1
Threads: 32 Size:    512  READ Time:  30 IO/s:   95041 MB/s:   46

root@tnas01[~]# nvmecontrol perftest -n 32 -o read -s 512 -t 30 nvme6ns1
Threads: 32 Size:    512  READ Time:  30 IO/s:   95716 MB/s:   46

root@tnas01[~]# nvmecontrol perftest -n 32 -o read -s 512 -t 30 nvme7ns1
Threads: 32 Size:    512  READ Time:  30 IO/s:   60114 MB/s:   29

root@tnas01[~]# nvmecontrol perftest -n 32 -o read -s 512 -t 30 nvme8ns1
Threads: 32 Size:    512  READ Time:  30 IO/s:  117222 MB/s:   57

root@tnas01[~]# nvmecontrol perftest -n 32 -o read -s 512 -t 30 nvme9ns1
Threads: 32 Size:    512  READ Time:  30 IO/s:   60178 MB/s:   29


It seems like the perftest output of nvmecontrol is missing a ","; I don't think the speeds are that fast. 117222 MB/s for nvme8ns1 should probably be 1172,22 MB/s, and so on.

Anyway, that's not my main concern. When I issued nvmecontrol format -s 0 nvme4, I got this warning on the console:

Code:
Dec 18 11:49:45 tnas01 nvme4: async event occurred (type 0x2, info 0x00, page 0x04)


Does anyone know the meaning of this message?
I didn't get the warning when I formatted nvme0-3 (the PM983 PCIe gen3 disks), but for nvme4-9 (the PM1733 PCIe gen4 disks) I got it.
Isn't that weird?

After the disks were formatted I got the expected read performance (though not for the PM1733).

Code:
root@tnas01[~]# nvmecontrol perftest -n 32 -o read -s 512 -t 30 nvme0ns1
Threads: 32 Size:    512  READ Time:  30 IO/s:  312312 MB/s:  152

root@tnas01[~]# nvmecontrol perftest -n 32 -o read -s 512 -t 30 nvme1ns1
Threads: 32 Size:    512  READ Time:  30 IO/s:  312338 MB/s:  152

root@tnas01[~]# nvmecontrol perftest -n 32 -o read -s 512 -t 30 nvme2ns1
Threads: 32 Size:    512  READ Time:  30 IO/s:  312241 MB/s:  152

root@tnas01[~]# nvmecontrol perftest -n 32 -o read -s 512 -t 30 nvme3ns1
Threads: 32 Size:    512  READ Time:  30 IO/s:  312401 MB/s:  152

root@tnas01[~]# nvmecontrol perftest -n 32 -o read -s 512 -t 30 nvme4ns1
Threads: 32 Size:    512  READ Time:  30 IO/s:  179930 MB/s:   87

root@tnas01[~]# nvmecontrol perftest -n 32 -o read -s 512 -t 30 nvme5ns1
Threads: 32 Size:    512  READ Time:  30 IO/s:  179677 MB/s:   87

root@tnas01[~]# nvmecontrol perftest -n 32 -o read -s 512 -t 30 nvme6ns1
Threads: 32 Size:    512  READ Time:  30 IO/s:  180024 MB/s:   87

root@tnas01[~]# nvmecontrol perftest -n 32 -o read -s 512 -t 30 nvme7ns1
Threads: 32 Size:    512  READ Time:  30 IO/s:  179962 MB/s:   87

root@tnas01[~]# nvmecontrol perftest -n 32 -o read -s 512 -t 30 nvme8ns1
Threads: 32 Size:    512  READ Time:  30 IO/s:  179399 MB/s:   87

root@tnas01[~]# nvmecontrol perftest -n 32 -o read -s 512 -t 30 nvme9ns1
Threads: 32 Size:    512  READ Time:  30 IO/s:  179768 MB/s:   87


Running jgreco's solnet-array-test-v2.sh on the disks (parallel read test):

Code:
dT: 1.002s  w: 1.000s
L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    1   3090   3090 3164074    0.3      0      0    0.0   96.1| nvd0
    1   3095   3095 3169186    0.3      0      0    0.0   96.2| nvd1
    1   3101   3101 3175320    0.3      0      0    0.0   96.3| nvd2
    1   3089   3089 3163052    0.3      0      0    0.0   96.1| nvd3
    6  11232  11232 1437635    0.6      0      0    0.0   98.3| nvd4
    8  11238  11238 1438402    0.6      0      0    0.0   98.3| nvd5
    1  11237  11237 1438274    0.6      0      0    0.0   98.3| nvd6
    2  11243  11243 1439040    0.6      0      0    0.0   98.3| nvd7
    8  11284  11284 1444408    0.6      0      0    0.0   98.3| nvd8
    8  11222  11222 1436357    0.6      0      0    0.0   98.3| nvd9


Not very good numbers for the PM1733 disks :/
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
Just out of curiosity, what's the temperature of the PM1733's when you're running those tests? My experience with them is they run rather hot.

Have you updated to the latest Samsung firmware?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Would you mind taking a look at this? Is this to be expected or am I missing something?

Drives 4-7 (inclusive) seem to be sharing a PCIe host port with the SATA and onboard 10GbE. This might be harming their performance in remote-access tests - are you able to try with an add-in NIC?

I noticed you're using 512-byte granularity in your read tests. If you switch to 4K do the same performance results appear? It wouldn't be unreasonable to think that the newer PM1733s aren't tuned for the same "legacy" 512-byte sector performance.

Just out of curiosity, what's the temperature of the PM1733's when you're running those tests? My experience with them is they run rather hot.

I'd check into this. Enterprise NVMe runs downright toasty, usually to the tune of 20W+ per drive, and I recall there being some issues with thermal throttling under extended workloads in earlier Samsung drives. Check the firmware as suggested, and also look at the nvmecontrol output to see if they were perhaps set to a lower power state out of the box to counter this.

Couple questions.

1. With the RMS-300 in place, what's the impact on your RND4K write values? Especially the Q1T1, which aligns with the "guest OS demands an immediate write with data integrity" scenario.

2. CrystalDiskMark isn't an ideal way to test the full/concurrent throughput of multiple machines. Are you comfortable setting up something like HCIBench instead? This will more accurately represent a multi-VM workload.
 

Herr_Merlin

Patron
Joined
Oct 25, 2019
Messages
200
Use ATTO to test from a solo VM with random values, to see more of the "true" performance and not only cache.
 

ehsab

Dabbler
Joined
Aug 2, 2020
Messages
45
Just out of curiosity, what's the temperature of the PM1733's when you're running those tests? My experience with them is they run rather hot.

Have you updated to the latest Samsung firmware?

44°C max under load so far. The PM983 runs cooler.
That's the thing about Samsung's enterprise disks: they don't offer firmware or software for them to end customers on their homepage.
Here is more info for the disk:

Code:
nvmecontrol identify -v nvme4
Controller Capabilities/Features
================================
Vendor ID:                   144d
Subsystem Vendor ID:         144d
Serial Number:               S4YNNE0NAXXXXX
Model Number:                SAMSUNG MZWLJ1T9HBJR-00007
Firmware Version:            EPK98B5Q
Recommended Arb Burst:       8
IEEE OUI Identifier:         38 25 00
Multi-Path I/O Capabilities: Multiple controllers, Multiple ports
Max Data Transfer Size:      131072 bytes
Controller ID:               0x0041
Version:                     1.3.0


And the PM983

Code:
# nvmecontrol identify -v nvme0
Controller Capabilities/Features
================================
Vendor ID:                   144d
Subsystem Vendor ID:         144d
Serial Number:               S439NA0N9XXXXX
Model Number:                SAMSUNG MZQLB1T9HAJR-00007
Firmware Version:            EDA5402Q
Recommended Arb Burst:       2
IEEE OUI Identifier:         38 25 00
Multi-Path I/O Capabilities: Not Supported
Max Data Transfer Size:      2097152 bytes
Controller ID:               0x0004
Version:                     1.2.0
 

ehsab

Dabbler
Joined
Aug 2, 2020
Messages
45
Drives 4-7 (inclusive) seem to be sharing a PCIe host port with the SATA and onboard 10GbE. This might be harming their performance in remote-access tests - are you able to try with an add-in NIC?

I noticed you're using 512-byte granularity in your read tests. If you switch to 4K do the same performance results appear? It wouldn't be unreasonable to think that the newer PM1733s aren't tuned for the same "legacy" 512-byte sector performance.



I'd check into this. Enterprise NVMe runs downright toasty, usually to the tune of 20W+ per drive, and I recall there being some issues with thermal throttling under extended workloads in earlier Samsung drives. Check the firmware as suggested, and also look at the nvmecontrol output to see if they were perhaps set to a lower power state out of the box to counter this.

Couple questions.

1. With the RMS-300 in place, what's the impact on your RND4K write values? Especially the Q1T1, which aligns with the "guest OS demands an immediate write with data integrity" scenario.

2. CrystalDiskMark isn't an ideal way to test the full/concurrent throughput of multiple machines. Are you comfortable setting up something like HCIBench instead? This will more accurately represent a multi-VM workload.



How did you come to the conclusion that they share PCIe lanes? It would be interesting to know. They shouldn't do that; here's a screenshot of the architecture of the motherboard.

[screenshot: 1608365156030.png (motherboard block diagram)]



You are absolutely right, big difference when using 4K instead of 512:
Code:
# nvmecontrol perftest -n 32 -o read -s 4096 -t 30 nvme0ns1
Threads: 32 Size:   4096  READ Time:  30 IO/s:   31729 MB/s:  123

# nvmecontrol perftest -n 32 -o read -s 4096 -t 30 nvme4ns1
Threads: 32 Size:   4096  READ Time:  30 IO/s:   81372 MB/s:  317


Does this mean I need to change from 512B to 4K sectors on the physical disk to be able to utilize this? I have no idea how to do that, since Samsung doesn't offer any software (that I know of) for these drives.
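Maybe nvmecontrol itself can do it, if the namespace advertises a 4K LBA format? An untested sketch (the format index is a guess, and reformatting destroys all data on the namespace):
Code:
# list the LBA formats the namespace supports
nvmecontrol identify nvme4ns1
# apply a 4K LBA format; index 1 is only an example, check the identify output first
nvmecontrol format -f 1 nvme4ns1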


Both drives (PM983 and PM1733) only support one power state.
Code:
# nvmecontrol power -l nvme4

Power States Supported: 1

 #   Max pwr  Enter Lat  Exit Lat RT RL WT WL Idle Pwr  Act Pwr Workloadd
--  --------  --------- --------- -- -- -- -- -------- -------- --
 0: 25.0000W    0.100ms   0.100ms  0  0  0  0  0.0000W  0.1500W 0


# nvmecontrol power -l nvme0

Power States Supported: 1

 #   Max pwr  Enter Lat  Exit Lat RT RL WT WL Idle Pwr  Act Pwr Workloadd
--  --------  --------- --------- -- -- -- -- -------- -------- --
 0: 10.6000W    0.000ms   0.000ms  0  0  0  0  0.0000W  0.0000W 0



During my initial tests I saw a decrease in performance when using the RMS-300 disk as a SLOG, but I'm not sure I did the test accurately. If you could give some input on that, it would be wildly appreciated.
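Maybe something like this fio run would isolate the sync-write path that the SLOG is supposed to help with? An untested sketch; the directory and sizes are placeholders:
Code:
fio --name=syncwrite --directory=/mnt/pm1733 --rw=randwrite --bs=4k \
    --ioengine=posixaio --iodepth=1 --numjobs=1 --fsync=1 \
    --size=4g --runtime=60 --time_based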

Sure, I configured HCIBench (not sure about how to tune the tests, though) and here are the results.

PM983

Code:
Datastore     = pm983
=============================
JOB_NAME    = job0
VMs        = 3
IOPS        = 172876.33 IO/S
THROUGHPUT    = 1350.00 MB/s
R_LATENCY    = 0.24 ms
W_LATENCY    = 0.39 ms
95%tile_R_LAT    = 0.00 ms
95%tile_W_LAT    = 0.00 ms
=============================


PM1733

Code:
Datastore     = pm1733
=============================
JOB_NAME    = job0
VMs        = 3
IOPS        = 168413.18 IO/S
THROUGHPUT    = 1315.00 MB/s
R_LATENCY    = 0.24 ms
W_LATENCY    = 0.41 ms
95%tile_R_LAT    = 0.00 ms
95%tile_W_LAT    = 0.00 ms
=============================
 

ehsab

Dabbler
Joined
Aug 2, 2020
Messages
45
Huge difference in writes as well on the PM1733 disk when using 4K instead of 512B.

Code:
# nvmecontrol perftest -n 32 -o write -s 4096 -t 30 nvme8ns1
Threads: 32 Size:   4096 WRITE Time:  30 IO/s:  162702 MB/s:  635

# nvmecontrol perftest -n 32 -o write -s 512 -t 30 nvme8ns1
Threads: 32 Size:    512 WRITE Time:  30 IO/s:   80986 MB/s:   39
 

ehsab

Dabbler
Joined
Aug 2, 2020
Messages
45
I must shamefully admit I read the numbers wrong on the nvmecontrol perf tests; I mixed up IOPS and throughput.
Nevertheless, the PM983 outperforms the PM1733 by far on writes. Odd numbers though; I would have expected better read speeds than write speeds.

Write
Code:
PM1733 - 4K
# nvmecontrol perftest -n 32 -o write -s 4096 -t 30 nvme4ns1
Threads: 32 Size:   4096 WRITE Time:  30 IO/s:  149500 MB/s:  583

PM983 - 4K
# nvmecontrol perftest -n 32 -o write -s 4096 -t 30 nvme0ns1
Threads: 32 Size:   4096 WRITE Time:  30 IO/s:  273764 MB/s: 1069


PM1733 - 512b
# nvmecontrol perftest -n 32 -o write -s 512 -t 30 nvme4ns1
Threads: 32 Size:    512 WRITE Time:  30 IO/s:   80545 MB/s:   39

PM983 - 512b
# nvmecontrol perftest -n 32 -o write -s 512 -t 30 nvme0ns1
Threads: 32 Size:    512 WRITE Time:  30 IO/s:  279218 MB/s:  136


Read
Code:
PM1733 - 4K
# nvmecontrol perftest -n 32 -o read -s 4096 -t 30 nvme4ns1
Threads: 32 Size:   4096  READ Time:  30 IO/s:  155535 MB/s:  607

PM983 - 4K
# nvmecontrol perftest -n 32 -o read -s 4096 -t 30 nvme0ns1
Threads: 32 Size:   4096  READ Time:  30 IO/s:  147080 MB/s:  574


PM1733 - 512b
# nvmecontrol perftest -n 32 -o read -s 512 -t 30 nvme4ns1
Threads: 32 Size:    512  READ Time:  30 IO/s:  317197 MB/s:  154

PM983 - 512b
# nvmecontrol perftest -n 32 -o read -s 512 -t 30 nvme0ns1
Threads: 32 Size:    512  READ Time:  30 IO/s:  134354 MB/s:   65
 