
SLOG benchmarking and finding the best SLOG

kspare

Guru
Joined
Feb 19, 2015
Messages
508
Just curious about your thoughts, HB.

I have the following NVMe drives:
2x Intel 750 400GB (set to 4K with 100GB OP)
2x Intel Optane 280GB
1x Intel P3700 400GB
For my L2ARC:
2x Intel 750 1.2TB (I'll be changing these to 4K)
1x Intel P3520 1.2TB (I'll be changing this to 4K)

I have 3 storage servers: 1 for terminal servers, 1 for file servers, and 1 for misc.
The TS and FS storage servers both have 256GB of RAM; the misc server has 64GB.
All servers have 40Gb Chelsio NICs.
The TS server has 12Gb SAS Hitachi drives.
The FS server has 12Gb SATA WD drives.

What would you use for ZIL/SLOG, and what would you use for L2ARC?
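For context on what's actually being assigned here: attaching a SLOG or L2ARC device to an existing pool is a one-liner each in ZFS. A minimal sketch — the pool name `tank` and device nodes below are placeholders, not from this thread:

```shell
# Hypothetical pool/device names; substitute your own.
# Attach a dedicated log (SLOG) device to pool "tank":
zpool add tank log nvd0

# Attach an L2ARC (cache) device:
zpool add tank cache nvd1

# Confirm where each device landed:
zpool status tank
```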
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
My Intel Optane 280GB
Code:
diskinfo -wS /dev/nvd0
/dev/nvd0
		512			 # sectorsize
		280065171456	# mediasize in bytes (261G)
		547002288	   # mediasize in sectors
		0			   # stripesize
		0			   # stripeoffset
		INTEL SSDPED1D280GA	 # Disk descr.
		PHMB743500RY280CGN	  # Disk ident.

Synchronous random writes:
		 0.5 kbytes:	 17.9 usec/IO =	 27.3 Mbytes/s
		   1 kbytes:	 18.0 usec/IO =	 54.1 Mbytes/s
		   2 kbytes:	 18.8 usec/IO =	103.8 Mbytes/s
		   4 kbytes:	 15.7 usec/IO =	248.3 Mbytes/s
		   8 kbytes:	 18.9 usec/IO =	414.3 Mbytes/s
		  16 kbytes:	 25.8 usec/IO =	605.2 Mbytes/s
		  32 kbytes:	 38.8 usec/IO =	804.9 Mbytes/s
		  64 kbytes:	 65.2 usec/IO =	959.2 Mbytes/s
		 128 kbytes:	119.7 usec/IO =   1044.7 Mbytes/s
		 256 kbytes:	220.0 usec/IO =   1136.4 Mbytes/s
		 512 kbytes:	399.4 usec/IO =   1251.9 Mbytes/s
		1024 kbytes:	760.5 usec/IO =   1314.9 Mbytes/s
		2048 kbytes:   1475.0 usec/IO =   1355.9 Mbytes/s
		4096 kbytes:   2970.0 usec/IO =   1346.8 Mbytes/s
		8192 kbytes:   5828.3 usec/IO =   1372.6 Mbytes/s
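As a sanity check on how `diskinfo` derives its throughput column: the MB/s figure is just the block size divided by the per-IO latency. A quick awk one-liner reproduces the first row of the table above:

```shell
# Recompute diskinfo's MB/s from the usec/IO column:
#   MB/s = (kbytes * 1e6) / (usec_per_IO * 1024)
# For the 0.5 kbyte row (17.9 usec/IO):
awk 'BEGIN { kb = 0.5; usec = 17.9; printf "%.1f\n", kb * 1e6 / (usec * 1024) }'
# prints 27.3
```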

 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
Intel 750 400GB 4k Sectors and Over Provisioned to 100GB

Code:
diskinfo -wS /dev/nvd1
diskinfo: /dev/nvd1: Operation not permitted
root@KSPSAN06:~ # diskinfo -wS /dev/nvd2
/dev/nvd2
		4096			# sectorsize
		99999997952	 # mediasize in bytes (93G)
		24414062		# mediasize in sectors
		131072		  # stripesize
		0			   # stripeoffset
		INTEL SSDPEDMW400G4	 # Disk descr.
		CVCQ5214004G400AGN	  # Disk ident.

Synchronous random writes:
		   4 kbytes:	 12.2 usec/IO =	319.6 Mbytes/s
		   8 kbytes:	 14.8 usec/IO =	526.2 Mbytes/s
		  16 kbytes:	 19.5 usec/IO =	801.4 Mbytes/s
		  32 kbytes:	 34.7 usec/IO =	900.5 Mbytes/s
		  64 kbytes:	 70.3 usec/IO =	889.5 Mbytes/s
		 128 kbytes:	143.5 usec/IO =	870.9 Mbytes/s
		 256 kbytes:	256.7 usec/IO =	973.9 Mbytes/s
		 512 kbytes:	516.6 usec/IO =	967.9 Mbytes/s
		1024 kbytes:   1032.0 usec/IO =	969.0 Mbytes/s
		2048 kbytes:   2064.3 usec/IO =	968.8 Mbytes/s
		4096 kbytes:   4084.6 usec/IO =	979.3 Mbytes/s
		8192 kbytes:   8211.0 usec/IO =	974.3 Mbytes/s
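Regarding the "Operation not permitted" on nvd1 in the output above: `diskinfo -wS` performs destructive writes and refuses devices that GEOM considers in use. If (and only if) the device truly holds no live data, the usual FreeBSD workaround is the GEOM "foot-shooting" sysctl. A sketch, at your own risk:

```shell
# WARNING: disables GEOM's write protection globally; only do this when
# you are certain the target device is not part of any pool.
sysctl kern.geom.debugflags=0x10
diskinfo -wS /dev/nvd1
sysctl kern.geom.debugflags=0    # restore protection afterwards
```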
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
Intel 750 1.2TB 4K Sector
Code:
diskinfo -wS /dev/nvd3
/dev/nvd3
		4096			# sectorsize
		1200243695616   # mediasize in bytes (1.1T)
		293028246	   # mediasize in sectors
		131072		  # stripesize
		0			   # stripeoffset
		INTEL SSDPEDMW012T4	 # Disk descr.
		CVCQ522000AM1P2BGN	  # Disk ident.

Synchronous random writes:
		   4 kbytes:	 12.3 usec/IO =	317.9 Mbytes/s
		   8 kbytes:	 14.8 usec/IO =	528.5 Mbytes/s
		  16 kbytes:	 19.6 usec/IO =	799.0 Mbytes/s
		  32 kbytes:	 28.9 usec/IO =   1080.5 Mbytes/s
		  64 kbytes:	 55.7 usec/IO =   1121.2 Mbytes/s
		 128 kbytes:	115.0 usec/IO =   1087.2 Mbytes/s
		 256 kbytes:	210.8 usec/IO =   1186.2 Mbytes/s
		 512 kbytes:	418.0 usec/IO =   1196.1 Mbytes/s
		1024 kbytes:	836.0 usec/IO =   1196.1 Mbytes/s
		2048 kbytes:   1654.3 usec/IO =   1209.0 Mbytes/s
		4096 kbytes:   3269.0 usec/IO =   1223.6 Mbytes/s
		8192 kbytes:   6682.8 usec/IO =   1197.1 Mbytes/s
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508

I'm kind of thinking of using the 2 over-provisioned 750 400GBs in a stripe for the file server and the 2 Optanes in a stripe for the terminal server. Unfortunately I only have one P3700... may have to just buy another!
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Neat result here. The 900p is clearly being limited by its 512e sectors at the lower block sizes; the P4800X isn't, but it also has more capacity (and maybe that's Intel's "enterprise sauce" at work?).

[Attached image: upload_2018-10-11_12-42-3.png — 900p vs. P4800X benchmark chart]


The two Intel 750s track the Optane P4800X essentially neck-and-neck up until 16K records, where the Optane pulls away. The two 750s separate from each other based on NAND die size, and the consumer Optane doesn't manage to pull ahead of the little one until 64K, or the big one until 512K.

I wouldn't stripe the log devices as they aren't hot-swappable (unless the Optanes are U.2 and you have the bays).

For SLOG, I would get a second P3700 and use the pair in your TS storage, since the performance at smaller block sizes should benefit that workload (are you hosting the TS VMs from here, or the user roaming profiles?). Put the Optanes in your file server and the 750s in your "misc" server.

For L2ARC, a 1.2TB 750 in each of the two 256GB servers; the 64GB "misc" server may not be able to take full advantage of one. Keep the third as a cold spare for your other two machines?
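For reference, the difference between the two log layouts being discussed comes down to one word on the command line (pool and device names below are placeholders):

```shell
# Two separate log vdevs -- ZFS load-balances writes across them
# ("striped"), but losing either device loses in-flight sync data:
zpool add tank log nvd0 nvd1

# Mirrored log vdev -- survives the loss of either device:
zpool add tank log mirror nvd0 nvd1
```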
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
Perfect thanks!
 

ivosevb

Cadet
Joined
Aug 30, 2015
Messages
5
Intel DC P4510
Model Number: INTEL SSDPE2KX010T8
Firmware Version: VDV10131
Total NVM Capacity: 1,000,204,886,016 [1.00 TB]

Code:
diskinfo -wS /dev/nvd0
/dev/nvd0
   512			 # sectorsize
   1000204886016   # mediasize in bytes (932G)
   1953525168	  # mediasize in sectors
   131072		 # stripesize
   0			   # stripeoffset
   INTEL SSDPE2KX010T8   # Disk descr.
   BTLJ83320CNQ1P0FGN   # Disk ident.
   Yes			 # TRIM/UNMAP support
   0			   # Rotation rate in RPM

Synchronous random writes:
	0.5 kbytes:	 13.5 usec/IO =	 36.1 Mbytes/s
	  1 kbytes:	 13.2 usec/IO =	 73.7 Mbytes/s
	  2 kbytes:	 13.4 usec/IO =	145.9 Mbytes/s
	  4 kbytes:	 13.7 usec/IO =	284.5 Mbytes/s
	  8 kbytes:	 15.7 usec/IO =	497.3 Mbytes/s
	 16 kbytes:	 19.6 usec/IO =	796.3 Mbytes/s
	 32 kbytes:	 32.8 usec/IO =	953.5 Mbytes/s
	 64 kbytes:	 61.5 usec/IO =   1016.2 Mbytes/s
	128 kbytes:	131.0 usec/IO =	954.5 Mbytes/s
	256 kbytes:	243.1 usec/IO =   1028.6 Mbytes/s
	512 kbytes:	513.8 usec/IO =	973.2 Mbytes/s
   1024 kbytes:	971.5 usec/IO =   1029.3 Mbytes/s
   2048 kbytes:   2024.2 usec/IO =	988.0 Mbytes/s
   4096 kbytes:   3966.1 usec/IO =   1008.6 Mbytes/s
   8192 kbytes:   7986.6 usec/IO =   1001.7 Mbytes/s
 

ivosevb

Cadet
Joined
Aug 30, 2015
Messages
5
Intel DC P4501
Model Number: INTEL SSDPE7KX500G7
Firmware Version: QDV101D1
Total NVM Capacity: 500,107,862,016 [500 GB]

Code:
diskinfo -wS /dev/nvd1
/dev/nvd1
		512			 # sectorsize
		500107862016	# mediasize in bytes (466G)
		976773168	   # mediasize in sectors
		131072		  # stripesize
		0			   # stripeoffset
		INTEL SSDPE7KX500G7	 # Disk descr.
		PHLF8272004P500JGN	  # Disk ident.
		Yes			 # TRIM/UNMAP support
		0			   # Rotation rate in RPM

Synchronous random writes:
		 0.5 kbytes:	 15.7 usec/IO =	 31.0 Mbytes/s
		   1 kbytes:	 15.1 usec/IO =	 64.6 Mbytes/s
		   2 kbytes:	 15.1 usec/IO =	129.3 Mbytes/s
		   4 kbytes:	 15.0 usec/IO =	259.7 Mbytes/s
		   8 kbytes:	 24.6 usec/IO =	318.0 Mbytes/s
		  16 kbytes:	 53.5 usec/IO =	292.1 Mbytes/s
		  32 kbytes:	105.3 usec/IO =	296.7 Mbytes/s
		  64 kbytes:	178.8 usec/IO =	349.6 Mbytes/s
		 128 kbytes:	386.3 usec/IO =	323.6 Mbytes/s
		 256 kbytes:	773.3 usec/IO =	323.3 Mbytes/s
		 512 kbytes:   1386.5 usec/IO =	360.6 Mbytes/s
		1024 kbytes:   3088.2 usec/IO =	323.8 Mbytes/s
		2048 kbytes:   5740.6 usec/IO =	348.4 Mbytes/s
		4096 kbytes:  12001.4 usec/IO =	333.3 Mbytes/s
		8192 kbytes:  23967.8 usec/IO =	333.8 Mbytes/s
 

mattfrazer

Cadet
Joined
Oct 20, 2017
Messages
1
Results from a single module in a Sun F40 PCIe SSD
Chassis is an HP DL380 G6, single E5606, 64GB DDR3.

Code:
=== START OF INFORMATION SECTION ===
Device Model:	 3E128-TS2-550B01
Firmware Version: PROLUIO6
User Capacity:	100,000,000,512 bytes [100 GB]
Sector Sizes:	 512 bytes logical, 8192 bytes physical
Rotation Rate:	Solid State Device
Device is:		Not in smartctl database [for details use: -P showall]
ATA Version is:   ATA8-ACS, ACS-2 T13/2015-D revision 3
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)

/dev/da1
		512			 # sectorsize
		100000000512	# mediasize in bytes (93G)
		195312501	   # mediasize in sectors
		8192			# stripesize
		0			   # stripeoffset
		12157		   # Cylinders according to firmware.
		255			 # Heads according to firmware.
		63			  # Sectors according to firmware.
		ATA 3E128-TS2-550B01	# Disk descr.
		Not_Zoned	   # Zone Mode

Synchronous random writes:
		 0.5 kbytes:   2043.5 usec/IO =	  0.2 Mbytes/s
		   1 kbytes:   1985.6 usec/IO =	  0.5 Mbytes/s
		   2 kbytes:   2026.9 usec/IO =	  1.0 Mbytes/s
		   4 kbytes:   1934.4 usec/IO =	  2.0 Mbytes/s
		   8 kbytes:   1805.9 usec/IO =	  4.3 Mbytes/s
		  16 kbytes:   1825.3 usec/IO =	  8.6 Mbytes/s
		  32 kbytes:   1880.1 usec/IO =	 16.6 Mbytes/s
		  64 kbytes:   1968.6 usec/IO =	 31.7 Mbytes/s
		 128 kbytes:   2193.2 usec/IO =	 57.0 Mbytes/s
		 256 kbytes:   2774.1 usec/IO =	 90.1 Mbytes/s
		 512 kbytes:   3016.5 usec/IO =	165.8 Mbytes/s
		1024 kbytes:   4396.7 usec/IO =	227.4 Mbytes/s
		2048 kbytes:   6169.4 usec/IO =	324.2 Mbytes/s
		4096 kbytes:  10361.2 usec/IO =	386.1 Mbytes/s
		8192 kbytes:  18519.0 usec/IO =	432.0 Mbytes/s
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Results from a single module in a Sun F40 PCIe SSD

I've tried to figure out how to make FreeBSD do something similar to the
cache-nonvolatile:true setting in the Solaris sd.conf file, but to no avail so far.

Modern PLP-protected drives will simply determine "the data is protected" and return immediately, but apparently these old dinosaurs actually flush their (protected) RAM to NAND, which kills performance (as you're seeing).

Basically, these cards (F20, F40, F80, etc) are supposed to be able to completely ignore a cache flush command with no ill effect. I'd be interested to see how they perform under that setup.
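One blunt instrument that might approximate this on FreeBSD's legacy ZFS is the global cache-flush tunable — but note it applies to every vdev in every pool, not just devices with protected caches, so it is only safe if all attached devices can harmlessly ignore flushes. A sketch, assuming the old `vfs.zfs.*` sysctl namespace of that era:

```shell
# DANGEROUS if any vdev lacks power-loss protection: tells ZFS to stop
# issuing cache flush commands entirely (legacy FreeBSD ZFS tunable).
sysctl vfs.zfs.cache_flush_disable=1

# To persist across reboots:
echo 'vfs.zfs.cache_flush_disable=1' >> /etc/sysctl.conf
```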
 

mjt5282

Contributor
Joined
Mar 19, 2013
Messages
139
OK, I added a new raidz2 8x8TB pool so I needed to buy and benchmark a new SLOG. The Intel Optane SSD 800P 58GB was purchased and installed (FN11.2RC1). See the results below:
Code:
diskinfo -wS /dev/nvd0 
/dev/nvd0 
512 		  # sectorsize 
58977157120 # mediasize in bytes (55G) 
115189760 	# mediasize in sectors 
0 			# stripesize 
0 			# stripeoffset 

INTEL SSDPEK1W060GA # Disk descr. 
PHBT803300JL064Q # Disk ident. 
Yes 		  # TRIM/UNMAP support 
0 			# Rotation rate in RPM 

Synchronous random writes: 

0.5 kbytes:	  8.3 usec/IO =	 58.6 Mbytes/s
  1 kbytes:	  8.2 usec/IO =	119.2 Mbytes/s
  2 kbytes:	  9.6 usec/IO =	203.2 Mbytes/s
  4 kbytes:	 12.6 usec/IO =	309.0 Mbytes/s
  8 kbytes:	 19.0 usec/IO =	411.7 Mbytes/s
  16 kbytes:	 32.1 usec/IO =	487.3 Mbytes/s
  32 kbytes:	 57.9 usec/IO =	539.9 Mbytes/s
  64 kbytes:	113.6 usec/IO =	550.0 Mbytes/s
128 kbytes:	243.9 usec/IO =	512.4 Mbytes/s
256 kbytes:	465.2 usec/IO =	537.4 Mbytes/s
512 kbytes:	919.5 usec/IO =	543.8 Mbytes/s
1024 kbytes:   1805.8 usec/IO =	553.8 Mbytes/s
2048 kbytes:   3555.2 usec/IO =	562.6 Mbytes/s
4096 kbytes:   7048.2 usec/IO =	567.5 Mbytes/s
8192 kbytes:  14056.4 usec/IO =	569.1 Mbytes/s
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
OK, I added a new raidz2 8x8TB pool so I needed to buy and benchmark a new SLOG. The Intel Optane SSD 800P 58GB was purchased and installed (FN11.2RC1). See the results below:

This is significantly better than the results earlier, and points to some PCIe issues in the previous poster's system (@Sirius).
 

jasonboche

Dabbler
Joined
Oct 2, 2018
Messages
25
Dell Enterprise Class 200GB SSD 12Gbps SAS
Model: PX02SMF020
Part No. SDFCP93DAA01
Toshiba PX02SM SSD

Code:
root@freenas1:~ # diskinfo -wS /dev/da14
/dev/da14
		512			 # sectorsize
		200049647616	# mediasize in bytes (186G)
		390721968	   # mediasize in sectors
		4096			# stripesize
		0			   # stripeoffset
		24321		   # Cylinders according to firmware.
		255			 # Heads according to firmware.
		63			  # Sectors according to firmware.
		TOSHIBA PX02SMF020	  # Disk descr.
		9510A00ZT0RB	# Disk ident.
		Not_Zoned	   # Zone Mode

Synchronous random writes:
		 0.5 kbytes:   2129.3 usec/IO =	  0.2 Mbytes/s
		   1 kbytes:   2156.7 usec/IO =	  0.5 Mbytes/s
		   2 kbytes:   2162.6 usec/IO =	  0.9 Mbytes/s
		   4 kbytes:   2147.6 usec/IO =	  1.8 Mbytes/s
		   8 kbytes:   2143.8 usec/IO =	  3.6 Mbytes/s
		  16 kbytes:   2130.3 usec/IO =	  7.3 Mbytes/s
		  32 kbytes:   2179.4 usec/IO =	 14.3 Mbytes/s
		  64 kbytes:   2190.3 usec/IO =	 28.5 Mbytes/s
		 128 kbytes:   2217.6 usec/IO =	 56.4 Mbytes/s
		 256 kbytes:   2292.7 usec/IO =	109.0 Mbytes/s
		 512 kbytes:   2413.0 usec/IO =	207.2 Mbytes/s
		1024 kbytes:   4282.3 usec/IO =	233.5 Mbytes/s
		2048 kbytes:   7337.7 usec/IO =	272.6 Mbytes/s
		4096 kbytes:  12450.3 usec/IO =	321.3 Mbytes/s
		8192 kbytes:  22873.4 usec/IO =	349.8 Mbytes/s
root@freenas1:~ #

 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Dell Enterprise Class 200GB SSD 12Gbps SAS
Model: PX02SMF020
Part No. SDFCP93DAA01
Toshiba PX02SM SSD

Don't take this the wrong way, but I really hope diskinfo has stuffed something up and is giving you incorrect results because those latency/throughput numbers are shockingly bad at lower recordsizes.
 

jasonboche

Dabbler
Joined
Oct 2, 2018
Messages
25
Don't take this the wrong way, but I really hope diskinfo has stuffed something up and is giving you incorrect results because those latency/throughput numbers are shockingly bad at lower recordsizes.

Probably. I think the SLOG performance is on par with the capabilities of the SSD when running Iometer against it with sync=always.
P.S. Honesty doesn't offend me. Once again, I appreciate your help.
 

drros

Dabbler
Joined
Aug 27, 2018
Messages
10
Bought this drive for L2ARC, so I'm not using it as a SLOG; this is just a test.
KFA2 Gamer 240GB — a Phison PS5008-E8-based NVMe drive installed in an M.2-to-PCIe adapter.
Code:
root@freenas[~]# smartctl -a /dev/nvme0
smartctl 6.6 2017-11-05 r4594 [FreeBSD 11.2-STABLE amd64] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Number:                       GAMER TA1G0240N
Serial Number:                      11AC078912DC00000004
Firmware Version:                   E8FM11.5
PCI Vendor/Subsystem ID:            0x1987
IEEE OUI Identifier:                0x000000
Total NVM Capacity:                 240,057,409,536 [240 GB]
Unallocated NVM Capacity:           0
Controller ID:                      0
Number of Namespaces:               1
Namespace 1 Size/Capacity:          240,057,409,536 [240 GB]
Namespace 1 Formatted LBA Size:     512
Local Time is:                      Tue Dec  4 14:07:32 2018 +04
Firmware Updates (0x02):            1 Slot
Optional Admin Commands (0x0017):   Security Format Frmw_DL Self_Test
Optional NVM Commands (0x001e):     Wr_Unc DS_Mngmt Wr_Zero Sav/Sel_Feat
Maximum Data Transfer Size:         512 Pages
Warning  Comp. Temp. Threshold:     90 Celsius
Critical Comp. Temp. Threshold:     94 Celsius

Supported Power States
St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
 0 +     3.00W       -        -    0  0  0  0        0       0
 1 +     2.00W       -        -    1  1  1  1        0       0
 2 +     2.00W       -        -    2  2  2  2        0       0
 3 -   0.1000W       -        -    3  3  3  3     1000    1000
 4 -   0.0050W       -        -    4  4  4  4   400000   90000

Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
 0 +     512       0         1
 1 -    4096       0         0

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

SMART/Health Information (NVMe Log 0x02, NSID 0xffffffff)
Critical Warning:                   0x00
Temperature:                        44 Celsius
Available Spare:                    100%
Available Spare Threshold:          50%
Percentage Used:                    0%
Data Units Read:                    138,569 [70.9 GB]
Data Units Written:                 1,394,362 [713 GB]
Host Read Commands:                 2,452,652
Host Write Commands:                7,552,538
Controller Busy Time:               32
Power Cycles:                       24
Power On Hours:                     424
Unsafe Shutdowns:                   21
Media and Data Integrity Errors:    0
Error Information Log Entries:      0
Warning  Comp. Temperature Time:    0
Critical Comp. Temperature Time:    0
Temperature Sensor 2:               44 Celsius

Error Information (NVMe Log 0x01, max 16 entries)
No Errors Logged



Code:
root@freenas[~]# diskinfo -wS /dev/nvd0
/dev/nvd0
        512             # sectorsize
        240057409536    # mediasize in bytes (224G)
        468862128       # mediasize in sectors
        0               # stripesize
        0               # stripeoffset
        GAMER TA1G0240N # Disk descr.
        11AC078912DC00000004    # Disk ident.
        Yes             # TRIM/UNMAP support
        0               # Rotation rate in RPM

Synchronous random writes:
         0.5 kbytes:   1383.7 usec/IO =      0.4 Mbytes/s
           1 kbytes:   1038.0 usec/IO =      0.9 Mbytes/s
           2 kbytes:   1029.4 usec/IO =      1.9 Mbytes/s
           4 kbytes:   1037.7 usec/IO =      3.8 Mbytes/s
           8 kbytes:   1027.1 usec/IO =      7.6 Mbytes/s
          16 kbytes:   1574.9 usec/IO =      9.9 Mbytes/s
          32 kbytes:   1001.6 usec/IO =     31.2 Mbytes/s
          64 kbytes:   1008.5 usec/IO =     62.0 Mbytes/s
         128 kbytes:   1017.1 usec/IO =    122.9 Mbytes/s
         256 kbytes:   1156.7 usec/IO =    216.1 Mbytes/s
         512 kbytes:   1259.0 usec/IO =    397.1 Mbytes/s
        1024 kbytes:   2465.3 usec/IO =    405.6 Mbytes/s
        2048 kbytes:   2970.3 usec/IO =    673.3 Mbytes/s
        4096 kbytes:   5284.0 usec/IO =    757.0 Mbytes/s
        8192 kbytes:  10057.9 usec/IO =    795.4 Mbytes/s
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
Mini XL w/ 200GB Intel S3710 SSD (on 6Gb/s SATA port)
Code:
       
=== START OF INFORMATION SECTION ===                                                                                                
Model Family:     Intel 730 and DC S35x0/3610/3700 Series SSDs                                                                      
Device Model:     INTEL SSDSC2BA200G4                                                                                              
Serial Number:    BTHV618401CG200MGN                                                                                                
LU WWN Device Id: 5 5cd2e4 04c230bcb                                                                                                
Firmware Version: G2010140                                                                                                          
User Capacity:    200,049,647,616 bytes [200 GB]                                                                                    
Sector Sizes:     512 bytes logical, 4096 bytes physical                                                                            
Rotation Rate:    Solid State Device                                                                                                
Form Factor:      2.5 inches                                                                                                        
Device is:        In smartctl database [for details use: -P show]                                                                  
ATA Version is:   ACS-2 T13/2015-D revision 3                                                                                      
SATA Version is:  SATA 2.6, 6.0 Gb/s (current: 6.0 Gb/s)                                                                            
Local Time is:    Wed Dec  5 21:18:06 2018 EST                                                                                      
SMART support is: Available - device has SMART capability.                                                                          
SMART support is: Enabled                                                                                                          
                                                                                                                                  
/dev/ada9                                                                     
        512             # sectorsize                                           
        200049647616    # mediasize in bytes (186G)                           
        390721968       # mediasize in sectors                                 
        4096            # stripesize                                           
        0               # stripeoffset                                         
        387621          # Cylinders according to firmware.                     
        16              # Heads according to firmware.                         
        63              # Sectors according to firmware.                       
        INTEL SSDSC2BA200G4     # Disk descr.                                 
        BTHV618401CG200MGN      # Disk ident.                                 
        Not_Zoned       # Zone Mode                                                     
                                                                               
Synchronous random writes:                                                    
         0.5 kbytes:    213.3 usec/IO =      2.3 Mbytes/s                      
           1 kbytes:    205.9 usec/IO =      4.7 Mbytes/s                      
           2 kbytes:    189.1 usec/IO =     10.3 Mbytes/s                      
           4 kbytes:     92.6 usec/IO =     42.2 Mbytes/s                      
           8 kbytes:    100.7 usec/IO =     77.6 Mbytes/s                      
          16 kbytes:    123.0 usec/IO =    127.0 Mbytes/s                      
          32 kbytes:    157.1 usec/IO =    198.9 Mbytes/s                      
          64 kbytes:    229.3 usec/IO =    272.6 Mbytes/s                      
         128 kbytes:    433.8 usec/IO =    288.2 Mbytes/s                      
         256 kbytes:    843.5 usec/IO =    296.4 Mbytes/s                      
         512 kbytes:   1679.4 usec/IO =    297.7 Mbytes/s                      
        1024 kbytes:   3344.5 usec/IO =    299.0 Mbytes/s                      
        2048 kbytes:   6357.8 usec/IO =    314.6 Mbytes/s                      
        4096 kbytes:  13167.5 usec/IO =    303.8 Mbytes/s                      
        8192 kbytes:  26201.4 usec/IO =    305.3 Mbytes/s    
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
For giggles, I tried out an 840 EVO Samsung SSD (SATA) that I had lying around. Note the terrible performance with small writes.

I might try this device as an L2ARC now that the Mini XL has 64GB of RAM. Looking to improve rsync performance... caching metadata might help.
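If metadata is the target, ZFS can be told to keep only metadata (not file data) in L2ARC on a per-dataset basis via the `secondarycache` property. A sketch — the pool/dataset names are placeholders:

```shell
# Cache only metadata from this dataset in L2ARC:
zfs set secondarycache=metadata tank/backups

# Then attach the SSD as the pool's cache device:
zpool add tank cache ada2
```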

Code:
=== START OF INFORMATION SECTION ===
Model Family:     Samsung based SSDs
Device Model:     Samsung SSD 840 EVO 500GB
Serial Number:    S1DHNSAFB44719J
LU WWN Device Id: 5 002538 8a08099a1
Firmware Version: EXT0CB6Q
User Capacity:    500,107,862,016 bytes [500 GB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    Solid State Device
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2, ATA8-ACS T13/1699-D revision 4c
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Thu Dec  6 19:29:13 2018 EST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

/dev/ada2
        512             # sectorsize
        500107862016    # mediasize in bytes (466G)
        976773168       # mediasize in sectors
        4096            # stripesize
        0               # stripeoffset
        969021          # Cylinders according to firmware.
        16              # Heads according to firmware.
        63              # Sectors according to firmware.
        Samsung SSD 840 EVO 500GB       # Disk descr.
        S1DHNSAFB44719J # Disk ident.
        Yes             # TRIM/UNMAP support
        0               # Rotation rate in RPM
        Not_Zoned       # Zone Mode

Synchronous random writes:
         0.5 kbytes:   5004.5 usec/IO =      0.1 Mbytes/s
           1 kbytes:   6415.0 usec/IO =      0.2 Mbytes/s
           2 kbytes:   5010.7 usec/IO =      0.4 Mbytes/s
           4 kbytes:   1956.2 usec/IO =      2.0 Mbytes/s
           8 kbytes:   2133.3 usec/IO =      3.7 Mbytes/s
          16 kbytes:   2717.4 usec/IO =      5.8 Mbytes/s
          32 kbytes:   3248.8 usec/IO =      9.6 Mbytes/s
          64 kbytes:   4174.5 usec/IO =     15.0 Mbytes/s
         128 kbytes:   4870.5 usec/IO =     25.7 Mbytes/s
         256 kbytes:   5704.0 usec/IO =     43.8 Mbytes/s
         512 kbytes:   7263.1 usec/IO =     68.8 Mbytes/s
        1024 kbytes:   8593.3 usec/IO =    116.4 Mbytes/s
        2048 kbytes:  11602.5 usec/IO =    172.4 Mbytes/s
        4096 kbytes:  16591.0 usec/IO =    241.1 Mbytes/s
        8192 kbytes:  25809.8 usec/IO =    310.0 Mbytes/s
 

svtkobra7

Patron
Joined
Jan 12, 2017
Messages
202
Speaking to adding more than one SLOG to a pool: with a new pool I'm benchmarking (12 x WDC WD100EMAZ), looking at sync=always speed:
  • 1 20G vDisk = ~1000% improvement vs. no SLOG
  • 2 20G vDisks (not "striped," as ZFS doesn't allow that) = ~2-11% further improvement
  • 2 20G vDisks, mirrored = ~-25% vs. 2 "non-mirrored" SLOGs
Since the vDisks are on the same Optane 900p (and thus equally likely to fail), is there any harm in using 2 vDisks in a non-mirrored configuration? Running iostat while the benchmark runs shows them both "working," but how, I have no clue, since they aren't striped. Granted, this is a benchmark, and whether 2 non-mirrored vDisks actually outperform 2 mirrored vDisks in real use isn't known.

[Attached image: slog.jpg — benchmark results]

(I have 2 900ps per host and the passthru map workaround doesn't work for me)

Code:
#!/bin/sh

zfs create Tank1/disabled
zfs set recordsize=128k compression=off sync=disabled Tank1/disabled
dd if=/dev/zero of=/mnt/Tank1/disabled/tmp.dat bs=2048k count=25k
dd of=/dev/null if=/mnt/Tank1/disabled/tmp.dat bs=2048k count=25k
zfs destroy Tank1/disabled

zfs create Tank1/standard
zfs set recordsize=128k compression=off sync=standard Tank1/standard
dd if=/dev/zero of=/mnt/Tank1/standard/tmp.dat bs=2048k count=25k
dd of=/dev/null if=/mnt/Tank1/standard/tmp.dat bs=2048k count=25k
zfs destroy Tank1/standard

zfs create Tank1/always
zfs set recordsize=128k compression=off sync=always Tank1/always
dd if=/dev/zero of=/mnt/Tank1/always/tmp.dat bs=2048k count=25k
dd of=/dev/null if=/mnt/Tank1/always/tmp.dat bs=2048k count=25k
zfs destroy Tank1/always
NB: I believe I'm doing everything right here - RAM @ 8GB, pools blown up between tests, compression=off, etc.
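The three near-identical stanzas in the script above can be collapsed into a loop — same commands and the same "Tank1" placeholder, just deduplicated:

```shell
#!/bin/sh
# Same benchmark as above, looped over the three sync modes.
for mode in disabled standard always; do
    zfs create "Tank1/${mode}"
    zfs set recordsize=128k compression=off "sync=${mode}" "Tank1/${mode}"
    dd if=/dev/zero of="/mnt/Tank1/${mode}/tmp.dat" bs=2048k count=25k
    dd of=/dev/null if="/mnt/Tank1/${mode}/tmp.dat" bs=2048k count=25k
    zfs destroy "Tank1/${mode}"
done
```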
 