Need MAX Sequential WRITE Performance (40 GbE SMB)

Status
Not open for further replies.

BiffBlendon

Dabbler
Joined
Jan 6, 2018
Messages
20
Is FreeNAS 11.1 (or ANY version of FN), with the proper hardware (listed below), CAPABLE of saturating a 40GbE fiber NIC channel?

Our motion picture scanners can scan and WRITE 4K / 5K color TIFF or DPX frames at 30 FPS. At 4K 10-bit color DPX, 30 FPS is ~1,500 MB/s (~1.5 GB/s -- i.e. NOT *bits* but BYTES): each 10-bit color 4K DPX frame is ~50 MB, times 30 per second. Plus we'll usually have a 2K ProRes 4444HQ file and a 96k WAV file being written concurrently, so we need SUSTAINED sequential writes to storage of 20 Gb/s, or about 2 GB/s, indefinitely.
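For reference, a quick back-of-the-envelope check on those figures (simple shell arithmetic; the ~50 MB frame size is an approximation):

Code:
# ~50 MB per 4K 10-bit DPX frame, 30 frames per second
echo $(( 50 * 30 ))    # 1500 MB/s for the DPX stream alone
echo $(( 1500 * 8 ))   # ~12000 Mb/s = ~12 Gb/s on the wire for that stream, before SMB/TCP overhead
echo $(( 2000 * 8 ))   # the ~2 GB/s total target works out to ~16 Gb/s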

Our hardware is:

Supermicro Dual E5-2630 v2 CPU
128GB DDR3 ECC SDRAM
LSI dual-channel SAS3 12G HBA
2x Supermicro 45-disk SAS JBOD *each* with 30x 8TB HGST SAS3 12G spinning disks.
EACH 30-Disk JBOD is on its own 12G SAS channel on the HBA
Mellanox 2-port 40GbE QSFP+ Fiber NIC
Client/Scanner PC is an Asus-based unit with matching Mellanox 40GbE NIC and gobs of CPU, GPU and RAM for scanner operation.
Client PC is Windows 10 Enterprise (LTSB)

We have storage that can handle the throughput from the Mellanox CX3 / Scanner PC, but we're getting abysmal performance from our FreeNAS build over SMB.

The only "tuning" we've done is adjust MTU to 9000 on both FN and Win10.

Open-E, and other paid solutions, operate MUCH faster on the same hardware, but we assume this is because we have to dig deep and perform some significant tuning. We're running a SINGLE ~400TB (available) RAIDZ3 volume to maximize number of spindles and available space.

Should we run a different RAIDZ level (or mirror, which is not desirable due to capacity penalty, but doable if necessary)?
What type of volume, file system, network, memory caching, or other tuning should we perform?

We are running the latest 11.1 build downloaded yesterday (Friday, January 5, 2018) and have done nothing in terms of tweaking or running any additional plugins, etc.

We're in Irvine, California, if there are any true FreeNAS gurus looking for a little work on the side.

Thanks!
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Open-E, and other paid solutions, operate MUCH faster on the same hardware, but we assume this is because we have to dig deep and perform some significant tuning. We're running a SINGLE ~400TB (available) RAIDZ3 volume to maximize number of spindles and available space.
Please tell me it's not a single 30-wide RAIDZ3 vdev. "Abysmal" is just the right word for such a thing.
 

IceBoosteR

Guru
Joined
Sep 27, 2016
Messages
503
To get a proper view of this, and to see whether Ericloewe is right, please post the output of zpool status.
RAIDZ3 with 30 drives is a bad idea, and you really lose throughput from the disks with that layout. But let me explain that later, after we have seen your pool config.
 

BiffBlendon

Dabbler
Joined
Jan 6, 2018
Messages
20
How would you configure two independent JBOD channels, each with 30 drives, for absolute maximum sequential write throughput (omitting RAID-0, of course)?

Thanks for the education. It's very welcome.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
two independent JBOD channels, each with 30 drives,
This is completely irrelevant. You have 60 drives and that's what matters.

absolute maximum sequential write throughput
Probably 10 six-wide RAIDZ1 vdevs. Six 10-wide RAIDZ2 vdevs sound like a better option to me, overall.

Bottom line is that RAIDZ vdevs wider than 10 drives will behave poorly.
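For illustration only, a six-vdev, 10-wide RAIDZ2 layout built from the command line might look like the sketch below (device names da0-da59 are hypothetical; in practice the FreeNAS volume manager, which partitions the disks and uses gptid labels, is the normal way to build this):

Code:
zpool create tank \
  raidz2 da0  da1  da2  da3  da4  da5  da6  da7  da8  da9  \
  raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19 \
  raidz2 da20 da21 da22 da23 da24 da25 da26 da27 da28 da29 \
  raidz2 da30 da31 da32 da33 da34 da35 da36 da37 da38 da39 \
  raidz2 da40 da41 da42 da43 da44 da45 da46 da47 da48 da49 \
  raidz2 da50 da51 da52 da53 da54 da55 da56 da57 da58 da59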
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Actually, I think it's a single 60-wide Z2. How do I obtain and post the relevant info? Thanks,
zpool status - post the output in [CODE][/CODE] tags, please.
 

BiffBlendon

Dabbler
Joined
Jan 6, 2018
Messages
20
This is completely irrelevant. You have 60 drives and that's what matters.


Probably 10 six-wide RAIDZ1 vdevs. Six 10-wide RAIDZ2 vdevs sound like a better option to me, overall.

Bottom line is that RAIDZ vdevs wider than 10 drives will behave poorly.

Which of the two options should -- theoretically -- perform better? 10x6 Z1 or 6x10 Z2? Fault tolerance is very secondary to raw write performance. In reality, the order of importance for this is 1) Sustained sequential write performance, 2) capacity retention, 3) degrade rebuild speed, 4) fault tolerance.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
The former has more IOPS and very slightly better best-case sequential throughput, but reliability is more contingent on immediate drive replacements and reasonably speedy resilvers. Not that any option absolves you from doing backups.
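Roughly speaking, the best-case sequential difference comes down to how many data (non-parity) disks each layout has:

Code:
echo $(( 10 * (6 - 1) ))   # 10x RAIDZ1, 6-wide: 50 data disks
echo $((  6 * (10 - 2) ))  # 6x RAIDZ2, 10-wide: 48 data disks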

By the way, I'm not clear on when and what read activity happens.
 

BiffBlendon

Dabbler
Joined
Jan 6, 2018
Messages
20
The former has more IOPS and very slightly better best-case sequential throughput, but reliability is more contingent on immediate drive replacements and reasonably speedy resilvers. Not that any option absolves you from doing backups.

By the way, I'm not clear on when and what read activity happens.

Reads are important as well. We might be writing and reading concurrently to the storage. We're not too concerned about backups, because this is a "staging area" for data, which gets transferred to long-term and/or LTO7/8 on nights and weekends.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
IMO... you're trying to make the whole thing Just Work (tm) without stopping to consider each bit. At this level of performance, assumptions won't work... you're going to need to consider every piece in the chain between the scanner and the FN disks. Otherwise, you should call iX and buy a TrueNAS.

Just a few thoughts:
  • Post a zpool status. If you're really running a single 60-disk wide vdev, it'll suck massively for... well, everything. I would consider 10 6-disk vdevs in RAIDZ2 as a potential option, although even that may not be fast enough.
  • You've got a single dual-channel HBA. Assuming everything supports 12Gbps performance (is that JBOD enclosure 12Gbps? or 6?) you're already limited to 24Gbps there, and that assumes a perfect balance between traffic on the two channels. Do the chassis support dual links?
  • Have you done testing on FN directly? With compression disabled (you'll have to change this most likely, although you probably shouldn't be running compression on this dataset anyway... the data is likely not very compressible, and even the default settings may slow things up), write a large file (something like dd if=/dev/zero of=/mnt/YourPool/Somefile bs=1M count=1000) to see how fast things are. If you aren't getting the speed you need on the box itself, there's no point in going further.
  • Have you done any testing on the network side of things with iperf? Do you know you're actually getting 40Gbps from the scanner to FN? (See the sketch after this list.)
  • SMB may be a real issue for you. It's single-threaded, thus bound by the speed of one processor core. You've chosen a mid-tier processor... if you're stuck on SMB, you might actually be better off with a processor of fewer cores but more speed per core. You can watch top to tell you if smb is your issue.
Here's reality... every time you double performance, you increase cost and effort 10X, especially at the top end. You'll also find very few people with experience to help you. At 40Gbps performance, you're in rarefied air... I bet there aren't a dozen people on this forum that have successfully implemented a 40Gbps solution. If you're intending to use this stuff for production workloads, and you don't have an intimate understanding of how this stuff works (plus the time to fight the battles yourself), you may find yourself better served with a paid, supported solution.
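A sketch of the network check mentioned in the list above, assuming iperf3 is installed on both machines (the server address is the FreeNAS interface shown later in this thread):

Code:
# On FreeNAS, start a listener:
iperf3 -s
# On the Windows 10 client, run several parallel streams for 30 seconds:
iperf3 -c 20.20.20.10 -P 4 -t 30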
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Mellanox 2-port 40GbE QSFP+ Fiber NIC

That may not work well.

The only "tuning" we've done is adjust MTU to 9000 on both FN and Win10.

That also may not work well.

The Mellanox stuff is good cheap stuff, but FreeNAS really shines with the Chelsio. See what @tvsjr said in addition to the previous comments about how to configure your pool. If you really want to understand what the system is capable of, configure with mirrors, THEN go and experiment with RAIDZ levels later to see what the performance impact is.
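A sketch of that mirror baseline, striping 30 two-way mirrors across the 60 drives (POSIX sh; device names da0-da59 are hypothetical, and the FreeNAS volume manager is the normal route):

Code:
# Build the vdev list for 30 two-way mirrors from 60 hypothetical disks da0..da59
vdevs=""
i=0
while [ "$i" -lt 60 ]; do
  vdevs="$vdevs mirror da$i da$((i + 1))"
  i=$((i + 2))
done
zpool create tank $vdevs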
 

BiffBlendon

Dabbler
Joined
Jan 6, 2018
Messages
20
IMO... you're trying to make the whole thing Just Work (tm) without stopping to consider each bit. At this level of performance, assumptions won't work... you're going to need to consider every piece in the chain between the scanner and the FN disks. Otherwise, you should call iX and buy a TrueNAS.

Just a few thoughts:
  • Post a "zpool status". If you're really running a single 60-disk wide vdev, it'll suck massively for... well, everything. I would consider 10 6-disk vdevs in RAIDZ2 as a potential option, although even that may not be fast enough.
  • You've got a single dual-channel HBA. Assuming everything supports 12Gbps performance (is that JBOD enclosure 12Gbps? or 6?) you're already limited to 24Gbps there, and that assumes a perfect balance between traffic on the two channels. Do the chassis support dual links?
  • Have you done testing on FN directly? With compression disabled (you'll have to change this most likely, although you probably shouldn't be running compression on this dataset anyway... the data is likely not very compressible, and even the default settings may slow things up), write a large file (something like "dd if=/dev/zero of=/mnt/YourPool/Somefile bs=1M count=1000") to see how fast things are. If you aren't getting the speed you need on the box itself, there's no point in going further.
  • Have you done any testing on the network side of things with iperf? Do you know you're actually getting 40Gbps from the scanner to FN?
  • SMB may be a real issue for you. It's single-threaded, thus bound by the speed of one processor core. You've chosen a mid-tier processor... if you're stuck on SMB, you might actually be better off with a processor of fewer cores but more speed per core. You can watch "top" to tell you if smb is your issue.
Here's reality... every time you double performance, you increase cost and effort 10X, especially at the top end. You'll also find very few people with experience to help you. At 40Gbps performance, you're in rarefied air... I bet there aren't a dozen people on this forum that have successfully implemented a 40Gbps solution. If you're intending to use this stuff for production workloads, and you don't have an intimate understanding of how this stuff works (plus the time to fight the battles yourself), you may find yourself better served with a paid, supported solution.

We use this hardware configuration with other software to achieve ~30 Gb/s sustained writes. You mention the same vdev configuration that Ericloewe does, so we're going to give that a shot right now, which sounds like it should make an enormous difference. I'll post a zpool status momentarily before we make the changes.

There might be a slight misunderstanding on how SAS HBAs work. SAS HBAs don't operate like NICs. For instance, a 12Gb/s SAS3 HBA will sustain (practically) about 40 Gb/s (or ~5 GB/s) per 12Gb channel. Each 6/12G port is 4 lanes wide (4x12Gbps or 4x6Gbps). A dual-port 12Gb/s SAS HBA, like the one we're running, if both ports are used, is theoretically capable of sustaining about 9,600 MB/s, or close-as-nuts to 100 Gb/s. Practically, it's closer to 80Gbps or 8GB/s transfer. Properly tuned, we've had this very hardware configuration (using a Win2016-based iSCSI configuration) performing sustained writes of uncompressed data at ~5.3 GB/s.

With FreeNAS, we need to achieve 2-2.5 GB/s sustained transfer to make it a viable solution for us.

We have disabled compression entirely on our FN volume, but this is the 60-disk wide vdev. We'll continue to run with compression disabled.
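For completeness, a quick way to confirm the setting on the dataset behind the share (dataset name hypothetical):

Code:
zfs get compression YourPool/YourDataset      # should report "off"
zfs set compression=off YourPool/YourDataset  # if it does not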

iperf averages ~38Gbps.

SMB could be a major issue. We'll see. Client PC is Win 10 LTSB.

I'll post back shortly with zpool status and a camera phone pic of our setup.
 

BiffBlendon

Dabbler
Joined
Jan 6, 2018
Messages
20
OK, I'm doing a zpool status in the shell, but I can't copy anything above the top visible line -- i.e. I can't scroll up and get all of the drives. It's showing all zeroes for errors, and even if I increase the size of the shell window to maximum, it shows only half of the drives. Is there any way I can show a system status with all disks, etc?
 

BiffBlendon

Dabbler
Joined
Jan 6, 2018
Messages
20
This is the current Mellanox CX3 ifconfig:

Code:
mlxen1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
        options=ed07bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWFILTER,VLAN_HWTSO,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
        ether 40:f2:e9:fe:84:51
        hwaddr 40:f2:e9:fe:84:51
        inet 20.20.20.10 netmask 0xffffff00 broadcast 20.20.20.255
        nd6 options=9<PERFORMNUD,IFDISABLED>
        media: Ethernet autoselect (40Gbase-CR4 <full-duplex,rxpause,txpause>)
        status: active
 

BiffBlendon

Dabbler
Joined
Jan 6, 2018
Messages
20
This is the current zpool status, but only with what is visible (only half the disks show up):
Code:
            gptid/c9483956-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0
            gptid/ca506be2-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0
            gptid/cb4775e6-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0
            gptid/cc52ae9d-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0
            gptid/cd5e2d1e-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0
            gptid/ce75bc68-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0
            gptid/cf8040f5-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0
            gptid/d08e8b57-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0
            gptid/d1862cbf-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0
            gptid/d292dea7-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0
            gptid/d3992c78-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0
            gptid/d49f9487-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0
            gptid/d5a3618a-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0
            gptid/d6a86af9-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0
            gptid/d79b2987-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0
            gptid/d8a4e616-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0
            gptid/d9b274b4-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0
            gptid/dabb633c-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0
            gptid/dbc9c927-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0
            gptid/dcc0df28-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0
            gptid/ddcc3e9c-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0
            gptid/dedd809f-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0
            gptid/dfeb8421-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0
            gptid/e0fc7caf-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0
            gptid/e21594de-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0
            gptid/e3132216-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0
            gptid/e425f8a3-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0
            gptid/e5357086-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0
            gptid/e6501054-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0
            gptid/e75e888f-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0
            gptid/e8621b75-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0
            gptid/e974653c-f4b3-11e7-8696-0cc47a486a88  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            ada0p2    ONLINE       0     0     0
            ada1p2    ONLINE       0     0     0

errors: No known data errors
 

BiffBlendon

Dabbler
Joined
Jan 6, 2018
Messages
20
A couple shots of our FrankenRAID:

(Photo attachments: filmscan_freenas_01-e1515448267666.jpeg, filmscan_freenas_02-e1515448289144.jpeg)


Supermicro BigTwin dual-node server, each node with 2x E5-2630 CPUs, 128GB ECC SDRAM, an LSI 12G SAS3 dual-port HBA, and a Mellanox CX3 40GbE QSFP+ dual-port NIC, plus twin Supermicro 45-disk SAS JBODs, each loaded with 30x HGST 8TB 12G SAS 7.2K drives and cabled to its own connector on the SAS HBA, for a total of 60x 8TB SAS3 disks on two independent SAS3 connections.

Hoping to get FreeNAS 11.1 cranking to 20-30 Gb/s sustained sequential writes and reads.

Any and all optimal settings for vdevs, NICs, HBAs, etc. are appreciated. Fault tolerance is a LOW priority, but not so low that pure striping is an option. We'd *prefer* to keep available capacity at maximum, but are willing to try mirroring for performance testing.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
There might be a slight misunderstanding on how SAS HBAs work.

Indeed. Don't worry, we'll get you straightened out.

For instance, a 12Gb/s SAS3 HBA will sustain (practically) about 40 Gb/s (or ~5 GB/s) per 12Gb channel. Each 6/12G port is 4 lanes wide (4x12Gbps or 4x6Gbps). A dual-port 12Gb/s SAS HBA, like the one we're running, if both ports are used, is theoretically capable of sustaining about 9,600 MB/s, or close-as-nuts to 100 Gb/s. Practically, it's closer to 80Gbps or 8GB/s transfer.

That's not a "dual-port 12Gbps SAS HBA". The connectors on the board are not "channels" or "ports". There are four PHY's ("channels") on each connector, and therefore the controller offers eight channels (PHY's). A single channel is not capable of more than 12Gbps. That number at the end of an LSI part number, like "9341-8i", means 8 internal PHY's.

The individual channels can be used in a narrow port configuration (12Gbps) or a wide port configuration (4 x 12Gbps = 48Gbps) and it is common to see a wideport cable go from the HBA to a SAS backplane. It is also possible to go to an 8x wideport configuration. This is what @tvsjr was talking about, I believe. Mixing up all the various terms is not really helpful to getting points across, and it's useful to make sure we're all talking the same language. SAS is particularly crappy about the word "port" which is generally used to mean "the connection to a SAS endpoint" (how wonderfully abstract!) rather than any physical connector.
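For reference, the raw numbers behind that distinction (line rates only; a rough sketch):

Code:
# One SAS3 PHY ("channel") runs at 12 Gb/s line rate.
echo $(( 4 * 12 ))   # one connector driven as a 4x wide port: 48 Gb/s
echo $(( 8 * 12 ))   # all 8 PHYs on the HBA in wide ports: 96 Gb/s line rate
# Usable data throughput is lower once SAS encoding and protocol overhead are taken off.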

Back to the issue at hand, the point was made that you're going to need to walk through the system and tune it at multiple levels. Unlike many iSCSI products that mostly act as a conduit between a network controller and a RAID controller, such as that Win2016 solution mentioned, FreeNAS is actually doing substantial work and you will find it necessary to properly design your hardware (probably Chelsio), your pool (probably mirrors with lots of large disks), your network (probably turn off jumbo, or carefully tune this very sharp edge), etc. in order to get what you seek.

Here's reality... every time you double performance, you increase cost and effort 10X, especially at the top end. You'll also find very few people with experience to help you. At 40Gbps performance, you're in rarefied air... I bet there aren't a dozen people on this forum that have successfully implemented a 40Gbps solution. If you're intending to use this stuff for production workloads, and you don't have an intimate understanding of how this stuff works (plus the time to fight the battles yourself), you may find yourself better served with a paid, supported solution.

I did not want to step on toes by providing that same answer, but I am going to requote it and emphasize: this comment is 100% spot-on, could not have said it better myself. ZFS is a software package that does in software something that has traditionally been done in dedicated RAID controller silicon. What you want is absolutely possible, in my experience, but may be outside your ability to achieve, especially if you are unwilling or unable to take the deep technical dive into understanding the issues.

ZFS is capable of some truly amazing things, but usually to get there, there's a commitment of resources that has to happen that's a little greater than for some other packages.

I'm sorry that I don't have time this month, or next, to spend a lot of time on this, because it's the kind of thing I would normally love to talk about at length. I'll see if I can pop in to answer anything that's not been answered sufficiently.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
This is the current zpool status, but only with what is visible (only half the disks show up):
(zpool status output quoted from the post above)
Oh my, that's either a 32+-way mirror or a 32+-wide RAIDZn vdev. Holy crap. That is the first and foremost cause of your atrocious performance.

Please bear with me, but I want to immortalize this setup, so please pipe the command I gave you into less and copy/paste it piece by piece. Either that, or log in via SSH and just use the scrollback buffer.

zpool status | less
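(If SSH is enabled, another way to capture the whole thing without fighting the scrollback -- paths here are just examples:)

Code:
# On FreeNAS, dump the full output to a file:
zpool status > /tmp/zpool-status.txt
# Then fetch it from any machine with scp, e.g.:
# scp root@20.20.20.10:/tmp/zpool-status.txt .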

To choose the proper fix for this, you'll have to experiment. Try out mirrored pairs and the RAIDZ1 option in as close to a real production environment as possible and choose the one that fits best - but that might be an oversimplification, since we are dealing with a lot of bandwidth.
 