Help setting up media NAS

Sharethevibe

Dabbler
Joined
Aug 21, 2019
Messages
21
Hi there,

I am setting up a new NAS (media storage; large quantities of files, mostly 10 MB or 40 MB in size).
Pool of 8x 8TB WD Reds in RAIDZ2, record size 1 MB.
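In plain ZFS terms that layout corresponds to roughly the following (a minimal sketch only; pool, dataset and device names are placeholders, and on FreeNAS the actual pool is normally built through the GUI):

  # 8-disk RAIDZ2 vdev plus a media dataset with 1 MB records
  zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
  zfs create -o recordsize=1M -o compression=lz4 tank/media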

So far I am getting very poor read/write performance (reads ~225 MB/s, writes ~400 MB/s) during 20 GB test runs, even though I think I have set the thing up properly.

As this seems way off normal speed, there must be one or two big things I can change to get better performance.
I did a lot of investigating, tuning and testing, but nothing affected the speeds.

I've attached a PDF write-up listing the NAS setup, settings and all tests done.

Thanks in advance for any suggestions!
 

Attachments

  • NAS r-w speed.pdf
    350.6 KB · Views: 3

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hey Share,

A single vdev will give you the speed of a single hard drive. So even though you have many drives in your pool, think of your pool as a single HDD rather than a group of 8.

Second point: your test protocol mentions a network transfer to a workstation. Be aware that the network speed of that workstation can very well be your bottleneck, and so can your network switch.
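If you want to rule the network out, a raw TCP test between the two machines will show what the link itself can do, independent of any disks (iperf3 ships with FreeNAS; the address below is just an example):

  # On the NAS:
  iperf3 -s
  # On the workstation, pointed at the NAS:
  iperf3 -c 192.168.1.10 -t 30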

You achieved a higher write speed because the volume of data you are writing fits in RAM, so no immediate disk access is required. But for reads, the system must go down to the drives, and so it drops from RAM speed to drive speed.
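One way to see how much of a read test is actually being served from RAM is to watch the ARC hit/miss counters while the test runs (FreeBSD sysctls, shown purely as an illustration):

  sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
  sysctl kstat.zfs.misc.arcstats.size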

Doing genuinely neutral and objective performance testing is very difficult. These are just a few examples of how your tests can be fooled.

Have fun with your setup,
 

Sharethevibe

Dabbler
Joined
Aug 21, 2019
Messages
21
Thanks for the swift reply.

The write-up shows that I also tested a 2-vdev setup (2x 4 disks) and even an 8-disk striped setup (...), and all of these pools gave me exactly the same (poor) read/write speeds...
Also, it is not the case that a 'vdev gives you the same speed as a single disk'; it gives you the same IOPS as a single disk. Every extra disk in the vdev adds to the read/write throughput (the IOPS figure only caps the number of I/O operations per second). And since ZFS handles reads/writes of these large files sequentially, the IOPS of 'just one disk' is never a limitation with files this size.
So those 400/225 MB/s speeds are indeed very low.
(I also included the calculation of the speed I was expecting, roughly as sketched below; it also shows where my current understanding of the mechanics stands.)
(If anybody sees mistakes there, just let me know.)
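Roughly, the kind of calculation I mean (assuming ~150 MB/s streaming per disk, which is an assumption, not a measurement):

  # 8-wide RAIDZ2 leaves 8 - 2 = 6 data disks
  echo '6 * 150' | bc   # ~900 MB/s from the pool, below the ~1250 MB/s 10GbE ceiling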

The connection to the workstation is a direct, peer-to-peer connection; at both ends there is a 10 Gb NIC.
(Besides that, I have also seen occasional transfer speeds, coming from cache reads etc., of up to 700-800 MB/s, so the connection is OK.)
(Good check, though.)

Writes indeed go to RAM first, right you are. But I think that only applies up to the size of the transaction group, doesn't it? With my 32 GB of RAM that defaults to a maximum of 4 GB, and the test runs were 20 GB in size. Yet I did not see the first part go noticeably faster and then slow down, so the writes are probably a mix of writes to RAM and writes to disk?
This seems logical, as pure writes to RAM (coming from a RAM disk) would normally show speeds up to the limit of the 10 Gb line (i.e. ~1250 MB/s).
But this does clarify why the writes are higher than the reads, and it also indicates that the read speeds are the pure disk speeds.
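(For what it's worth, that buffering limit can be read straight off the NAS; these are standard FreeBSD sysctls:)

  # How much dirty (not-yet-written) data ZFS will hold in RAM, and how often
  # a transaction group is forced out
  sysctl vfs.zfs.dirty_data_max
  sysctl vfs.zfs.txg.timeout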

What remains is that those speeds of 400/225 MB/s are indeed way below normal.
Any suggestions on how to correct are welcome!
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hey Share,

I confess that I did not read your complete protocol. As I said, an objective and neutral performance test is very hard to achieve; that is why I do not put much faith in such home-made protocols...

As for the caching, I confirm that FreeNAS uses all of its RAM unless you explicitly reduced that yourself with tunables. To test this, just do a write test above 60 GB; your RAM will never be able to hold that, so you will reach a point where you are forced down to disk speed. You can also read and copy the very same 10 GB file over and over, so that it ends up in the cache; your read performance should then increase.
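For example, straight on the NAS (paths are placeholders; note that on a compressed dataset /dev/zero gets squashed to nearly nothing, so use an uncompressed test dataset or random data):

  # Write well past the 32 GB of RAM so the test is forced down to the disks
  dd if=/dev/zero of=/mnt/tank/test/bigfile bs=1m count=65536
  # Read the same file twice; the second pass should come largely from ARC
  dd if=/mnt/tank/test/bigfile of=/dev/null bs=1m
  dd if=/mnt/tank/test/bigfile of=/dev/null bs=1m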

225 MB/s of actual throughput is about 1.8 Gbps on the wire. You will have to look at all the specs of your motherboard and everything along that path. In my server, for example, half of the PCIe lanes go straight to the CPU while the others are routed over a different path; for that reason not all my PCIe slots have the same speed.
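On FreeBSD you can see the link your HBA actually negotiated with something like this (mps0 is typical for these LSI-based cards, but the device name may differ on your box):

  # Shows the PCIe capability, including negotiated link width and speed
  pciconf -lvc | grep -A 8 '^mps0'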

The Dell H310 was never considered a top performer here... Not sure what it can achieve in reality, whatever the official specs from the manufacturer say...

Are you using deduplication? Compression? Encryption? There are so many things that will affect a performance test like the one you are trying to run...

Have fun testing your setup,
 

Sharethevibe

Dabbler
Joined
Aug 21, 2019
Messages
21
I think it's clear from all the tests done that the speeds are way off.
(Writes are probably boosted by the cache, and those reads of 225 MB/s show the real performance of the system when the disk pool is actually used.)

The PCIe slots have plenty of free lanes in this system (it has dual Xeon CPUs etc.), and I tested various slots (with bandwidth of up to 4000 MB/s).
So there's no bottleneck there.

Dedup is off. Compression on (also tested off). Encryption is off.
(all as in write-up).
So no sweat there.

I did/do also 'distrust' the Dell H310 disk controller. It is mentioned on this iX forum as an often-used controller, but it's fairly old (2008).
But can this controller be a bottleneck in terms of throughput? I believe that besides the directly attached disks you can attach quite a few more indirectly, so the card can probably handle quite a lot of MB/s. It's an 8-lane PCIe v2 card, so it should carry 8 x 500 = 4000 MB/s, right? (I only need, say, 1000.)
So I cannot imagine that this controller is a bottleneck (remember I already saw incidental speeds of 700-800).
(will do a fast from-cache-read test though, just to be sure).

I did do some looking into this controller because it does not support 4Kn disks.
My hunch is that the disks might be kept very busy emulating 512-byte blocks on top of their 4K physical sectors. Those 512-byte blocks never line up neatly, because ZFS uses variable block sizes: a record size of 1 MB is only a maximum, so it may for example pick 970 kB as the block size and then split that data across 8 - 2 = 6 disks, each disk getting ~161.7 kB, etc. So instead of simply bundling 8x 512-byte blocks coming from the controller into one 4K sector write, the disk has to read the 4K sector, modify the 512-byte portion, and write the 4K sector back.
I am not sure when this read-modify-write (RMW) mode actually occurs, and whether it is happening here, but I can imagine it puts a very heavy workload on the disks (which would account for the 3-4x lower speed we see).
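(That theory could be checked by looking at what the disks report and what ashift the pool actually uses, along these lines; device and pool names are examples:)

  # Logical vs physical sector size as reported by one of the pool disks
  diskinfo -v /dev/da0
  # ashift the pool was created with (FreeNAS keeps its cache file under /data)
  zdb -U /data/zfs/zpool.cache -C tank | grep ashift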
 

Sharethevibe

Dabbler
Joined
Aug 21, 2019
Messages
21
So this is an 8-disk pool, handling only large ~25 MB files, and still giving only 225 MB/s... :-(
(reading from Pool and writing to a RAM-disk on external Workstation).
I believe I have set the thing up correctly, the components are of the right spec as far as I can tell, and I have tried all sorts of settings, but nothing makes a difference.

Would really appreciate any suggestions you guys on the forum have for finding out where the bottleneck(s) are.

Extra info:
I also ran tests to check what the speeds would be when not writing to the external Windows PC but to an NVMe SSD pool (keeping the whole process on the NAS/ZFS). Unfortunately these tests showed the same low speed of 225 MB/s. Since the NVMe SSD has plenty of bandwidth, it looks like the interaction between the disks and the controller is not OK?

I also ran tests to check what the speeds would be when not reading from the disk pool but entirely from ARC/cache/RAM on the NAS, writing to the RAM disk (avoiding the disk pool and controller altogether). That gives a speed of 400 MB/s, indicating that there is also a major loss of speed on the path from NAS to workstation (perhaps related to going from NAS/ZFS to Windows/NTFS?).
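While such a test runs, the individual disks can be watched to see whether they are the ones struggling (pool name is a placeholder):

  # Per-vdev/per-disk throughput, refreshed every 5 seconds
  zpool iostat -v tank 5
  # Per-disk %busy; disks pegged near 100% at low MB/s would support the
  # RMW/controller theory
  gstat -p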

Attached is a somewhat graphical overview of the setup and test results.
I've also attached the specs again, just to be complete.

Thanks in advance for your thoughts!
 

Attachments

  • Graphic description of set-up and testresults.xlsx
    13 KB · Views: 272
  • NAS r-w speed.pdf
    350.6 KB · Views: 351

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hey Share,

Another thing I can add is that I would never trust Windows for much, and even less as a top performer, a reliable measuring tool or anything like that...

Can you try testing from Unix to Unix?
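For example, from a Linux client over NFS instead of SMB/Windows (address, share and file name are just placeholders):

  # Mount the share and read a large file straight to /dev/null
  mount -t nfs 192.168.1.10:/mnt/tank/media /mnt/test
  dd if=/mnt/test/somebigfile of=/dev/null bs=1M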
 