Disk Performance Path

TravisT

Patron
Joined
May 29, 2011
Messages
297
I'm trying to remove bottlenecks on my FreeNAS server, and I know that one of them is my physical disk performance. I originally set up my FreeNAS box with a 4-disk RaidZ1 pool. Over the years I've increased the capacity of the disks from the original drives to 4TB WD Reds.

I am no storage expert, but I believe I need to increase the number of disks in the system to improve performance. I'm trying to build a roadmap for a systematic upgrade of the system that I can work toward over time.

One of the biggest performance problems I'm trying to solve is working with ~100K RAW images stored on this pool via SMB over 1Gb Ethernet.

I think:
- I should use 5-disk RaidZ2 vDevs
- to increase space, I should add additional 5-disk RaidZ2 vDevs to my pool

If those assumptions are correct, I think my next steps are:
- Build 5-disk RaidZ2 vDev
- Migrate existing data to new pool
- Destroy current 4-disk vDev
- Add another 4TB drive
- Create second 5-disk RaidZ2 vDev
- Add second vDev to pool

I'd love some feedback on this strategy, or recommendations on a better path to increased performance.
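
For reference (and please correct me if this is off), I think the pool layout I'm describing would be built with commands roughly like these, though I'd do the actual work through the FreeNAS GUI. Pool and disk names are just placeholders:
Code:
# new 5-disk RaidZ2 pool (placeholder names)
zpool create tank2 raidz2 da0 da1 da2 da3 da4

# later, grow the pool by adding a second 5-disk RaidZ2 vDev
zpool add tank2 raidz2 da5 da6 da7 da8 da9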
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
If you want to increase your IOPS, you need those vdevs to be evenly populated, or ZFS will write more often to the emptier one until they are about equal.

This would mean your migration plan isn't going to get you to more IOPS (at least not right away).
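
You can check how evenly the vdevs are filled at any time (pool name is just an example):
Code:
zpool list -v tank

The per-vdev ALLOC and FREE columns show where ZFS is directing new writes.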

You might find that 10G ethernet will be the best path to faster performance.

Certainly do RAIDZ2 with those larger drives and watch out for SMR when adding new ones. (see the resource from @Yorick)
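
A quick way to check what a drive actually is before buying (device name is just an example):
Code:
smartctl -i /dev/ada0

As far as I know, the 4TB WD Red models ending in EFAX are SMR and the EFRX ones are CMR, so the model line from that output is what to compare against the resource.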
 

TravisT

Patron
Joined
May 29, 2011
Messages
297
That makes perfect sense, but I hadn't thought about it. Is there a way to force the balancing of data across already-populated disks (i.e. moving files so they get rewritten)?

I'd like to think that my bottleneck is with my physical drives vs my network connectivity, but I could be wrong. Is there an easy way to tell which is the problem? Maybe iperf locally then across the network?
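
Something like this is what I had in mind for the network check (NAS address is just an example):
Code:
# on the FreeNAS box
iperf -s

# on the Mac
iperf -c 192.168.1.100 -t 10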
 
Joined
Apr 26, 2020
Messages
2
The max speed of 1Gbps Ethernet is 125MB/s. You can most likely hit that cap with two vdevs.

I agree with going to 10G Ethernet to increase performance, but I don't think two vdevs will be enough to saturate the 10G line. To increase your IOPS further, you could switch to mirrored vdevs instead of RAIDZ2. This will give you a lot more vdevs, but sacrifices storage capacity.
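
For example, the same ten disks laid out as mirrors would give you five vdevs instead of two (pool and disk names are placeholders):
Code:
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7 mirror da8 da9

Roughly five times the write IOPS of a single vdev, at the cost of half the raw capacity and less redundancy per vdev than RAIDZ2.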
 

TravisT

Patron
Joined
May 29, 2011
Messages
297
Without testing, I'm only speculating at this point, but as of right now I only have one vDev (4 disks, RaidZ1), so I think that *could* be my issue. And that's assuming everything else is working optimally.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
> One of the biggest performance problems I'm trying to solve is working with ~100K RAW images stored on this pool via SMB over 1Gb Ethernet.

What, exactly, is your performance issue? I am asking because I am using a 5-disk raidz2 and I can saturate GBit, but I am also not using 100k files.

- Is write slow? Copying a file to NAS from PC? If so, from which client OS?
- Is read slow? Copying a file from NAS to PC?
- Is showing the files in the directory slow?
- How much RAM does your FreeNAS have?
- Which NIC is in your FreeNAS?

Showing files in directory would be metadata, and the solution there is either more RAM, or a metadata-only L2ARC.
Write speed from MacOS would be sync, and the solution is to tell SMB not to sync.
Read speed would likely be either disk or network, ditto write speed from Windows or Linux.
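
For reference, a metadata-only L2ARC is just a cache device plus a dataset property (pool and device names are examples):
Code:
zpool add tank cache ada6
zfs set secondarycache=metadata tank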
 

TravisT

Patron
Joined
May 29, 2011
Messages
297
General performance is slow, especially in certain applications. Lightroom Classic is one that immediately comes to mind. To answer your specific questions:

- Writes are slow, specifically through Lightroom imports from Mac (Catalina) to the NAS (~20-30MB RAW files)
- Reads seem faster than writes, but it varies.
- Showing files in directories is very slow, but mainly in directories with a large number of files/subfiles.
- Server is running 32GB RAM (full specs in signature block)
- NIC in FreeNAS is Intel Quad Gigabit card (forget model number)

Because all of this is very subjective, I think a good performance test is in order to post some real numbers. Any recommendations on how to baseline performance and post results?
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
> Writes are slow, specifically from through Lightroom Imports from Mac

Sync. Turn it off SMB-side on FreeNAS. As per anodos:

You can turn off sync writes through the SMB protocol by setting the share-level auxiliary parameter "strict sync = no". This parameter is documented here: https://www.samba.org/samba/docs/current/man-html/smb.conf.5.html#STRICTSYNC
Do note that this setting is incompatible with time machine shares.
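
In FreeNAS that goes into the share's Auxiliary Parameters field; in plain smb.conf terms it would look something like this (share name and path are just examples):
Code:
[photos]
    path = /mnt/tank/photos
    strict sync = no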


> Showing files in directories is very slow, but mainly in directories with a large number of files/subfiles.

Metadata. Things you can do to help:
- Add more RAM. DDR3 should be inexpensive on eBay. 64GiB for sure, 128GiB if it's not too expensive
- Create a new dataset with 1MB record size, move all your large files (photos and such) over there, use that for sharing, delete old dataset. This cuts the amount of metadata down (example command after this list)
- When all else fails: Small (128GB or less) SSD as a metadata-only L2ARC
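
The dataset from the second point would be created with something like this (dataset name is just an example):
Code:
zfs create -o recordsize=1M tank/photos-1m

Recordsize only applies to files written after it is set, which is why copying the data into a fresh dataset matters.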
 

TravisT

Patron
Joined
May 29, 2011
Messages
297
I'm trying some of your suggestions. As a real-world example of current performance, I'm copying photos from my NAS over to the local SSD drive for editing/etc.

FreeNAS is reporting my link traffic in the KiB/s range with a couple of spikes into the MB/s range. Copying about 700 ~20MB files (roughly 14GB) took over an hour, which works out to less than 4MB/s.
 

TravisT

Patron
Joined
May 29, 2011
Messages
297
@Yorick - I've tried a couple things with not much success.
> Sync. Turn it off SMB-side on FreeNAS. As per anodos:
Did this on a new test dataset and moved data from the old one over (same pool). I didn't notice any significant increase in performance for writes.

> Create a new dataset with 1MB record size, move all your large files (photos and such) over there, use that for sharing, delete old dataset. This cuts the amount of metadata down
This seems to drastically increase the performance of browsing the datasets via Finder. While I don't have everything copied over to the new dataset yet, it seems promising that this has helped the performance of directories containing photos. Is it safe to use this anywhere the majority of files will be 1MB or more? I don't completely understand the inner workings of this parameter yet.

FWIW, after some really frustrating performance in Lightroom still, I tried some quick tests.

iperf from my mac to my NAS:
Code:
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  1.09 GBytes   937 Mbits/sec


DD locally on my NAS:
Code:
dd if=/dev/zero of=/mnt/globemaster/Media/deleteme bs=1024 count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes transferred in 8.814978 secs (116165917 bytes/sec)


Code:
dd if=/dev/random of=/mnt/globemaster/Media/deleteme bs=1024 count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes transferred in 19.969099 secs (51279229 bytes/sec)


DD from Mac over 1Gb Ethernet:
Code:
dd if=/dev/zero of=/Volumes/Media/deleteme bs=1024 count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes transferred in 10.429566 secs (98182418 bytes/sec)


Code:
dd if=/dev/random of=/Volumes/Media/deleteme bs=1024 count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes transferred in 11.927876 secs (85849316 bytes/sec)


These tests are probably not perfect for many reasons, but this is what I've concluded (please correct me if I'm wrong):
1. Networking is capable of Gigabit transfers
2. Local disk writing is capable of a ~115MB/s transfer rate
3. /dev/random locally on NAS is slower due to processor speed
4. Remote dd transfers are near/hitting Gigabit Ethernet limits

This may be a Lightroom-specific problem, but it's frustrating regardless. I'll keep testing and report any findings. I'm more than willing to throw some money at the NAS to increase performance, but I'm not convinced RAM or vDev additions will help much at this point. While my workstation is 10GbE-ready, my network switch is not (yet). I have set up lagg on the NAS (2x 1Gb interfaces), but I know that will only help with concurrent transfers, not with the speed of a single transfer.
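
One caveat I'm aware of with the tests above: dd with bs=1024 writes tiny 1KiB blocks, and /dev/zero compresses to almost nothing if compression is enabled on the dataset, so a bigger block size is probably a fairer test (same paths as above):
Code:
dd if=/dev/zero of=/mnt/globemaster/Media/deleteme bs=1m count=10000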
 

TravisT

Patron
Joined
May 29, 2011
Messages
297
I have been slowly trying to isolate this as a Lightroom-specific issue, and one of those tests was to create a new catalog and add photos (in place, no moving of files) to the catalog. This was to determine if my catalog was corrupt in some way.

That import took almost an entire day for about 11K photos. This seems EXTREMELY SLOW for just reading files.

Out of desperation, I downloaded a disk benchmarking tool for a second opinion. To my local SSD drive, I'm getting 2.8GB/s writes and 2.7GB/s reads. On my SMB shares, I'm getting 12MB/s writes and 3MB/s reads. While this is only one benchmark and may or may not be reliable, it's aligned with the performance I seem to be getting over the SMB shares.

My goal is to pinpoint the problem(s) before blindly throwing money at something. I don't think this is typical of performance on FreeNAS and I'd love to get this figured out. Any ideas or pointers on what to test/do next would be greatly appreciated.
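
One thing I plan to try next is watching the disks on the NAS while a Lightroom import runs, something along these lines (using my pool name):
Code:
# per-vdev/per-disk activity, refreshed every second
zpool iostat -v globemaster 1

# FreeBSD per-disk busy percentage
gstat

If the disks sit mostly idle while the transfer crawls, that would point back at SMB/network/client rather than the pool.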

-TravisT
 