System Slowing down

Kennee · Dabbler · Joined May 1, 2020 · Messages: 10
So I'm a photographer/videographer and have been using FreeNAS for about a month. I store all my projects on my FreeNAS system and work on them from there. But I've noticed that the FreeNAS system starts to slow down after some hours, depending on my workload, and that the 32 GB of RAM is full. Is this because I'm running out of RAM?
Here's the hardware info:
Case: Cooler Master HAF Stacker 935
CPU: Intel Core i5-4670K
Motherboard: Gigabyte GA-Z87-D3HP
RAM: 32 GB
Ethernet: 1 Gb/s network
16-port SATA controller card
USB: 2x 64 GB USB 3.1 drives (for FreeNAS)
4x 12 TB hard drives (RAIDZ) for video projects - 11 TB used
4x 10 TB hard drives (RAIDZ) for photography projects - 12 TB used

Is there anything I can do to keep my FreeNAS system fast, other than restarting it before working on it?
 

Attachments: CPU.jpg · Ethernet.jpg · RAM.jpg

HoneyBadger · actually does care · Administrator · Moderator · iXsystems · Joined Feb 6, 2014 · Messages: 5,112
Your motherboard does have an Intel LAN card, so that's good, but I'd like to hear more about this piece of hardware:

16-port SATA controller card

Generally once someone goes beyond the 6-8 drives that are supported with onboard SATA ports, we push them towards LSI SAS HBAs; a generic "SATA card" might be the source of your woes, especially if it's overheating.

You're definitely not at the point of a full system, but RAIDZ is not optimal for random I/O. Photo editing should be working with files small enough to cache locally in RAM, but video editing directly from the unit will feel slow, since it will involve "random" seeking.
 

Kennee · Dabbler · Joined May 1, 2020 · Messages: 10
This is the controller card I have in the system.
LSI Logic LSI00244 SAS 9201-16i 16Port 6Gb/s SAS/SATA Single Controller Card.
 

HoneyBadger · actually does care · Administrator · Moderator · iXsystems · Joined Feb 6, 2014 · Messages: 5,112
Wonderful, it's a genuine SAS HBA then. Have you checked for potential heat issues (CPU and HBA) as well as the SMART health of your hard drives?
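
If you haven't looked at SMART yet, a quick sketch from the shell (the device name is hypothetical; yours may be da0 through da7 behind the HBA):

	# full SMART report for one drive
	smartctl -a /dev/da0

Temperature, reallocated sector count, and current pending sector count are the lines worth watching.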

As far as the amount of RAM used, it's all being used for cache. ZFS will use all available RAM for read caching, and release it back if other services require it. Does your system show it as using any swap?
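
To check swap and ARC from the shell (a minimal sketch, assuming SSH access to the FreeBSD-based FreeNAS host):

	# swap devices and how much of each is in use
	swapinfo -h

	# current ARC size in bytes, as reported by the kernel
	sysctl kstat.zfs.misc.arcstats.size

A large ARC by itself is normal; nonzero swap "Used" is the thing to note.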
 

Kennee · Dabbler · Joined May 1, 2020 · Messages: 10
Hmmm, yes, it's running a little hotter than normal; it normally runs at about 40°C. Also, looking at the pools, they show as healthy.
How do I check whether swap is being used? Here's a photo of the inside of the computer if that helps.
 

Attachments: Computer inside.jpg

SweetAndLow · Sweet'NASty · Joined Nov 6, 2013 · Messages: 6,421
What network protocol do you use? Do you have any tunables set? Can you describe "slow"? Transfers slow? Directory listings slow? Reads or writes slow? Web UI slow?
 

Kennee · Dabbler · Joined May 1, 2020 · Messages: 10
Transfers are slow. I'll be searching through large images or large videos, or transferring images or videos that are on the NAS, and it works fine at first.
But after an hour or so, it starts pulling up the images slowly, as if the drives were busy. Yet my transfer speeds from NAS to computer or computer to NAS are about 950 Mbps or so when I pull up Task Manager, at least when I first start working from the NAS.
Sorry, this is my first NAS and I only know a little about computers.
 

Yorick · Wizard · Joined Nov 4, 2018 · Messages: 1,912
So read speed. When it gets slow, what does zfs-stats -a have to say about ARC hit rate? Both file and metadata fitting entirely into ARC may be of interest.
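
For reference, a sketch of how you'd pull that (assuming the zfs-stats utility that ships with FreeNAS, run over SSH):

	# full report: ARC size, hit ratios, efficiency, tunables
	zfs-stats -a

	# or just the ARC efficiency section
	zfs-stats -E

The hit-ratio and metadata/data breakdown lines are the interesting parts.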
 

Kennee · Dabbler · Joined May 1, 2020 · Messages: 10
Is this what I'm looking for, for the zfs-stats?
 

Attachments: ARC Hit Ratio.jpg · ARC Request.jpg

Yorick · Wizard · Joined Nov 4, 2018 · Messages: 1,912
I was hoping for the actual output of that command from the CLI, accessed via SSH so you can copy/paste. One possibility is that tuning the ARC to hold more metadata may help, unless you want to just throw another 32 GiB of RAM at it.
Not saying it is the ARC. I'd love to see whether it's evicting metadata when it starts slowing down.
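
As a sketch of what that tuning looks like on FreeNAS 11.x's FreeBSD ZFS (the 12 GiB value below is illustrative, not a recommendation):

	# current metadata limit and usage
	sysctl vfs.zfs.arc_meta_limit
	sysctl kstat.zfs.misc.arcstats.arc_meta_used

	# raise the ARC metadata limit to 12 GiB
	sysctl vfs.zfs.arc_meta_limit=12884901888

To make it survive a reboot you'd add it under System -> Tunables in the GUI rather than setting it by hand.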
 

SweetAndLow · Sweet'NASty · Joined Nov 6, 2013 · Messages: 6,421
How large are these files you are accessing?
 

Kennee · Dabbler · Joined May 1, 2020 · Messages: 10
The motherboard only supports up to 32 GB of RAM. Sorry, you're way over my head with the CLI commands etc.

(SweetAndLow) Sometimes 700 MB 4K video files and 70 MB raw image files.

I was just reading about SSD caching. It sounds like that may help me.
 

SweetAndLow · Sweet'NASty · Joined Nov 6, 2013 · Messages: 6,421
If your memory is maxed out, an L2ARC could benefit you. It kind of seems like you are accessing lots of different data. Your file sizes are actually pretty small, but maybe you just have lots of them. In reality you have probably just hit the random-IOPS limit of your system: a single RAIDZ vdev only delivers roughly the random IOPS of one disk.
 

Yorick · Wizard · Joined Nov 4, 2018 · Messages: 1,912
Kennee said: "But after an hour or so, it starts pulling up the images slowly."

By that, do you mean showing the list of images in a directory? You say transfer speeds are still good at that point.

How many of those files are there? 10,000? A lot more? A lot less?

I’m trying to get a sense for whether it’s the cached directory information (“metadata”) that’s at issue here, or something else.
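
If you don't know the count offhand, a rough way to get it from the shell (the path is hypothetical; substitute your dataset's mount point):

	# count regular files under the photography dataset
	find /mnt/photos -type f | wc -l

Fair warning: that walk itself churns through metadata, so run it while you're not working.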
 

Kennee · Dabbler · Joined May 1, 2020 · Messages: 10
Yup, about 1,700,000 or more images and Photoshop files, and about half that in videos. Oh, and the Photoshop files can get up to about 2 GB.
 

Yorick · Wizard · Joined Nov 4, 2018 · Messages: 1,912
Oh right. Yes. You're running out of cache space for metadata.
So, a thing about L2ARC, the secondary cache: its index takes away from ARC, the primary RAM cache. You want it large enough to solve the problem, but not so large that it makes the problem worse.

Do you have a small 128GB SSD? If so, that could be added as L2ARC as a test. If not, someone will be along presently to help calculate the amount of metadata nearly 2 million files generate, the amount of ARC the L2ARC will eat, and the optimal size of that L2ARC.

About adding L2ARC: that's reversible, but adding a disk to the pool proper is not. The latter has catastrophic impacts on pool redundancy, so it's best to be very deliberate when adding read cache and to make sure it's really being added as read cache.

Come TrueNAS 12, which should be out in summer, you'll have another option: remove the L2ARC again, get a second SSD of the same size, and add the pair as a mirror to the pool as a "special allocation vdev" that holds only metadata. For that, definitely calculate how much metadata you will hold first. This is also not reversible and will require careful planning.
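
For the record, the command shape for that special vdev (OpenZFS syntax as expected in TrueNAS 12; pool and disk names here are invented, and as said, plan first because it cannot be undone):

	# add a mirrored metadata-only special vdev
	zpool add tank special mirror da8 da9

	# optionally also send tiny file blocks to it, per dataset
	zfs set special_small_blocks=32K tank/photos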

What is your "pool layout" currently? Number of disks, in what kind of vdev? As in, 6-wide RAIDZ2, or 3 mirrors, or something else ...

How much data do you have total in those files, expressed in TiB?

Here is a very rough formula for estimating metadata size, taken from a reddit thread:

“If it's helpful for anyone, here are the estimates I used for sizing some metadata vdevs. Metadata is roughly the sum of:

a) 1 GB per 100k multi-record files.
b) 1 GB per 1M single-record files.
c) 1 GB per 1 TB (recordsize=128k) or 10 TB (recordsize=1M) of data.
d) 5 GB of DDT tables per 60 GB (recordsize=8k), 1 TB (recordsize=128k) or 10 TB (recordsize=1M) of data, if dedup is enabled.
e) plus any blocks from special_small_blocks.”

So roughly 20 GiB for a), plus more for c) depending on the total TB of data, and assuming no dedupe (hopefully; you don't have enough RAM for dedupe), a 128 GiB SSD should be good as an L2ARC right now, if you have one "flying about" to use as a test balloon.

(L2ARC size in kilobytes) / (typical recordsize -- or volblocksize -- in kilobytes) * 70 bytes = ARC header size in RAM.
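
Plugging in the numbers from above: a 128 GiB L2ARC is 134,217,728 KiB; at a 128 KiB recordsize that is 1,048,576 records; at 70 bytes per header that is about 73 MB of RAM.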

So ... around 70MiB of RAM for a 128GiB L2ARC, give or take? Someone who can do this better needs to check my math ...

Lastly, how much space is free on your pool? If there is enough free space for the move, and if the bulk of your photos are 1M or bigger, then you could potentially reduce the amount of metadata needed by creating a new dataset with recordsize=1M and moving all your photos there. That would be a lengthy move and definitely requires a sanity check before going down that path.
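
A sketch of that move, with hypothetical pool and dataset names:

	# new dataset that stores large files as 1M records
	zfs create -o recordsize=1M tank/photos-1m

	# copy the data across; recordsize only applies to newly written files
	rsync -a /mnt/tank/photos/ /mnt/tank/photos-1m/

The copy (rather than a rename within the same dataset) matters, because existing files keep their original record size until they are rewritten.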
 

Kennee · Dabbler · Joined May 1, 2020 · Messages: 10
I do have 2 Toshiba 240 GB SSDs; I'll use one of them. If the SSD doesn't help, will I have trouble removing it?

This is how I have it set up:
4x 12 TB hard drives (RAIDZ) for video projects - 11 TB used (total space 30 TB)
4x 10 TB hard drives (RAIDZ) for photography projects - 12 TB used (total space 26 TB)
Totals are rounded numbers.
Also, thank you all for taking the time to help. Truly appreciate it.
 

Yorick · Wizard · Joined Nov 4, 2018 · Messages: 1,912
Kennee said: "will I have trouble removing it?"

It's easy to remove as long as you add it as a cache disk, NOT a data disk. If you add it as a data disk, you cannot remove it again and you will have lost pool redundancy. You'd add this to one of your two pools.
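
The CLI shape, for reference only; the GUI's cache option is the safer route, and 'tank' and 'da10' are placeholders:

	# add the SSD as L2ARC -- note the 'cache' keyword
	zpool add tank cache da10

	# verify it appears under a 'cache' heading, not as a data vdev
	zpool status tank

	# removing it later is just
	zpool remove tank da10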
 

SweetAndLow · Sweet'NASty · Joined Nov 6, 2013 · Messages: 6,421
Kennee said: "I do have 2 Toshiba 240 GB SSDs; I'll use one of them. If the SSD doesn't help, will I have trouble removing it?"
Using RAIDZ (single parity) with drives this size could be considered bad practice. As drive sizes increase, rebuild times increase, and in the event of a disk failure you have a very high chance of losing your pool to a second disk failure during the rebuild.

You could also make your system faster by creating one pool with 4 mirrored vdevs. Storage space would be roughly 35 TB.
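
For illustration only (device names invented, and this means destroying and rebuilding the existing pools from a backup):

	# one pool of four 2-way mirrors: 12+12, 12+12, 10+10, 10+10
	zpool create tank \
	  mirror da0 da1 mirror da2 da3 \
	  mirror da4 da5 mirror da6 da7

Random IOPS scale with the number of vdevs, so four mirrors should behave much better for this workload than a single RAIDZ vdev per pool.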
 