New to TrueNAS, introduction and tuning recs

mcsweeto

Cadet
Joined
Jul 27, 2022
Messages
2
Good morning, everyone. I've been lurking around the forum the past couple months and am excited to become a member of this community.

A little about me:

I've worked in IT infrastructure for 12 years, mainly on the datacenter/virtualization/storage side of things. While I'm familiar with storage platforms from EMC, NetApp, and others, I am new to TrueNAS (Core). My jobs have included a lot of RHEL administration over the years and, having been a Linux user for the past 20 years, it's good to get a little FreeBSD back in my life.

A little about my recent build:

I've cobbled together parts that I think will suit my use case, which will predominantly be providing storage to a bare-metal k8s cluster in my homelab. There will likely be a blend of iSCSI and NFS depending on workload: some kubevirt, some web services, and possibly some home media tools to serve the rest of the home. The build consists of:

ASRock X470D4U AM4 Motherboard
AMD Ryzen 5 3500X CPU (6 core, 3.6/4.1 GHz, 65W)
32GB DDR4-3200 RAM (I know... not ECC)
4 x WD Blue SN570 256GB NVMe
- Attached via ASUS Hyper X16 (PCIe slot configured for x4/x4/x4/x4 bifurcation)
- Pooled in RAIDZ1 with default options
5 x 8TB WD (HGST) 7200 RPM SATA III drives
- Pooled in RAIDZ2 with default options
Mellanox 10GbE ConnectX-2 (old, I know, but flashed to current firmware and running stably so far)

The homelab uses an isolated 10Gb storage network between the k8s nodes and the TrueNAS filer, joined by a MikroTik 4-port switch. Management and other services on the TrueNAS box go over 1GbE on a more widely available internal VLAN.

So far I've just been making sure the hardware and OS are playing nice. I've also been trying to baseline performance in order to find the right setup for the use cases I've mentioned. I have never worked directly with ZFS, and while I understand the terminology conceptually, I have no practical experience on which to confidently base final design decisions... yet. For now, I'm going with two pools: one named "fast" built from the NVMe drives and one named "slow" built from the SATA disks. My initial attempts at baselining the performance of each are as follows. The tool used was kubestr (fio-based) against two StorageClasses in k8s, backed by the "fast" and "slow" pools over iSCSI.
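The commands were roughly as follows (kubestr flag names from memory, so treat them as an approximation):

Code:
# run kubestr's default fio test against each StorageClass with a 100Gi PVC
kubestr fio -s freenas-iscsi-fast -z 100Gi
kubestr fio -s freenas-iscsi-slow -z 100Gi

The results for each pool: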

fast:

Code:
Running FIO test (default-fio) on StorageClass (freenas-iscsi-fast) with a PVC of Size (100Gi)
Elapsed time- 25.0771555s
FIO test results:

FIO version - fio-3.30
Global options - ioengine=libaio verify=0 direct=1 gtod_reduce=1

JobName: read_iops
  blocksize=4K filesize=2G iodepth=64 rw=randread
read:
  IOPS=4965.363281 BW(KiB/s)=19878
  iops: min=2976 max=5928 avg=4969.966797
  bw(KiB/s): min=11904 max=23712 avg=19879.966797

JobName: write_iops
  blocksize=4K filesize=2G iodepth=64 rw=randwrite
write:
  IOPS=2518.377930 BW(KiB/s)=10090
  iops: min=953 max=3290 avg=2522.066650
  bw(KiB/s): min=3815 max=13160 avg=10088.666992

JobName: read_bw
  blocksize=128K filesize=2G iodepth=64 rw=randread
read:
  IOPS=5859.064941 BW(KiB/s)=750497
  iops: min=3550 max=7144 avg=5864.533203
  bw(KiB/s): min=454400 max=914432 avg=750660.250000

JobName: write_bw
  blocksize=128k filesize=2G iodepth=64 rw=randwrite
write:
  IOPS=2669.506836 BW(KiB/s)=342233
  iops: min=988 max=3288 avg=2671.566650
  bw(KiB/s): min=126464 max=420864 avg=341964.718750

Disk stats (read/write):
  sdc: ios=182418/89221 merge=2914/1519 ticks=2088920/2055219 in_queue=3605876, util=99.391953%


slow:

Code:
Running FIO test (default-fio) on StorageClass (freenas-iscsi-slow) with a PVC of Size (100Gi)
Elapsed time- 32.4202511s
FIO test results:

FIO version - fio-3.30
Global options - ioengine=libaio verify=0 direct=1 gtod_reduce=1

JobName: read_iops
  blocksize=4K filesize=2G iodepth=64 rw=randread
read:
  IOPS=4346.561035 BW(KiB/s)=17403
  iops: min=2230 max=6696 avg=4352.466797
  bw(KiB/s): min=8920 max=26784 avg=17409.966797

JobName: write_iops
  blocksize=4K filesize=2G iodepth=64 rw=randwrite
write:
  IOPS=858.351074 BW(KiB/s)=3449
  iops: min=266 max=1688 avg=1105.333374
  bw(KiB/s): min=1064 max=6752 avg=4421.458496

JobName: read_bw
  blocksize=128K filesize=2G iodepth=64 rw=randread
read:
  IOPS=5322.973633 BW(KiB/s)=681877
  iops: min=1704 max=8660 avg=5330.500000
  bw(KiB/s): min=218112 max=1108480 avg=682307.812500

JobName: write_bw
  blocksize=128k filesize=2G iodepth=64 rw=randwrite
write:
  IOPS=943.589050 BW(KiB/s)=121314
  iops: min=30 max=1916 avg=1134.479980
  bw(KiB/s): min=3840 max=245248 avg=145213.437500

Disk stats (read/write):
  sdd: ios=147595/29927 merge=2543/438 ticks=2130599/1435185 in_queue=3246460, util=98.905907%


Each pool has its strengths, but otherwise they seem pretty evenly matched. "Fast" and "slow" might not be the best names to use here. I realize the NVMe specs are not good and certainly not ideal for critical workloads, but it's what I had on hand, and I'll consider upgrading when they start causing me pain or frustration.

Would the community mind providing some feedback, something for me to consider for a round of tweaks before I get too hip deep in my lab work?

Thanks everyone, I'm looking forward to getting to know you.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
I've cobbled together parts that I think will suit my use case, which will predominantly be providing storage to a bare-metal k8s cluster in my homelab. There will likely be a blend of iSCSI and NFS depending on workload: some kubevirt, some web services, and possibly some home media tools to serve the rest of the home.
The iSCSI part would do best on mirrors, possibly the SSDs, while the HDD RAIDZ2 would provide bulk storage over NFS.
Generally, it would be best to start from the use case and work out a suitable pool geometry and parameters, rather than create pools and then look for use cases.
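To sketch the difference in geometry (illustration only; in practice you would build the pools through the TrueNAS GUI, and the device names here are placeholders):

Code:
# striped two-way mirrors from the four NVMe drives, for iSCSI
zpool create fast mirror nvd0 nvd1 mirror nvd2 nvd3
# RAIDZ2 across the five HDDs, for bulk storage over NFS
zpool create slow raidz2 ada0 ada1 ada2 ada3 ada4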

Each pool has its strengths, but otherwise they seem pretty evenly matched. "Fast" and "slow" might not be the best names to use here. I realize the NVMe specs are not good and certainly not ideal for critical workloads, but it's what I had on hand, and I'll consider upgrading when they start causing me pain or frustration.
Why "not ideal"? As long as you're not trying to use them as SLOG (which NFS may use, if you do enforce sync writes) or hammering them with constant writes (is this the risk here?), consumer drives are fine.
 

mcsweeto

Cadet
Joined
Jul 27, 2022
Messages
2
The iSCSI part would do best on mirrors, possibly the SSDs, while the HDD RAIDZ2 would provide bulk storage over NFS.
Generally, it would be best to start from the use case and work out a suitable pool geometry and parameters, rather than create pools and then look for use cases.


Why "not ideal"? As long as you're not trying to use them as SLOG (which NFS may use, if you do enforce sync writes) or hammering them with constant writes (is this the risk here?), consumer drives are fine.

Thanks for being the first to respond to my first post. :smile:

Agreed, the use cases have largely been identified. The pools and parameters are what I need to finalize; they exist now simply as something to start baselining with.

On the NVMe: even for consumer grade, there are better options available for the money spent. However, at the time I needed to replace slow boot drives with something better, and that's why I have these. Were I to buy NVMe for this build, I would go with something else.
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
iSCSI is best on mirrors
iSCSI with drives slower than your network bandwidth might (will) need a SLOG for best performance
  • Steady-state write speed, once the write cache is exhausted, is ~500MB/s on the 1TB version of the SN570; the 256GB version will likely be noticeably lower
  • Consumer drives are frequently tuned differently from server drives
iSCSI likes RAM (ARC); ~64GB is the suggested starting point for best performance
  • Check your ARC usage with "arc_summary" at the shell (quick check below)
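For a quick look at current vs. target ARC size from the shell:

Code:
# arc_summary prints a lot; filter for the ARC size lines
arc_summary | grep -i "arc size"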
 