nexlevel
Cadet
- Joined
- Feb 22, 2015
- Messages
- 7
Greetings Experts,
I have been lightly dabbling into learning how to implement NAS capabilities into my home lab environment for the last few months or so. I've been familiarizing myself with FreeNAS to this end and stood up my first iteration of FreeNAS about two and a half years ago for simple tasks like backup storage, Plex, and Minecraft (for my kids). It's been a great learning tool and has since expanded roles to providing NFS and SMB storage repositories for my lab XenServers and ESXi servers, as well as ISO storage repository for the same.
Prior to upgrading/expanding the local storage of my main XenServer, I attempted to use FreeNAS as a remote SR for some of my VM's but was unable to fully test how it performed in that capacity due to a hardware failure that forced me to replace my aged Dell R900 (which also led to the expansion of the local storage for the XenServer). However, as many of us newer users discover, FreeNAS does not perform as fast as we come in expecting it to. This prompted me to undertake a bit of a "refresh" of my FreeNAS build to allow me to throw more hardware resources at it to help improve its performance.
After reading through some of the posts and blogs on SLOG, ZIL, L2ARC, etc., I came to the conclusion that there would likely be two ways that I could improve read/write performance:
- Add more RAM
- Add a separate flash based SLOG
===========================================
FreeNAS 11 (latest as of 8/27/2017)
Supermicro 4U 24x 3.5" SAS2 storage chassis | X8DAH+-F motherboard
BPN-SAS2-846EL1 backplane | LSI SAS9211-8i (both ports connected to 7 & 8 on the backplane)
Xeon L5630 @ 2.13GHz | 36GB Reg ECC RAM (both soon to be doubled, as the mobo is dual-socket)
HP Mellanox ConnectX-2 10Gbps single-port (direct connect to XenServer) | 2x 74GB SAS RAID1 for OS
Intel quad-port 1Gbps NIC for direct network connection to other machines
6x 4TB SATA RAIDZ2 | 3x 750GB SATA RAIDZ1 | 2x 1TB + 4x 500GB SATA RAID10
5x 147GB SAS RAIDZ1 | 2x 74GB SAS spares for OS
...and recently added but not yet properly configured:
MyDigitalSSD BPX 120GB M.2 2280 NVMe MLC SSD (PCIe Gen3 x4)
Mailiya M.2 PCIe to PCIe 3.0 x4 adapter (supports M.2 PCIe 2280/2260/2242/2230)
I know that I am being long-winded, but based upon my observations from following other similar posts over the last couple of years, the community seems to favor too much information and detail over not enough. So please humor me a little further.
In order to expand the system memory I also have to add a second processor, which I intend to do over the next few days. This will bring me up to 72GB of RAM and 8 Xeon cores.
Here is what I want to know: seeing that this is a purely lab-centric machine, what other considerations can I make to help speed it up in the read/write area? More importantly, and seemingly more challenging, reads.
I "know" (read as, anticipate) from the blog posts,
http://www.freenas.org/blog/zfs-zil-and-slog-demystified/
and
https://www.ixsystems.com/blog/o-slog-not-slog-best-configure-zfs-intent-log/
that implementing the NVMe as a separate SLOG should go a long way in increasing the write speeds. Reads are another beast altogether. I have a number of ideas floating in my head but have no real certainty as to whether my thinking is logical or has a shot at producing the desired outcome for my end-state. I know that some of this involves risk but please keep in mind that this is purely a lab machine with multiple backups of any data that I don't want to risk losing.
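In case it helps others thinking along the same lines, here is the back-of-envelope sizing math I am working from (my own arithmetic and assumptions, not from the linked posts; the 5-second txg interval and the "about two txgs in flight" figure are the commonly cited rules of thumb):

```python
# Rough SLOG sizing sanity check -- my assumptions, adjust as needed:
# - ZFS commits a transaction group (txg) roughly every 5 seconds by default
# - worst case, about 2 txgs of sync writes can be in flight at once
# - my 10GbE link caps how fast sync writes can arrive in the first place

link_mb_s = 10 * 1000 / 8         # 10Gbps wire speed ~= 1250 MB/s
txg_seconds = 5                   # default txg commit interval
txgs_in_flight = 2                # rule-of-thumb worst case

slog_needed_gb = link_mb_s * txg_seconds * txgs_in_flight / 1000
print(f"SLOG only needs to absorb ~{slog_needed_gb:.1f} GB")  # ~12.5 GB
```

So the 120GB BPX is more than big enough for a SLOG; if I understand correctly, what matters more is its sync-write latency and (ideally) power-loss protection, which is worth checking on that particular drive.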
Here are my thoughts:
- As previously mentioned, add another 36GB of memory to improve ARC
- Also add another NVMe or SSD device as L2ARC
- Reconfigure a new volume with no local swap partition and place swap on another separate flash/NVMe device? (Spit-balling on this one, as I don't know if it is even possible or what benefit, if any, it would provide beyond reducing the number of writes to disk for the same data. Maybe a flash-based L2ARC removes the need for swap space on mechanical HDD's?)
- A flash-based NVMe swap drive to support 72GB of RAM?
- Add more drives to my main RaidZ2 volume (maybe another 2-6 or more 4TB drives), and/or reconfigure them altogether into multiple RaidZ vdevs that are then striped, though I have no idea how to best accomplish this. I think this pool is my main bottleneck, and I feel I need more knowledge/understanding of how to properly set up vdevs to increase IOPS for my VM use-case. For instance, create six mirrored vdevs and then stripe them for six drives' worth of read/write performance, which would also reduce rebuild stress on the overall pool in case of a drive failure?
- Which prompts me to think... I could add 4 more 4TB drives, create 5 mirror vdevs, and then stripe those to get the performance of 5 drives and have 5 drives' worth of redundancy? (I know... I just re-read this... it's extremely late and I am running on fumes now, but I'll leave it so you can laugh at my expense.)
- I would like to generate 500+ MB/s of read throughput across my 10GbE interface to my XenServer so I can run a dozen or so storage-heavy VM's off of my remote storage repositories for lab exercises (NetScaler MAS, SQL, Exchange, Docker, ADC's, NFV, etc.) without my virtual instances crying about the SR's performance.
- I chose 500 MB/s because I have two machines, one with a single Samsung SSD and the other with a pair of striped Samsung SSDs, and each is able to clock ~500 MB/s of read throughput. (The single SSD is obviously more high-end than the striped pair.)
- I would like to figure out a way to get high read/write throughput for VM's while being able to withstand at least 2 drive failures in the pool.
- Continue running Plex from it, and reduce the latency/buffering I see through the app (which doesn't happen when playing the .mkv file directly from disk), as my family and I have grown quite fond of it, to the point of purchasing a Plex Pass recently.
- Utilize more of the FreeNAS services for hosting SNMP server, TFTP, WebDAV, etc. within my lab
- Prove out for clients that open-source solutions can be a viable option for creating robust lab environments on a limited budget, while creating an archetype model for accomplishing it.
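Since I keep going back and forth on layouts, here is a small sketch I put together to compare my current 6x 4TB RaidZ2 against the striped-mirror idea (pure arithmetic; the per-layout rules of thumb and drive counts are my own assumptions, so please correct me if they're off):

```python
# Compare rough usable capacity and read scaling of two pool layouts.
# Per-drive figures are assumptions for ordinary 4TB SATA drives.
drive_tb = 4

def raidz2(n):
    # RAIDZ2: usable capacity of n-2 drives; random IOPS roughly that
    # of ONE drive per vdev (sequential reads can still stream wider)
    return {"usable_tb": (n - 2) * drive_tb, "read_vdevs": 1}

def striped_mirrors(pairs):
    # Each 2-way mirror is one vdev; reads can be served by either side,
    # and vdevs stripe, so random reads scale with the number of pairs
    return {"usable_tb": pairs * drive_tb, "read_vdevs": pairs}

z2 = raidz2(6)           # current 6x 4TB RaidZ2
m5 = striped_mirrors(5)  # proposed: 10 drives as 5 mirrored pairs

print(z2)  # {'usable_tb': 16, 'read_vdevs': 1}
print(m5)  # {'usable_tb': 20, 'read_vdevs': 5}

# 10GbE budget check: is 500+ MB/s even plausible over the wire?
wire_mb_s = 10 * 1000 / 8   # ~1250 MB/s raw, before protocol overhead
print(wire_mb_s >= 500)     # True -- the network is not the bottleneck
```

One correction to my own late-night math above: 5 mirrored pairs should give roughly 5 vdevs' worth of random-read performance, but they only tolerate one failure per pair; two failures landing in the same pair lose the pool, so it is not really "5 drives' worth of redundancy."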