I'm putting together a new FreeNAS build. The main hardware decision left is SLOG/L2ARC, to give the best chance of good performance and reliability.
Hardware -
- Xeon E5-1620 v3 (4 cores, 3.5 GHz+, since I might heavily use CIFS or other single-threaded services)
- 32GB RDIMM 2400 (Crucial DDR4 RDIMM kit) - I can always increase to 64GB in future if needed.
- Baseboard - either a Supermicro X10-something or perhaps a well-built X99 (Asus, ASRock)
- NICs - Chelsio T420 (10G dedicated links to an ESXi server and to a Windows workstation) + 1G Intel for LAN file shares + server admin.
- Data drives - redundant 7200 RPM HDDs (a mix of 4 TB and 6 TB), to be migrated from my old Windows server.
- ESXi file store - VMs will mostly be Windows; not many, but possibly 1 or 2 under heavy use (computational workstations).
- LAN general-purpose file share - a mix of folder/file uses, with some large-file manipulation (mass renames/moves/uploads/downloads of photo archives and data folders, up to 50-80GB at times, though mostly less). Clients: roughly 4 Windows PCs, 2 Windows workstations, 1 Mac.
- ZFS snapshots
- Data integrity (goes without saying)
- Performance - Once reliability is taken care of, I want a noticeable improvement over my current setup: faster and more consistent. For example, I want ESXi to keep running fast and smoothly during VMDK loads, saves, and snapshots. I don't need blinding speed, but these are 5-10GB operations, I go back and forth between VMs often, and I take quite a few snapshots while testing, so serving them quickly is worth attention. The same goes for the LAN file shares: accessing, modifying, and copying large folders back and forth should run smoothly.
- Keeping raw disk space down if possible - The workstation creates a lot of photo-processing datasets containing much duplicate data, and I have to keep disk costs down. I'm hoping it won't be needed, but I'm planning for the possibility of enabling dedup, even though it hits performance much harder than compression does. I reckon about 2 TB of the datastore would benefit from dedup at the moment, rising to perhaps 4TB in future; I can put that data on its own ZFS dataset. I'm hoping to compensate by speccing the server high enough to comfortably hold the dedup tables for 2-4 TB of deduped data in RAM, if that's possible.
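As a rough sanity check on the dedup RAM question, a commonly quoted rule of thumb (an approximation only, not a guarantee; actual usage depends on recordsize and how many blocks are unique) is about 5 GB of RAM per TB of deduplicated data for the dedup table (DDT). A minimal sketch:

```python
# Rough DDT (dedup table) RAM estimate using the commonly quoted
# ~5 GB-per-TB rule of thumb. This is only an approximation; real
# usage depends on recordsize and actual block uniqueness.
GB_PER_TB_DEDUPED = 5  # rule of thumb, not an exact figure

def ddt_ram_estimate_gb(deduped_tb):
    """Estimate RAM (GB) needed to keep the DDT fully resident."""
    return deduped_tb * GB_PER_TB_DEDUPED

for tb in (2, 4):
    print(f"{tb} TB deduped -> roughly {ddt_ram_estimate_gb(tb)} GB RAM for DDT")
```

On that estimate, 4 TB of deduped data wants around 20 GB just for the DDT, which would compete heavily with ARC on a 32GB box; that suggests the 64GB upgrade would be needed before enabling dedup at that scale.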
My questions:
- Where might SSDs help, and what size would be sensible? - Will it help to add SSDs for SLOG, L2ARC, or any other kind of caching? I want sync writes enabled on the ESXi/NFS share for data safety, but without performance on large (5 - 20 GB) reads/writes being too badly impaired. From reading the forum, a SLOG is only useful up to a given size, based on the amount of data that can be queued for writing during about 5 seconds, and 1/8 of RAM by default. I'd be happy to add SSDs (SATA or NVMe) to the build, but it's hard to tell whether they'll definitely help, where to use them, or whether there's no point. If I add them, they would probably be mirrored NVMe SSDs on PCIe (128 - 256GB Samsung SM/PM series, 380-500k IOPS, 1.5-2.5 GB/s) or fast SATA/SAS SSDs, for SLOG or other caching. The forum also seems to warn that having so much RAM that 1/8 of it flushed every 5 seconds at peak could overload the disk system. I'm stuck at that point and confused about what to do.
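The "about 5 seconds of writes" figure above can be sanity-checked with simple arithmetic: a SLOG only needs to hold the sync-write data that can arrive between transaction group commits (roughly 5 s by default), bounded by the fastest incoming link. A sketch, assuming a single 10 GbE link is the bottleneck:

```python
# Upper bound on useful SLOG capacity: fastest ingest rate times the
# transaction-group commit interval (~5 s by default). Assumes one
# 10 GbE link (e.g. a Chelsio T420 port) is the write bottleneck.
link_gbit_per_s = 10   # 10 GbE line rate
txg_interval_s = 5     # approximate default txg commit interval

ingest_gb_per_s = link_gbit_per_s / 8        # bits -> bytes: 1.25 GB/s
slog_needed_gb = ingest_gb_per_s * txg_interval_s

print(f"At most ~{slog_needed_gb} GB can accumulate per txg")
```

That works out to around 6 GB, so even a 128GB NVMe drive is far larger than the SLOG can ever use; low write latency and power-loss protection matter much more than capacity here.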
- Possible use of caching/BBU on a RAID card used as an HBA - I have an old 8-port MegaRAID card with CacheCade 2.0 (LSI's SSD write-back caching system) and a near-new battery. While ZFS doesn't like hardware RAID, could I use it as a pure HBA while still making use of its onboard battery-backed SSD caching? Reviews say it's very fast and greatly improves HDD write speeds. Would this help, or is the card not useful for my build? And does a UPS add much if the SSD cache is battery-backed?