Greetings! I just registered, but I've been watching FreeNAS off and on for nearly a decade. I have a strong storage/NAS background and I believe I'm caught up on the latest stickies and guides, but I wanted to seek real human advice about my upcoming build. Please don't hesitate to call me out if something I'm planning sounds stupid! :p I won't take it personally, and I'd much rather build it right the first time! I also tend to overanalyze things and spec stuff out for future growth, so recommending a little overkill is no problem vs. saving $50 today.
So, this will be my 3rd home server build, but my first experience with BSD, FreeNAS or ZFS. As a storage geek I've always wanted to run ZFS at home, but when I built my second server in 2008, ZFS on Linux (Fuse?) was nowhere near where I'd want it to be to trust my data to it. FreeNAS existed at the time, but it also wasn't there, IMHO. Needless to say things have come a long way in ~8 years, I'm ready to upgrade my server, and FreeNAS has evolved into exactly what I've always wanted!
My plan is to retire my original home server (~2004, P4, HW RAID, WinXP) currently serving as my backup target and move my current server (~2008, see specs below) into the backup role. Then I'll build a shiny new FreeNAS server (proposed specs below) as the primary.
Current Server specs (future backup target):
- Intel S3210SHLC Mobo
- Celeron E1500 (2x 2.2GHz)
- Can drop in a spare Q6600 (4x 2.4GHz) which would add VT-d support
- 4GB (2x2GB) DDR2 ECC
- I could fill the two empty slots to max out the board at 8GB, but I'd have to buy the RAM
- 2x IBM BR10i flashed to IT mode (yesteryear's M1015, based on 1068e - limited to 2TB drives)
- 11x 2TB HDD (10x mdadm RAID6 XFS, 1x hot spare)
- Plan to move 6 of these to the new FreeNAS server and rebuild the other 5 under ZoL as a replication target for the FreeNAS server
- This chassis has 15 bays, so I figure eventual possible growth to 3x vdevs of 5x2TB drives each.
- Ubuntu 12.04 LTS (upgraded in place from 8.04 LTS)
- Plan to rebuild as either CentOS 7 (first choice?) or Ubuntu 16.04 LTS
- Update: If I go ESXi on the primary server and let it handle any future VMs, then the backup machine could be bare-metal FreeNAS
Proposed FreeNAS Server specs:
- Supermicro X11SSM-F or X11SSH-LN4F Mobo
- $30 more adds two extra NICs (not too useful unless I go ESXi and VT-d them to FreeNAS) and an M.2 slot.
- E3-1245v5 or E3-1230v5 or i3-6100
- All have ECC, VT-x, VT-d and AES-NI. Update: I considered Quicksync as a growth feature, but apparently it's not supported in FreeBSD at all.
- 32GB (2x16GB) DDR4 ECC (empty slots for growth to 64GB)
- 2x 16GB SATA DOM for OS (mirrored)
- Or, if it's even possible, mirror a SATA DOM with an M.2?
- Update: System boot drives TBD. Options include DOM, SATA SSD, USB and M.2
- 2x IBM M1015 flashed to IT mode
- Update: 1x IBM M1015 flashed to IT mode + 1x HP SAS Expander + 1x PCIe Molex power 'mining card'.
- Allows VT-d passthrough of 24+ SAS ports while consuming only one motherboard PCIe slot
- 6x 3TB WD Red (plus 6x 2TB from current server) as two RAIDZ2 vdevs
- 20TB usable pool (~40-50% free)
- Chassis has 20 bays, growth for one future RAIDZ2 vdev w/ 8 drives
- 2x (or 3x if using M.2 instead of a 2nd SATA DOM) empty SATA ports in case I need a ZIL/SLOG or L2ARC down the road (no free bays, but I could stick SSDs to the internal chassis walls)
- Update: SLOG: If determined to be required in the future, there are lots of options:
- NVMe PCIe AOC, ie: Intel 750. Theoretically could be passed through with VT-d.
- NVMe M.2 SSD. Same theory on VT-d. Potentially cheaper/cleaner than an AOC, but to my knowledge there are currently none available with power loss protection (ie: the Intel S3500 M.2 is SATA, not PCIe). If that changes, it's a viable option.
- Standard 2.5" SATA SSD on the SAS expander, ie: Intel S35x0/37x0 - 'guaranteed' to work, just not as low latency as the NVMe options.
- Planning to build early-ish 2017 with FreeNAS 10-STABLE
- As a fun side note, I actually wrote a (very) small piece of software that runs under the hood in FreeNAS 10, so it will be neat to have that running on my own server :)
- Update: Completely forgot to mention that I'll be reusing my APC Smart-UPS 1500RM2U to power both servers.
- Plex or Emby or Kodi (jail or VM)
- Nextcloud or Owncloud (jail or VM)
- Windows VM with IP Camera NVR software - should record to a dataset on the FreeNAS zpool
Questions:
- Have I said anything outrageously dumb yet? :D
- Almost all of my old 2TB drives are 512B sectors, while the new 3TB drives will be 4kB sectors. I'm not proposing to mix these devices within a vdev, but would be combining the respective vdevs into one pool. I understand that this is not "perfect", nor is having vdevs with different disk counts and capacities, or otherwise unbalancing the pool in any way, but as far as I can tell this is not "asking for trouble" either? A small performance hit is OK. An increased risk of data corruption/loss is not. Am I missing anything here?
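As a sanity check on the mixed-vdev math, here's a rough sketch of the capacity and write-distribution behavior I'm expecting (illustrative raw-TB figures, ignoring the usual TB-vs-TiB and metadata losses; the proportional-write split is my understanding of how ZFS biases allocation toward the vdev with more free space):

```python
# Rough capacity math for the proposed pool: two RAIDZ2 vdevs of
# different drive sizes combined into one pool (illustrative only).
def raidz2_usable_tb(drives: int, size_tb: float) -> float:
    """RAIDZ2 keeps data on (drives - 2) disks' worth of space."""
    return (drives - 2) * size_tb

vdev_a = raidz2_usable_tb(6, 3.0)   # 6x 3TB -> 12 TB usable
vdev_b = raidz2_usable_tb(6, 2.0)   # 6x 2TB -> 8 TB usable
pool = vdev_a + vdev_b

print(f"vdev A: {vdev_a} TB, vdev B: {vdev_b} TB, pool: {pool} TB")

# ZFS biases new writes toward the vdev with more free space, so an
# empty pool would see roughly this split of incoming data:
share_a = vdev_a / pool
print(f"~{share_a:.0%} of new writes should land on the 3TB vdev at first")
```

That lines up with the ~20TB usable figure above, and suggests the imbalance mostly shows up as uneven write distribution rather than anything scarier.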
- Similar to the above question, down the road I may want to upgrade the 2TB drives to something larger (one at a time + resilver). For this reason I'd like to set ashift=12 on the vdev at creation, even though it currently uses 512B-sector drives. From my research, this will waste some space on overhead/padding, but again shouldn't lead to corruption, etc.? Am I missing anything here, or is this good planning for the long term?
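To put a number on the "wasted space" part, here's a simplified model of what ashift actually changes: the minimum allocation unit per block. This ignores RAIDZ parity and padding entirely; it just shows that the rounding loss only bites on blocks smaller than 4 KiB, which should be rare on a media-heavy pool with 128 KiB records:

```python
import math

def alloc_bytes(logical_bytes: int, ashift: int) -> int:
    """Smallest on-disk allocation for a block: round up to 2**ashift.
    Simplified model: ignores RAIDZ parity and stripe padding."""
    sector = 1 << ashift
    return math.ceil(logical_bytes / sector) * sector

# A tiny 1 KiB block: ashift=9 (512B sectors) vs ashift=12 (4 KiB)
print(alloc_bytes(1024, 9))       # 1024 - no rounding loss
print(alloc_bytes(1024, 12))      # 4096 - 3 KiB of padding

# A full 128 KiB record: identical either way
print(alloc_bytes(128 * 1024, 12))  # 131072
```

So for large media files the ashift=12 penalty is essentially zero, and it buys a painless path to 4Kn replacement drives later.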
- For my final 20-drive 3xRAIDZ2 vdev configuration, I'm leaning towards 6/6/8 vs. 6/7/7, even though the "2n+2" rule seems to be moot nowadays, particularly when compression is used. 6 additionally seems to be the "perfect" size for minimizing RAIDZ2 overhead, so I figured I'd capitalize on that for two vdevs. 7 disks vs. 8 is about the same in terms of overhead, both much worse than 6. Does anyone have a counterargument for going 6/7/7 instead?
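The RAIDZ2 overhead that the "2n+2" discussions argue about can be modeled roughly. This sketch counts the sectors one 128 KiB record consumes at each width, using my understanding of the ZFS allocator: data sectors plus two parity sectors per stripe row, with the whole allocation padded to a multiple of (parity+1) = 3 sectors. It ignores compression, which muddies these figures in practice:

```python
import math

def raidz2_record_sectors(record_bytes: int, width: int, ashift: int = 12) -> int:
    """Sectors consumed by one record on a RAIDZ2 vdev (simplified model):
    data + 2 parity sectors per stripe row, padded to a multiple of 3."""
    sector = 1 << ashift
    data = math.ceil(record_bytes / sector)          # 128 KiB -> 32 sectors
    rows = math.ceil(data / (width - 2))             # stripe rows needed
    total = data + rows * 2                          # add RAIDZ2 parity
    return math.ceil(total / 3) * 3                  # pad to multiple of p+1

for width in (6, 7, 8):
    used = raidz2_record_sectors(128 * 1024, width)
    print(f"{width}-wide: {used} sectors per 128 KiB record")
```

Interestingly, under this model the per-record differences between 6-, 7-, and 8-wide are smaller than the overhead folklore suggests, so the 6/6/8 vs. 6/7/7 choice may matter less than I assumed.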
- Speaking of compression... on a pure multimedia dataset dedicated to already-compressed data, should I leave the default compression on anyway? I don't care about a "negligible" CPU hit, but are there other possible downsides? Possible upsides? Some compression algorithms can make data larger; I have to assume LZ4 wouldn't, since it's the default and most people use their NAS for compressed media, but I hate assumptions, and my searching of both this forum and Google turned up very little...
- Update: On further research, I found that LZ4 'aborts' quickly if it detects that it can't compress data by a set amount. As such, it might as well be left on, even for datasets of incompressible media.
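For anyone else who hits this question, the early-abort idea looks roughly like the sketch below. Note the assumptions: Python has no LZ4 in the stdlib, so zlib stands in for it here, and the 12.5% minimum-saving threshold is my understanding of what ZFS requires before it stores a block compressed; treat both as illustrative, not as the actual ZFS code path:

```python
import os
import zlib

def maybe_compress(block: bytes, min_saving: float = 0.125) -> bytes:
    """Store a block compressed only if it saves at least min_saving of
    its size - the idea behind ZFS's treatment of incompressible data.
    zlib stands in for LZ4 here (LZ4 isn't in the Python stdlib)."""
    packed = zlib.compress(block)
    if len(packed) <= len(block) * (1 - min_saving):
        return packed
    return block  # incompressible: stored untouched, near-zero cost

text = b"abc" * 10_000      # highly compressible
noise = os.urandom(30_000)  # already-compressed media looks like this

assert len(maybe_compress(text)) < len(text)   # text gets compressed
assert maybe_compress(noise) == noise          # media stored as-is
```

So an incompressible media block never grows on disk; it just pays a brief compression attempt, which LZ4 abandons especially quickly.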
- Plex/Emby/Kodi and Nextcloud/Owncloud should all be easy enough to set up in a jail, but I see three options for the Windows VM:
- Option 1: Host it on the backup server (ie: KVM). The only argument for this seems to be that the backup server will do literally nothing when it's not receiving replication data (so much so that I'd prefer to power it down), so the Windows VM should be able to get great throughput, unless there's some limitation in the hypervisor.
- Option 2: Host it in a FreeNAS VirtualBox/bhyve jail. Most of what I've read suggests FreeNAS isn't the greatest VM host, even with the new-ish VirtualBox jails, though speculation is that bhyve might be better. I wouldn't hang my hat on speculation anyway, but does anyone have opinions on hosting a VM inside FreeNAS while maintaining good GbE throughput?
- Option 3: Virtualize both FreeNAS and Windows on top of ESXi. I've read that hosting FreeNAS on ESXi and then sharing the zpool back to host VMs generally gives terrible performance, and the reasons why make sense. My main concern is keeping high GbE throughput to the Windows VM for recording the IP cameras. A side issue is that going this way I'd lose access to AES-NI and Quick Sync from the CPU, though at this point both are moot "future possibilities".
- Does anyone have thoughts on these options (assuming whatever I do, I do it "correctly" [VT-d, etc.])? I'll likely end up testing all three before committing to anything, to satisfy my OCD... :)
- Update: It turns out I was grossly over-estimating the bandwidth required by typical 1080p H.264 IP cameras (I'll have 4). I have no concerns now about bandwidth from hosting the camera recording software on a Windows VM, whether inside FreeNAS or on ESXi.
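For the record, the back-of-envelope math behind that update (the ~8 Mbit/s per-camera figure is my assumption for a typical 1080p H.264 stream; real bitrates vary with scene complexity and codec settings):

```python
# Back-of-envelope check on IP camera recording bandwidth.
cameras = 4
mbps_per_camera = 8    # assumed typical 1080p H.264 stream bitrate
gbe_mbps = 1000        # gigabit Ethernet, line rate

total = cameras * mbps_per_camera
print(f"{total} Mbit/s total, {total / gbe_mbps:.1%} of a GbE link")
# Even doubling to a pessimistic 16 Mbit/s per camera only reaches
# ~6% of the link, so hypervisor overhead has enormous headroom.
```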
- I feel like I have to be missing something, but I've searched, I swear. Is there a list of all the volume properties that can be set in FreeNAS, or even a list of common ones recommended to be modified at pool creation time? For example 'autoexpand=on' and 'atime=off' come to mind as things I'd want on my pool. The best list I've found is http://docs.oracle.com/cd/E19253-01/819-5461/gazsd/index.html, but I've learned not to assume that Oracle documentation is 100% applicable to FreeNAS (not to mention 'autoexpand' isn't even in that list...). Any pointers would be much appreciated.