SuperWhisk
Dabbler
- Joined
- Jan 14, 2022
- Messages
- 19
This is my first TrueNAS setup, or indeed any kind of NAS or home server setup other than pfSense.
I have tried to be thorough and have spent a lot of time reading posts here on the forum as well as other places, and aside from a few hopefully minor points I think my plan follows the generally recommended guidelines.
I know it's a long post, so thank you in advance to those who take the time to read it and respond.
My Usecase and Goals
- Home Use
- "Better than my current data storage setup" - Surely even a small step in the right direction is better than no step at all?
- Looking for bulk network storage to share across Linux, Windows, and MacOS.
- High Availability and Performance are nice-to-haves
- Data Integrity is Important/Critical (but refer back to point 2)
- I want to run a number of virtual machines on the same hardware, but these do NOT need to have their virtual disks backed by the TrueNAS pool
The Current Setup:
- iMac with a 256GB internal SSD and three USB hard drives - 2TB and 3TB for bulk storage, and 8TB for a Time Machine backup of the other two plus the internal drive.
- Linux and Windows machines are not backed up at all, but they don't currently contain anything that isn't replaceable. All the important data like pictures is on the iMac.
- This really feels like a "better than nothing at all" setup, but not much better.
The New (to me) Hardware:
- Dell PowerEdge R630 Server with 8 2.5" SAS/SATA bays connected to a Dell HBA330mini Controller (LSI 3008 chipset)
- Originally had a PERC H730mini RAID card that includes an "HBA mode" (which I verified does pass through drive data like SMART, serial number, etc.), but I decided to bite the bullet and get the HBA330mini anyway just to be safe. Sadly, both use a (proprietary?) PCIe connector, so I can't use the RAID card in another machine, or in a different slot on this one.
- 8 new 1TB Seagate Constellation.2 2.5" SATA hard disks (plus two cold spares)
- Older model enterprise drives purchased from a liquidator, but verified to the best of my ability to be completely unused. Clean SMART, and no scratches on the pins from having a SATA cable attached and removed (much harder to fake that than clean SMART).
- I was considering retail WD Red Plus drives of the same capacity (nobody seems to make 2.5" drives over 1TB) but these came up and the price was right (1/4 of the cost of the reds) so I went for it. Happy so far following burn-in testing.
- Dual Xeon E5-2620 v4 (8 cores each with HT - 32 Logical Cores)
- 64GB of DDR4 ECC Memory (32GB per CPU)
- 500GB Samsung 870 Evo SATA SSD connected to the on-board SATA controller (normally used only for disc drive and possibly tape backup).
- 500GB Samsung 970 Evo Plus NVMe SSD in a PCIe slot using an M.2-to-full-size-slot passthrough adapter (just re-mapped pins; no smarts on the adapter).
- Dual 750W hot-swap PSUs, but currently no UPS or any other power loss protection - this is something I want to remedy in the future, but not in the budget at present.
- ESXi 7.0U3 on bare metal - this accomplishes the "run other VMs" goal, as obviously this hardware would be gross overkill for just NAS use.
- TrueNAS Core 12U8 in a VM with PCIe pass-through for the HBA controller.
- TrueNAS will be exclusively focused on being a NAS. No VMs. Let each do what it does best.
- Use the existing 8TB USB disk as a local backup of the zpool contents (not sure of the best way to do this; I know zfs has some built-in replication features, but I need to research that more).
- Eventually have an off-site backup, possibly using two 3.5" 4TB WD Reds that I have here unopened. Maybe add a third for z1 parity instead of just striping with no redundancy.
- ESXi is installed on the NVMe SSD, with the SATA SSD simply there as an extra ESXi datastore (ok, ok, it also hosts an EFI shell, startup.nsh, and Intel's EDK II NVMe EFI driver so that I can boot from the NVMe SSD, but once ESXi boots that memory gets dumped and ESXi has its own drivers).
- Obviously ESXi is NOT installed on redundant storage, and I acknowledge and accept this single point of failure. I will make sure to back up all my configs (including TrueNAS) to my other computers regularly.
- Using a plain FreeBSD 12 VM, I did burn-in testing on the hard disks + controller using a combination of SMART tests, badblocks, and the sol-net array script found here on the forums. I didn't do months-long testing as some here do, but after a solid week of testing I am reasonably confident that there are no bad apples in the bunch (the two spares got the same testing regimen). This is certainly more testing than any of my current drives ever got (which is absolutely none).
- Using Dell's built-in diagnostics, I ran "extended" memory and other hardware tests for a number of days with no errors. I have not used memtest86 or memtest86+ (since the original seems to have locked most of the tests behind a paywall).
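For the local-backup-to-USB idea above, one common approach is snapshot replication with zfs send/receive into a single-disk pool created on the USB drive. Here's a sketch of what that could look like - the pool names "tank" and "usb8tb" are placeholders I made up, and the commands are printed as a dry run rather than executed:

```shell
#!/bin/sh
# Sketch of a local backup via ZFS replication.
# Pool names are assumptions, not anything TrueNAS creates by default.
POOL=tank          # hypothetical name of the main 8-disk pool
BACKUP=usb8tb      # hypothetical single-disk pool on the 8TB USB drive
SNAP="${POOL}@backup-$(date +%Y%m%d)"

# Dry run: print the commands instead of running them.
echo "zfs snapshot -r ${SNAP}"
echo "zfs send -R ${SNAP} | zfs receive -Fu ${BACKUP}/${POOL}"
# Later runs could do incremental sends from the previous snapshot:
#   zfs send -RI <previous-snap> <new-snap> | zfs receive -Fu ...
```

TrueNAS also exposes this as Replication Tasks (plus Periodic Snapshot Tasks) in the web UI, which is probably easier than hand-rolling a script.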
- Create the TrueNAS VM and set up the zpool (still need to make a final decision on the zfs layout, and on how much memory, CPU, etc. to allocate).
- Still waffling between z1 or z2, but striped mirrors is out as I want more than 4TB of usable space from my 8 x 1TB drive array. I think I would be fine with the 6TB I'd get with Raidz2...
- Would be willing to throw one of the two CPUs in its entirety at TrueNAS if it was worth doing (that CPU would come with its 32GB of memory too).
- Test pool? (on top of the testing I did on the drives individually). Not sure what this looks like.
- Set up Samba and move data in. Once I'm confident in stability, re-purpose the 8TB USB disk as a local backup/copy of the pool contents.
- Schedule scrubs, SMART tests, config backups, and other monitoring.
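For reference, the raw arithmetic behind the z1/z2/mirrors decision above (these are ideal figures before ZFS metadata, padding, and reserved-space overhead, so real usable space lands somewhat lower):

```shell
#!/bin/sh
# Raw usable capacity of 8 x 1TB drives under each candidate layout.
# Ignores metadata/padding/reservation overhead, so actual space is less.
disks=8
size_tb=1
raidz1=$(( (disks - 1) * size_tb ))   # RAIDZ1: one disk's worth of parity
raidz2=$(( (disks - 2) * size_tb ))   # RAIDZ2: two disks' worth of parity
mirrors=$(( disks / 2 * size_tb ))    # striped mirrors: half the disks
echo "RAIDZ1=${raidz1}TB RAIDZ2=${raidz2}TB mirrors=${mirrors}TB"
```

This prints RAIDZ1=7TB RAIDZ2=6TB mirrors=4TB, matching the 4TB/6TB figures mentioned above.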
- What level of priority should a UPS purchase be? It sounds like zfs doesn't have the "write hole" problem of Raid5/6 but obviously sudden power loss is never a good thing.
- Burn-in testing on the pool itself? What does that look like? Performance is a marginal concern as long as I can at least saturate my 1Gbps LAN on occasion, which shouldn't be a problem even for a single SATA spinning disk on a 6Gbps interface.
- Is there any value at all to creating two virtual disks on different datastores (different physical drives) for the TrueNAS zfs boot pool, or is that pointless given the lack of redundancy in ESXi's boot drive?
- Any recommendations on VM settings for TrueNAS Core in ESXi other than passthrough for the HBA? (e.g., SCSI- or SATA-based virtual boot disk? Reserved memory and/or CPU cores?)
- Any red flags jump out in any of this that I haven't acknowledged?
- Should I consider dumping ESXi and running TrueNAS SCALE on bare metal instead? The Linux base would enable more widely compatible virtualization directly within TrueNAS, but it doesn't have the tried and true legacy of TrueNAS CORE.
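On the pool burn-in / 1Gbps saturation question above, here's a quick sanity check of the numbers, plus a dry-run of the kind of fio sequential test people often use for this - the fio parameters and the dataset path are illustrative assumptions on my part, not a prescribed recipe:

```shell
#!/bin/sh
# 1GbE line rate in MB/s (decimal). Real SMB payload throughput is a
# bit lower after protocol overhead, typically ~110-118 MB/s.
gbps=1
line_mb_s=$(( gbps * 1000 / 8 ))
echo "1GbE line rate: ${line_mb_s} MB/s"

# Dry run of an example fio sequential-write test against a dataset;
# the directory and sizes are placeholders.
echo "fio --name=seqwrite --rw=write --bs=1M --size=10G --directory=/mnt/tank/fio-test"
```

Any single modern drive's sequential rate should clear that 125 MB/s bar, which is consistent with the point that even one SATA spinner can keep a 1Gbps link busy for sequential workloads.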