I've been scouring the forums here and elsewhere for a while now while planning hardware for a NAS build, and I'm wrapping up the hardware side, which means moving on to the install. Not being experienced with Linux/BSD or enterprise hardware, there is a lot to take in, and there is quite a lot of advice spanning many hardware generations that may or may not still be valid. So before I start putting data on the server, I'm going to lay out what I 'know' so I can be told why I'm wrong. (Certainly not a dig at the community here; you all seemed extremely helpful during all my prior reading. It's just how the internet works: it's easier to tell me what's wrong than to tell me everything I don't know.)
First, hardware:
AsrockRack EPC621D6U-2T16R
Xeon 4210R
4x 32GB ECC Micron MTA36ASF4G72PZ-2G9J3. I couldn't find anything from the QVL besides Newegg third-party sellers (which I generally regard as scams), Amazon (again... scams), and suspiciously cheap eBay listings for 'new' memory. I wouldn't have had a problem with used memory from some server decommissioning somewhere, but 'new', cheap, and direct-shipped from China screams "I got a label maker and can label random sticks as whatever I want them to be" to me.
Silverstone CS381 case and backplane/bays. Somewhat infamous for drive cooling being a bit lacking, so I designed and made a bracket/duct that mounts between the drive bays with 3x high-static-pressure blower fans to keep air moving over the drives.
StarTech 4x M.2 SATA adapter to break out the SATA lanes that aren't going to the backplanes.
Micro SATA Cables Slimline SAS 8x to M.2 adapter, breaking out the SFF-8654 connector on the motherboard to 2x M.2 slots for NVMe SSDs (currently unused).
For a UPS, I'm planning on picking something supported by Network UPS Tools, since TrueNAS can be configured to shut down gracefully during a power outage.
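For reference, the plain NUT setup underneath is only a couple of config stanzas; TrueNAS exposes the same knobs in its UPS service GUI. A minimal sketch, where the driver `usbhid-ups`, the UPS name `myups`, and the password are assumptions for illustration:

```shell
# /etc/nut/ups.conf -- define the UPS (driver depends on the model):
# [myups]
#     driver = usbhid-ups
#     port = auto

# /etc/nut/upsmon.conf -- shut the host down when on battery and charge is low:
# MONITOR myups@localhost 1 upsmon <password> master
# SHUTDOWNCMD "/sbin/shutdown -p now"

# Once the driver is running, query the UPS status ("OL" = on line,
# "OB" = on battery, "LB" = low battery):
upsc myups@localhost ups.status
```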
Drives:
6x Toshiba MG08 14TB SATA 512e drives (MG08ACA14TA) for the storage zpool
3x WD Blue 1TB SATA M.2 SSD consumer (non-PLP) SSDs for a second zpool
1x WD Blue 250GB SATA M.2 SSD, consumer SSD, for one half of the boot zpool mirror
1x WD Blue SN570 250GB NVMe M.2 SSD, consumer SSD, for the other half of the boot zpool mirror
Usage:
Home use, single user. SMB shares, torrent/OpenVPN in a jail, a 'personal cloud' for an Android phone (open to suggestions on the current 'best' setup for this), and backing up Windows PCs.
In the future: perhaps more VM-related workload, as I certainly don't intend to use Win11 for my main desktop and will be moving to some flavor of Linux, but I may need a VM for certain Windows applications. If I decide to use Plex instead of just browsing a file share like a heathen, I've kept space free to add a GPU for transcoding.
First off, ashift/sector size, since it sounds like this isn't something I can change after setup: I should be going with ashift=12 (4 KiB) on the hard drives, and either 12 (4 KiB) or 13 (8 KiB) for the SSDs (details are lacking from the manufacturer). Are there any issues or settings to keep in mind for aligning emulated sectors vs. physical sectors vs. ZFS blocks? I.e., preventing write amplification from a 4 KiB ZFS write spanning two physical sectors on the 512e drives. Or should I just use ashift=13 for everything? I don't foresee a lot of sub-8 KiB files being stored.
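The reasoning above can be checked at the command line. A sketch of verifying what the 512e drives report and pinning ashift explicitly at pool creation (pool and device names are placeholders; TrueNAS does this through the GUI, but the underlying commands look like this):

```shell
# 512e drives report 512-byte logical / 4096-byte physical sectors:
smartctl -i /dev/ada0 | grep -i 'sector size'

# Create the HDD pool with ashift pinned to 12 (2^12 = 4096 bytes) rather
# than trusting autodetection of the emulated 512-byte sector size:
zpool create -o ashift=12 tank raidz2 ada0 ada1 ada2 ada3 ada4 ada5

# ashift is per-vdev and immutable once set; confirm after creation:
zdb -C tank | grep ashift
```

Since ZFS always issues I/O in ashift-sized, ashift-aligned units, ashift=12 on a 512e drive already guarantees a 4 KiB write never straddles two physical sectors.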
Second, compression. It sounds like I should just enable LZ4 and not think too much about it. I have enough CPU and memory to accommodate it, so there are next to no downsides. The majority of the files (space-wise) won't be compressible, but it will help with those that are, and save space on files that compress down below a single sector size?
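Enabling it is a one-liner, since child datasets inherit the property from the pool root (pool name `tank` is a placeholder):

```shell
# Enable LZ4 at the pool root; all child datasets inherit it.
zfs set compression=lz4 tank

# After some data lands, check how much it is actually saving:
zfs get compressratio tank
```

LZ4 also bails out early on incompressible blocks, which is why it costs almost nothing on media files.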
For the actual arrangement of the drives: the boot volume will be the 250GB drives mirrored. Bigger than it needs to be, but it's not like 120GB drives are available from non-questionable brands. Do I set up the mirror at install time by selecting both drives in the installer?
The 6x HDDs will be in a single Z2 vdev in their own zpool; nothing much to think about there.
The 3x 1TB SSDs will be in a single Z1 vdev in their own zpool, used for in-progress torrent files to keep the HDD zpool from getting horrifically fragmented; files get moved to the HDD zpool on completion. It will also host a fast network share. I should put the network share in a separate dataset from the torrents, so it can be incrementally backed up (rsync? something else?) to the HDD zpool as well.
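Since both pools are ZFS, snapshot-based replication is an alternative to rsync: incremental sends only transfer the blocks changed since the last snapshot. A sketch with hypothetical dataset names (`ssdpool/share`, `tank/share-backup`):

```shell
# First run: snapshot the share dataset and send it whole to the HDD pool.
zfs snapshot ssdpool/share@backup-1
zfs send ssdpool/share@backup-1 | zfs recv tank/share-backup

# Subsequent runs: incremental send of only the changed blocks.
# (-F rolls the target back to the previous snapshot before receiving.)
zfs snapshot ssdpool/share@backup-2
zfs send -i @backup-1 ssdpool/share@backup-2 | zfs recv -F tank/share-backup
```

TrueNAS can schedule exactly this via its periodic snapshot and replication tasks, including local pool-to-pool replication.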
No L2ARC; I'm not accessing a large set of small/medium files, so it wouldn't benefit me. No SLOG, as I don't have anything that's going to issue synchronous writes. If I end up running a VM in the future (either on the server, or on a remote PC with the VM stored on the server), that may bring more sync writes into the picture and perhaps benefit from a SLOG. If I do add a SLOG, it will be on mirrored NVMe SSDs. Power-loss protection wouldn't be strictly mandatory for either, since losing the L2ARC or the SLOG isn't a lethal threat to the zpool; only data in transit through the SLOG is at risk, and only if the SSD lies about sync writes being complete while the data is actually still in its cache. Unfortunately some do, so a PLP drive would be strongly preferred in this application.
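One point in favor of deferring the decision: a SLOG can be added to (and removed from) a live pool later without rebuilding anything. A sketch, with FreeBSD-style NVMe device names and the vdev label assumed from `zpool status` output:

```shell
# Attach a mirrored log vdev to an existing pool; non-destructive.
zpool add tank log mirror nvd0 nvd1

# If it turns out not to help, it can be removed again.
# (The vdev name, e.g. "mirror-1", is whatever `zpool status tank`
# shows under the "logs" section.)
zpool remove tank mirror-1
```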
For fan control, it sounds like ASRock Rack isn't the best supported, but this thread has some tools for ASRock Rack boards. I'll have to sort through that further to make sense of it and see what is supported by this BMC/IPMI.
That's the setup/configuration stuff. For operation/maintenance: scrubs 1-2x per month, SMART long tests 1-2x per month, and weekly SMART short tests.
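TrueNAS schedules all of these from the GUI, but the underlying commands are simple enough to run by hand when something looks off (device name is a placeholder):

```shell
# Start a scrub and watch its progress / results:
zpool scrub tank
zpool status tank

# SMART self-tests per drive; the scheduler just issues these on a timer:
smartctl -t short /dev/ada0    # weekly: quick electrical/mechanical check
smartctl -t long  /dev/ada0    # 1-2x per month: full surface scan
smartctl -a /dev/ada0          # review the self-test log and attributes
```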
Fragmentation can't really be fixed in place at the moment. If it comes down to it, moving all the data off, nuking the snapshots, and moving it all back will fix fragmentation, but that's obviously rather invasive.
With the unused SAS channels I have, when the time comes to upgrade, would it be possible or advisable to do the following?
1) Add an external SAS chassis with the new drives in it
2) Make a new vdev/zpool
3) Point all the automatic jobs at the new zpool, then move the data over
4) Decommission the old zpool and remove the drives
5) Move the new drives into the server
6) Put the SAS chassis back in the closet
That would upgrade my storage without having to resilver 6x, while also removing the fragmentation that would remain if I upgraded the drives one at a time. At the expense of losing all snapshots (unless I replicate with zfs send -R, which preserves them), so I had better be sure.
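The data move in step 3 can be done as a whole-pool replication, which rewrites everything sequentially on the new vdev (killing the fragmentation) while carrying the snapshots along. A sketch, with `tank` and `newtank` as placeholder pool names:

```shell
# Snapshot every dataset in the old pool recursively:
zfs snapshot -r tank@migrate

# Replicate the whole pool, snapshots and properties included (-R),
# into the new pool; receive rewrites the data sequentially:
zfs send -R tank@migrate | zfs recv -F newtank

# After verifying the copy, retire the old pool:
zpool export tank
```

Physically swapping the drives back into the server afterwards (steps 5-6) is fine; ZFS identifies pool members by on-disk labels, so the pool imports regardless of which ports the drives land on.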
Thanks for even reading my wall of text here.