I am new to TrueNAS and the forums, and I'm planning a build for later this year. I'm starting now so I can budget, and I'm looking for some direction. I have spent hours searching for answers to these questions, but so much of what I've read seems dated, in some cases by years, so I expect many of those answers are no longer accurate given the recent updates to TrueNAS. Any and all responses are greatly appreciated. So here goes:
My plan:
I'm looking for an expandable system that will serve three primary functions (plus one overriding priority):
1) Media/File Server (just a share drive as my primary media consumption will be at home on a wired network, so no Plex server or internet streaming necessary)
2) Backup target
3) iSCSI target for 2-3 (desktop OS) VMs (the VMs will be hosted on another computer over a 10 GbE network)
4) REDUNDANCY & DATA SAFETY is my primary concern, followed closely by expandability. Cost is at the bottom of my priorities, BUT I am cost conscious. Starting with a larger number of 1-2 TB drives and upgrading them later is good: the system complexity for future expansion is designed in from the beginning, but initial costs are lower due to the smaller drive size. A smaller number of larger HDDs is not as good, since each capacity increase may be significantly more expensive (needing to buy several large HDDs at once) and the design complexity may limit what each expansion looks like. In short, cost is a consideration, but I'll spend more now if it means spending less later (even if TCO ends up higher over the long run) OR if it buys an easier upgrade path.
My first questions are about VDEV/Pool layout
1) How are hot spares allocated across the VDEVs and the pool? For example, with a 16-bay chassis (not including boot) and 2x 8-drive groups, I'd have 2x 7-drive Z3 VDEVs and 2 hot spares. Will that be 1 hot spare per VDEV (each VDEV has access to only 1 hot spare), or 2 hot spares available to the pool (either VDEV can use either spare as needed)?
2) What are the current recommendations for the number of drives per VDEV? Does this change with 2 TB vs 4 TB vs 10 TB drives, etc.? I know more drives per VDEV is more cost efficient per TB of usable space, but I'm weighting stability/reliability first, then cost, at roughly 70 (stability) / 30 (cost):
A) 7 drives per VDEV + a hot spare (8 drives per group; chassis with multiples of 8 drives)
B) 11 drives per VDEV + a hot spare (12 drives per group; chassis with multiples of 12 drives)
C) 13-14 drives per VDEV + 1-2 hot spares (15 drives per group; chassis with multiples of 15 drives)
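To make the cost side of that 70/30 tradeoff concrete, here's the rough back-of-the-envelope math I've been using (my own sketch; it ignores ZFS metadata, padding, and slop-space overhead, so real usable capacity will be somewhat lower):

```python
# Rough usable-capacity comparison for the RAIDZ3 vdev widths above.
# Ignores ZFS metadata/padding/slop overhead -- only useful for comparing ratios.

def raidz3_usable(drives_per_vdev: int, drive_tb: float) -> float:
    """Approximate usable TB for one RAIDZ3 vdev (3 drives' worth of parity)."""
    return (drives_per_vdev - 3) * drive_tb

DRIVE_TB = 2.0  # assume 2 TB drives for the initial build

for width, spares in [(7, 1), (11, 1), (14, 1)]:
    group = width + spares                    # bays consumed per drive group
    usable = raidz3_usable(width, DRIVE_TB)
    efficiency = usable / (group * DRIVE_TB)  # usable TB per raw TB purchased
    print(f"{width}-wide Z3 + {spares} spare: "
          f"{usable:.0f} TB usable, {efficiency:.0%} of raw capacity")
```

With 2 TB drives this works out to roughly 50% efficiency for option A, 67% for B, and 73% for C, which matches my understanding that wider VDEVs are cheaper per usable TB.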
3) To increase capacity (assuming available HBA ports and empty drive slots), which approach is better? Again, cost is a lower priority, but I don't want to waste money for no reason:
A) Add a VDEV first (cheaper option)
B) Swap out HDDs to a higher capacity first (seems to be more expensive sooner)
C) If swapping HDDs to a higher capacity (assume 2x Z3 VDEVs striped), is the upgrade 1 drive at a time for the whole pool, or 1 drive per VDEV at a time (so 2 drives could be swapped out at once, as long as it's 1 per VDEV)?
4) Any thoughts (besides hardware failure rate and cost) on the pros/cons of 1 larger chassis (e.g. Supermicro 24/36/45/60/90 bays) vs several 12-bay (or similar) JBODs? I'm considering either:
A) Larger Supermicro chassis (45-90 bays; multiple 15-drive groups) - PROS: higher physical density, fewer control boards to fail. CONS: limited to 6 Gbps backplanes, because I don't intend to spend $10k on a newer one.
B) Multiple Supermicro CSE-801L (12-drive groups, 1 per chassis), Chenbro RM25324 (multiple 12-drive groups), or AIC RSC-2MS (multiple 8-drive groups) with the necessary Supermicro CSE-PTJBOD-CB (control) and Intel RES3TV (expander) cards. PROS: lower startup cost (I can add as I need), potential for 12 Gbps at lower cost. CONS: higher cost to expand, since I have to purchase the chassis/control/expander as well as the drives, and more devices = higher potential failure rate.
5) If building my own JBOD(s), is there a recommendation between the version 1 and version 3 Supermicro JBOD control boards, CSE-PTJBOD-CB1 vs CSE-PTJBOD-CB3? As best I can tell, the only real advantage of the v3 board is fan speed control via IPMI (which IS significant). Are there other advantages/disadvantages to either board (other than cost and availability)?
Last question (for the moment):
6) Does anyone have experience with something along the lines of the following hardware for a pool? I know HighPoint HBAs have previously been considered less reliable (drivers/support, etc.), but I haven't found much regarding the newer NVMe cards in TrueNAS. The following SEEMS like a spectacular idea, but can someone speak to whether it's a good or bad idea, and why? If it's a bad idea, is there a better option for this level of performance?
HighPoint SSD7580 (A or B - is one better than the other, aside from hot swap, which I don't need?)
IcyDock MB873MP-B
8x 1 TB NVMe drives (I only need 2-3 TB usable, but I could always upgrade to 2 TB sticks if I needed more)
Should this be 4x 2-drive mirror VDEVs, striped (i.e. RAID 10), or a single 8-drive Z3 VDEV?
Consideration should be for redundancy. RAID 10 (for the sake of conversation) means up to 4 drives could fail, BUT only 1 per VDEV, vs any 3 drives failing in the Z3 option (Z3 seems like the better choice for redundancy, but remember, I'm looking for recommendations).
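To put numbers on that comparison, here's a quick combinatorial sketch I worked through (my own reasoning, assuming failures are simultaneous and equally likely across all 8 drives): the mirror pool only dies when both drives of the same mirror fail, while the Z3 pool survives any 3 failures and dies on any 4th:

```python
from itertools import combinations

# 4 mirror vdevs of 2 drives each (drives numbered 0-7).
MIRRORS = [(0, 1), (2, 3), (4, 5), (6, 7)]

def mirror_pool_dies(failed: set) -> bool:
    """RAID 10 pool is lost if BOTH drives of any one mirror vdev fail."""
    return any(set(pair) <= failed for pair in MIRRORS)

def raidz3_pool_dies(failed: set) -> bool:
    """A single 8-wide Z3 vdev tolerates any 3 failures, loses on 4+."""
    return len(failed) > 3

# Enumerate every possible set of k concurrent drive failures.
for k in range(1, 5):
    cases = list(combinations(range(8), k))
    mirror_loss = sum(mirror_pool_dies(set(c)) for c in cases) / len(cases)
    z3_loss = sum(raidz3_pool_dies(set(c)) for c in cases) / len(cases)
    print(f"{k} concurrent failures: RAID 10 pool lost {mirror_loss:.0%}, "
          f"Z3 pool lost {z3_loss:.0%}")
```

By this count, 2 concurrent failures kill the RAID 10 pool about 14% of the time (4 fatal pairs out of 28 possible), and 3 concurrent failures kill it about 43% of the time, while Z3 survives all of those but has no chance against a 4th, so Z3 does look stronger for pure redundancy.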
Thanks in advance