Hello, TrueNAS Community,

iliak

Contributor
Joined
Dec 18, 2018
Messages
148
It has been a while since my last post.

I'm in the process of setting up a Dell PowerEdge 740xd server as a dedicated storage server. Currently, it's equipped with 8 x 8TB NVMe drives, with plans to expand. I aim to organize these into 2-3 ZFS pools with RAID-Z2 for redundancy and am considering setting aside one drive as a spare to enhance fault tolerance. Do you have any suggestions for adjustments to this configuration?
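
For reference, here is a rough sketch of two ways the current eight drives could be laid out. This is CLI form only, with placeholder device names; in practice TrueNAS would build the pools through the web UI:

Code:
# Illustrative only -- nvme0n1..nvme7n1 are placeholder device names.
# Variant 1: a single 7-wide RAID-Z2 vdev plus one hot spare
#            (two drives of parity, roughly five drives of usable space)
zpool create tank raidz2 nvme0n1 nvme1n1 nvme2n1 nvme3n1 nvme4n1 nvme5n1 nvme6n1
zpool add tank spare nvme7n1

# Variant 2: two 4-wide RAID-Z2 pools (two parity drives per pool, ~two drives usable each)
zpool create tank1 raidz2 nvme0n1 nvme1n1 nvme2n1 nvme3n1
zpool create tank2 raidz2 nvme4n1 nvme5n1 nvme6n1 nvme7n1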

This server will handle critical storage tasks, so I'm evaluating the best boot drive configuration for TrueNAS, particularly with regard to its disk usage patterns for logging and system operations.

I have the opportunity to install two 280GB Intel Optane SSDs in HHHL format for the boot drives. My questions to the community are:
  • Boot Drive Overkill? Considering the critical nature of our storage and the heavy logging, are the Optane SSDs overkill for boot drives? I'm leaning towards Optane for its robustness and performance, especially for write-intensive operations. However, I'm curious about your perspective on whether this is a suitable choice or if there's a better alternative for TrueNAS's needs.
  • TrueNAS Disk Usage - How demanding is TrueNAS on the boot drives, especially in scenarios with extensive logging? Would the system significantly benefit from the high endurance and rapid write capabilities of Optane drives, or is this an unnecessary luxury? (A quick way to measure the actual write load is sketched after this list.)
  • Alternative Recommendations - If you think the Optane SSDs might be excessive for this purpose, could you recommend other SSDs that would offer reliable performance for TrueNAS as boot drives without being overspecified for a storage server's needs?
  • Configuration Advice - Do you have any tips on configuring the pools or the overall storage strategy to optimize for redundancy and performance?
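
As a rough way to ground the logging question, one could watch the actual write rate on an existing boot pool with something like the following (the boot pool name varies by version, e.g. freenas-boot on older FreeNAS installs and boot-pool on current TrueNAS):

Code:
# List pools to confirm the boot pool's name, then sample its I/O
# per vdev over 60-second intervals.
zpool list
zpool iostat -v boot-pool 60
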
I value your insights and recommendations as I finalize the hardware setup for this crucial storage server. Thank you in advance for your guidance and advice!

My first FreeNAS server is still up and running without issues :)
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
I haven't followed your other thread, off the top of my head, and I can't speak to the Optane drives specifically.

With a current config backup, a boot drive failure should be a mere inconvenience. Automate that, for example with the multi_report.sh script from around here, or grab the nightly backups that are created automatically (though those are stored on the boot drive itself). With a mirrored boot pool you should be able to just replace one drive, assuming they don't fail simultaneously.
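
A minimal sketch of what automating that could look like, assuming the config database sits at /data/freenas-v1.db (its usual location) and that the TrueNAS box is reachable over SSH; the hostname, destination path, and script name below are placeholders:

Code:
# Run from a separate backup host via cron, e.g. daily at 03:00:
#   0 3 * * * /usr/local/bin/pull-truenas-config.sh
# Copies the config DB off the boot drive, so losing the boot pool
# doesn't also take the only copy of the config with it.
scp root@truenas.local:/data/freenas-v1.db \
    /backups/truenas/config-$(date +%F).db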

I'm not sure whether you're only talking about data integrity or also about availability. You would need to check whether the mainboard supports hot-swapping the drives (are they M.2?), and then you could even replace one without shutting the server down. I haven't tried that myself though; I can always afford a few minutes of downtime for hardware changes.

Your data pools are independent of the boot pool in that regard. Personally I go for the cheapest SSD of a brand I know.
 

iliak

Contributor
Joined
Dec 18, 2018
Messages
148
Hot swap support isn't viable for our setup due to the positioning of the drives on the motherboard. As a result, I prioritize stability when considering solutions.

While the likelihood of both Optane drives failing simultaneously is statistically low, it's a scenario worth preparing for. Ideally, I aim for a setup that requires minimal intervention for at least three years, until an upgrade to a newer server or storage hardware is necessary.

To achieve this, I primarily use Ceph storage, leveraging its high throughput. Its redundancy comes at a cost (3x replication leaves only around 300 GB of usable space per 1 TB of raw capacity), but Ceph eliminates single points of failure.

Although hot swap support is available on the front panel, I reserve it for storage data drives. These drives are expected to undergo intensive usage, with a conservative estimate of one drive failing per year.
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
While the likelihood of both Optane drives failing simultaneously is statistically low, it's a scenario worth preparing for.
Config backups ;)

If you want/can afford Optane drives (I don't know what price tag you're looking at), go ahead; otherwise I stick to my recommendation of just using any cheap name-brand SSD. My boot drives were around 15 bucks each, or old drives I had lying around.
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
Despite its efficiency in redundancy (e.g., three replication yielding only around 300GB of usable storage per 1TB), CEPH eliminates single points of failure.
Well, maybe. You still need to keep at least another copy off-site otherwise your server/building is the single point of failure.
 

iliak

Contributor
Joined
Dec 18, 2018
Messages
148
Well, maybe. You still need to keep at least another copy off-site otherwise your server/building is the single point of failure.
The Ceph data can be regenerated easily.

Config backups ;)

If you want/can afford optane drives (I don't know what price tag you are looking at) do that, otherwise I stick to my recommendation of just using any cheap name brand solution. My boot drives were around 15 bucks each or old drives I had lying around.
Around $150-200 each.
 