Backing up a zero-redundancy pool to a single disk? Metadata NVMe cache

Intel
Explorer · Joined Sep 30, 2014 · Messages: 51
I'm seeking advice on the best approach/pool setup for my new NAS. I have always run a striped-mirrors configuration (two mirror vdevs, four disks) on my current/old NAS.

My new build has larger drives plus NVMe, and I want as many IOPS and as much throughput as possible while keeping some backup assurance (I haven't had a hard drive die on me in the past three years). I am considering a zero-redundancy "speed" pool backed up to a "slow" pool.

My hardware:
  • 1 TB PCIe 4.0 NVMe SSD (~3,500 MB/s read/write, passed through to the TrueNAS VM)
  • 4x 18 TB drives
  • TrueNAS SCALE (running as a Proxmox VM)
  • SAS3008 HBA (passed through from Proxmox to the VM)
My workload looks like:
  • Proxmox VMs
  • File storage
I have a single NVMe drive for metadata + small I/O, and there wouldn't be a backup of it. Even if I create two data vdevs (striped mirrors), if the NVMe dies, my entire pool and its data will be toast. Is this correct? This is a stripe of mirrors, not a RAIDZ array.

If the above is true, and a stripe of mirrors doesn't keep a copy of the metadata on the spinning drives (i.e., there is no backup of what's on the NVMe), then I can see another way of achieving my goals, if I am willing to give up some data redundancy/parity.
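
To make the failure mode concrete, here is the layout in zpool terms as I understand it (device and pool names are placeholders, a sketch rather than my exact build):

Code:
# A special vdev is a full pool member: lose it and the pool is gone.
# zpool even refuses the mismatched replication level of a lone
# special device on a mirrored pool unless you force it with -f:
zpool create -f tank mirror sda sdb mirror sdc sdd special nvme0n1

# With a second NVMe, a mirrored special vdev removes that single
# point of failure:
# zpool create tank mirror sda sdb mirror sdc sdd special mirror nvme0n1 nvme1n1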

speed_pool (sketch below)
- single data vdev (stripe of 2x 18 TB disks)
- metadata (special vdev) + SLOG on the NVMe disk (the SLOG is the separate log device that holds the ZIL)
Total: ~36 TB raw (~33 TiB), and I/O should be very fast. Risk: if any disk or the NVMe dies, everything dies.
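
Roughly, in zpool terms (placeholder device names, with the NVMe split into one partition for the special vdev and one for the SLOG; keeping in mind a SLOG only helps synchronous writes):

Code:
# speed_pool: two-disk stripe, no redundancy anywhere.
# nvme0n1p1 -> metadata/small-block special vdev
# nvme0n1p2 -> SLOG (separate log device for the ZIL)
zpool create speed_pool sda sdb special nvme0n1p1 log nvme0n1p2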

slow_pool
- single data vdev (stripe of 2x 18 TB disks)
- an hourly scheduled TrueNAS replication job that syncs changes from speed_pool into slow_pool (roughly the send/receive sketched below)
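
In TrueNAS this would be a periodic snapshot task plus a local replication task, which under the hood amounts to roughly this (snapshot and dataset names are placeholders):

Code:
# Take the hourly snapshot, then send the delta since the previous
# one into slow_pool. -F on recv rolls the target back if needed.
zfs snapshot -r speed_pool@hourly-new
zfs send -R -i speed_pool@hourly-prev speed_pool@hourly-new | zfs recv -F slow_pool/backup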

Are there other alternatives I should consider, or any other recommendations? Is my understanding of how metadata is stored (or not) on the spinning drives of striped vdevs incorrect? Would the NVMe dying kill the entire pool even in a 4x 18 TB + NVMe setup?
 

Heracles
Wizard · Joined Feb 2, 2018 · Messages: 1,401
I have a single NVMe drive for metadata + small I/O, and there wouldn't be a backup of it. Even if I create two data vdevs (striped mirrors), if the NVMe dies, my entire pool and its data will be toast. Is this correct?

Yep...

any other recommendations?

What you are looking at is not what I would consider a backup. Both copies are in the same server and online at the same time. As such, both will be destroyed by the same physical incident, like fire, and are exposed to the same logical incidents, like intrusion or human error. The result is that the second copy has only a low probability of being helpful in case of an incident.

A complete backup strategy is detailed in my signature...

Also, while it is true that 2 vdevs will give you more IOPS than a single one, you will not achieve anything great with only 2 vdevs. And what exact model are these 18 TB drives? Drives that large are often SMR, which is very slow and to be avoided with ZFS.
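
You can read the exact model and firmware off each drive with smartctl (device name is a placeholder):

Code:
# Prints model family, device model, serial, firmware and capacity.
smartctl -i /dev/sda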

If indeed you need significant performance, you need to redesign your complete system...
 

Intel
Explorer · Joined Sep 30, 2014 · Messages: 51

I appreciate your feedback. You are absolutely right that this is no "true backup"; there are single points of failure everywhere. I have run things like this for a very long time... I am thankful for ZFS, which makes importing my disks from one dead server into another with a different chipset very easy.
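
For reference, on the replacement machine it is just this (pool name is a placeholder):

Code:
# List the pools visible on the attached disks, then import by name.
# -f is needed when the pool was never exported from the dead server.
zpool import
zpool import -f tank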

This is primarily for home use (homelab) and funds are limited. In reality what I am really after is "storage tiering" in ZFS, which unfortunately doesn't really exist; a "metadata" (special) vdev is as close as I can get, but as we know, my single NVMe drive for metadata is the Achilles heel of my 4x 18 TB disks.
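
The closest knob I have found is special_small_blocks, which steers small data blocks (not just metadata) onto the special vdev (dataset name is a placeholder):

Code:
# Blocks of 32K or smaller in this dataset land on the NVMe special
# vdev. Keep the threshold below the dataset's recordsize, otherwise
# every block qualifies and the special vdev fills up.
zfs set special_small_blocks=32K speed_pool/vms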

My 18TB drives are model:
WDC WD180EDGZ
 

Heracles
Wizard · Joined Feb 2, 2018 · Messages: 1,401
My 18TB drives are model:
WDC WD180EDGZ

When searching for that model, I found this on Reddit:

The WD180EDGZ is a white label drive SKU. White label drives do not have spec sheets because they are whatever WD was manufacturing that could be repurposed the most cheaply to fill a specific role in an internal drive product. These can be anything from rejected drives that failed testing for a different product line to perfectly good full spec drives. There's no consistency or guarantee whatsoever. It's up to you on if the risks are acceptable for your particular application.

Considering all the crap that WD did in the past, I would not trust these drives at all...
 