General ZFS features - discussion

Status
Not open for further replies.

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
We have had several threads opened concerning which file system is best for a NAS. This thread is intended to discuss what features are available in each file system, and what use they are in a NAS or server.

For example, many file systems are limited to one specific brand of OS; NTFS, for instance, is basically Microsoft Windows only. The Linux kernel has some support for NTFS, but it's not considered stable and reliable.

ZFS, on the other hand, is available on 5 different *nixes: Solaris, Illumos, MacOS, Linux and FreeBSD. It's still limited to *nixes, with no native MS-Windows support. However, I would argue that *nixes make better NAS server OSes than MS-Windows, so for a NAS it's no real loss that ZFS is not available on MS-Windows.

Now for BTRFS, it's only available on Linux, and only really stable on more recent Linux. Meaning that more than 5 years ago, BTRFS was losing data pretty regularly. (Unlike ZFS, which has been in production since June 2006.)

So people, what file system features do you want in your NAS?
And why?
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
For me, the most important feature is data reliability and recoverability.
Thus, full checksumming and dual parity.

My NAS is the primary backup. I want it to live for 5 years without serious hardware updates or failures, (that is, anything that would require restoring or recovering the data).
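To make that concrete, here's a minimal sketch of what checksumming plus dual parity looks like on the ZFS command line. The pool name (tank) and the disk names (da0 through da5) are just placeholders for illustration:

Code:
# Create a dual-parity (RAID-Z2) pool; every block gets a checksum by default.
# "tank" and da0-da5 are placeholder names - substitute your own pool and disks.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# Confirm checksumming is enabled (fletcher4 is the usual default).
zfs get checksum tank

# Periodically read and verify every block; bad blocks are repaired from parity.
zpool scrub tank
zpool status tank    # shows scrub progress and any CKSUM error counts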
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Now for BTRFS, it's only available on Linux, and only really stable on more recent Linux.
For some values of "stable" - it's certainly nowhere near ZFS, or even a traditional filesystem plus good HW RAID.

And that's its big problem: it's simply not correctly designed. They started throwing features at it without first having a vaguely reliable filesystem, so now they "support" things like compression (apparently still with major bugs) while the filesystem can't recover from a loss of parity (!), and the RAID5/6 functionality is even more unreliable and has a write hole.

For me, the most important feature is data reliability and recoverability.
Thus, full checksumming and dual parity.
Definitely - otherwise, why bother? If it's not better than NTFS or something equivalent when it comes to reliability, what would I gain? Certainly not compression - NTFS already supports CoW-esque features and dedup is mostly useless.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
#1 integrity detection
#2 redundancy/reliability/recoverability and integrity recovery
#3 snapshots
#4 replication (see the sketch at the end of this post for #3 and #4)
#5 interoperability (i.e. the ability to coexist with Windows and Mac data)

And somewhere below that, nice-to-haves like the ability to grow storage or reshape pools (i.e. mirror to RAID5 to RAID6 and back to mirrors).

Online updates (i.e. replacing a drive or adding storage without offlining the pool) are not critical to me, but are very nice.

I.e., I would prefer to be able to reshape the pool even if it meant offlining it to do so.
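For #3 and #4 above, here's a minimal sketch of how ZFS exposes snapshots and replication. The dataset names (tank/data, backup/data), the snapshot names and the remote host (backuphost) are placeholders, not a recommended scheme:

Code:
# Take a read-only, point-in-time snapshot of a dataset.
zfs snapshot tank/data@2016-10-01

# Replicate it to another machine (dataset and host names are hypothetical).
zfs send tank/data@2016-10-01 | ssh backuphost zfs receive backup/data

# Later, send only the differences between two snapshots (incremental replication).
zfs snapshot tank/data@2016-10-08
zfs send -i tank/data@2016-10-01 tank/data@2016-10-08 | ssh backuphost zfs receive backup/data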
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
One thing that bit me on my old NAS, (an Infrant ReadyNAS 1000S), was the Linux EXT3 file system check. That NAS stayed powered off and was only powered on for backups and for pushing misc. things to it, (like PD ISO images and other shareware). About every 25 mounts or so, the main file system had to have the EXT3 file system check run. On that old NAS it was a slow operation, partly because my backups are not images, just the original files, plain and numerous.

This could take quite some time, (tens of minutes, even more than an hour). And in the early days it caused some confusion on my part.

Later I added a serial console to the server, via a network terminal server. Then I could see where it was during boot. Annoying that it took so long, but at least I knew the cause.

This contrasts with ZFS, which may take half a minute to import a large or complex pool.

Comparing boot speeds between BTRFS and EXT3/4 (with a journal, and the maximum mount count set to greater than 0), BTRFS seems like an improvement.

Note: This file system check occurred regardless of journaling. The feature was likely added because of distrust in the file system's journaling. Recent versions of Linux's mkfs.ext3/4 seem to set the maximum mount count to -1, which disables this automatic file system check. I don't know when that changed.
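For reference, the mount-count behaviour described above can be inspected and changed with tune2fs; /dev/sda1 below is just a placeholder device name:

Code:
# Show the current and maximum mount counts for an ext3/ext4 file system.
tune2fs -l /dev/sda1 | grep -i 'mount count'

# Force a check roughly every 25 mounts (the old behaviour described above).
tune2fs -c 25 /dev/sda1

# Disable the mount-count-based check (what newer mkfs defaults amount to),
# and optionally the time-interval-based check as well.
tune2fs -c -1 /dev/sda1
tune2fs -i 0 /dev/sda1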
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
...
I.e., I would prefer to be able to reshape the pool even if it meant offlining it to do so.
Yes, I wish there were an option to re-shape a ZFS pool even if it meant off-lining / exporting it.

In fact, that should be doable for these configurations:
  • Striped to RAID-Z1, (if you add a disk for the parity). Also RAID-Z1 to RAID-Z2 and RAID-Z2 to RAID-Z3.
  • RAID-Z1 to Striped, (you gain one additional disk's worth of space). Also RAID-Z3 to RAID-Z2 and RAID-Z2 to RAID-Z1.
Other changes should be possible too. For example, Mirror to RAID-Z1 could be done by dropping the mirror disks and using one or more of the newly freed disks for the RAID-Zx parity.

Please note that in some regards, this off-line work would not be as time consuming as you might think. If I were doing it, (though I don't have the skills), the part about adding parity could be done with the pool in a degraded state. Meaning once you change the pool layout, you import the pool and then replace the pseudo-failed disk. It then re-silvers while the pool and data are on-line, (roughly as sketched at the end of this post).

Of course, all pre-existing data will have its new parity on one disk. Oh well. I wish we had that block pointer re-write / optimization.
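Just to illustrate that last step (the replace-and-resilver part, not the reshape itself, which ZFS can't do today), this is roughly what it looks like; tank, da3 (the pseudo-failed disk) and da7 (its replacement) are placeholder names:

Code:
# See which member the pool reports as FAULTED/OFFLINE/UNAVAIL.
zpool status tank

# Replace the missing or pseudo-failed member with a new disk.
zpool replace tank da3 da7

# The pool stays imported and the data stays on-line while it resilvers.
zpool status tank    # shows resilver progress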
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
In fact, that should be doable for these configurations:
  • Striped to RAID-Z1, (if you add a disk for the parity). Also RAID-Z1 to RAID-Z2 and RAID-Z2 to RAID-Z3.
  • RAID-Z1 to Striped, (you gain one additional disk's worth of space). Also RAID-Z3 to RAID-Z2 and RAID-Z2 to RAID-Z1.
Actually, it's apparently surprisingly difficult. RAIDZ block pointers are written in a way that depends on the RAIDZ width or something of the sort, so they'd have to be rewritten - and we all know what a can of worms that is.
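As a rough back-of-the-envelope illustration of why (a simplification that ignores RAID-Z skip/padding sectors, so treat the numbers as approximate): the allocated size recorded in each block pointer's DVA depends on how many parity sectors the block needed, which in turn depends on the vdev width.

Code:
% D = data sectors in the block, P = parity level, N = disks in the vdev
\[ \mathrm{asize} \approx D + P \cdot \left\lceil \frac{D}{N - P} \right\rceil \]
% Example: a 128 KiB block with 4 KiB sectors gives D = 32.
%   4-wide RAID-Z1:  32 + ceil(32/3) = 43 sectors allocated
%   5-wide RAID-Z1:  32 + ceil(32/4) = 40 sectors allocated
% So the existing block pointers effectively encode the old width,
% and would all need to be rewritten after a reshape.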
 