Tiered Storage

Is Tiered Storage a good idea for FreeNAS to include/add?


Total voters: 9

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Hi all, I'm posting this knowing I'll get a few confused or angry responses pointing out that FreeNAS clearly doesn't do this now... I know that.

I'm curious whether this has ever been on a feature request list, or whether it's something available in TrueOS (and reserved for that platform only, I presume, if that's the case).

I can see great potential for benefit in many of the cases where we see complaints about performance related to striking the "right" balance between redundancy and storage capacity on one hand and performance on the other, particularly for VMs, but more generally too.

So how would I see this working in order to add the benefit I'm talking about without the risk?

Let's say we want a high-performing pool with good redundancy, but not so much that we need to go to the usual stripe of 3-way mirrors to allow for two-device failure within a VDEV...

So what I would see is a pool that is able to manage two RAIDZ2 VDEVs: one of "high speed" drives and another of "large capacity" drives.


So for example:
Code:
pool1
    RAIDZ2-1-HighSpeed
        da0 (512GB SSD)
        da1 (512GB SSD)
        da2 (512GB SSD)
        da3 (512GB SSD)
    RAIDZ2-2-HighCapacity
        da4 (8TB HDD)
        da5 (8TB HDD)
        da6 (8TB HDD)
        da7 (8TB HDD)


The logic behind this would then direct all writes to the HighSpeed VDEV, meaning fast writes and possibly no need for a SLOG.

The logic would then manage all reads as usual from the pool, taking data from where it lies.

Some process would then run in the background, either on a schedule or in real time, to move data from the HighSpeed VDEV to the HighCapacity VDEV in order to maintain a good (but perhaps configurable) amount of free space on the HighSpeed VDEV.
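
To make that concrete, here's a minimal sketch of what such a demotion process could look like, assuming (purely for illustration, since ZFS doesn't expose per-VDEV placement like this today) that each tier were visible as its own directory tree. The paths, the 20% free-space watermark and the least-recently-accessed ordering are all made up:

Code:
import os
import shutil

# Hypothetical mount points for the two tiers -- purely illustrative,
# since ZFS offers no per-VDEV file placement today.
FAST_TIER = "/mnt/pool1/fast"
CAPACITY_TIER = "/mnt/pool1/capacity"
FREE_SPACE_TARGET = 0.20  # keep at least 20% of the fast tier free


def free_fraction(path):
    """Fraction of the filesystem at `path` that is still free."""
    st = os.statvfs(path)
    return st.f_bavail / st.f_blocks


def files_by_last_access(root):
    """All files under `root`, least recently accessed first."""
    paths = [os.path.join(dirpath, name)
             for dirpath, _, names in os.walk(root)
             for name in names]
    return sorted(paths, key=lambda p: os.stat(p).st_atime)


def demote_until_target():
    """Move cold files down until the fast tier meets its free-space target."""
    for path in files_by_last_access(FAST_TIER):
        if free_fraction(FAST_TIER) >= FREE_SPACE_TARGET:
            break
        dest = os.path.join(CAPACITY_TIER, os.path.relpath(path, FAST_TIER))
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        shutil.move(path, dest)  # demotion: fast tier -> capacity tier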

In addition, there would be some kind of evaluation done (similar to, and perhaps in conjunction with, ARC) which then decides to move data in either direction as it is accessed more or less frequently: after an ARC hit, the file should sit on the HighSpeed VDEV until it needs to be moved, either to maintain the free space from the previous point or to make room for a more recent hit coming up from the HighCapacity VDEV. (I guess ARC and L2ARC already have this logic covered somehow, but perhaps at block level, so this would just translate a hit on one block into moving the entire file.)
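
The promotion side of that could look something like the sketch below, continuing the same made-up layout; the one-hour "recently accessed" window is arbitrary, and a real implementation would presumably hook into ARC statistics rather than polling file atimes:

Code:
import os
import shutil
import time

FAST_TIER = "/mnt/pool1/fast"          # same hypothetical paths as above
CAPACITY_TIER = "/mnt/pool1/capacity"
RECENT_ACCESS_WINDOW = 3600            # seconds; an arbitrary "hot" threshold


def promote_recently_accessed():
    """Promote whole files read recently on the capacity tier -- the
    file-level analogue of a block getting an ARC hit."""
    now = time.time()
    for dirpath, _, names in os.walk(CAPACITY_TIER):
        for name in names:
            path = os.path.join(dirpath, name)
            if now - os.stat(path).st_atime < RECENT_ACCESS_WINDOW:
                dest = os.path.join(FAST_TIER,
                                    os.path.relpath(path, CAPACITY_TIER))
                os.makedirs(os.path.dirname(dest), exist_ok=True)
                shutil.move(path, dest)  # promotion: capacity -> fast tier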

I suppose we could consider marking VDEVs as HighSpeed or HighCapacity, allowing more than two in a pool, and just applying the rules above in the same way, not being concerned about which VDEV amongst the HighSpeed ones or the HighCapacity ones gets the file in each case.
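
In sketch form, that marking could be as simple as a tier map, with placement free to pick any member of the target tier (names and paths here are, again, invented for illustration):

Code:
import random

# Hypothetical tier membership: each VDEV carries a tier mark, and the
# mover doesn't care which member of a tier a file lands on.
TIERS = {
    "HighSpeed": ["/mnt/pool1/fast0", "/mnt/pool1/fast1"],
    "HighCapacity": ["/mnt/pool1/cap0", "/mnt/pool1/cap1"],
}


def pick_destination(tier):
    """Any VDEV within the target tier is acceptable."""
    return random.choice(TIERS[tier])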

I would also suggest, as an additional "bonus credit" option, allowing a third or even more layers of tiering, like this:

Code:
pool1
    RAIDZ2-1-HighSpeed
        da0 (512GB SSD)
        da1 (512GB SSD)
        da2 (512GB SSD)
        da3 (512GB SSD)
    RAIDZ2-2-HighCapacity
        da4 (8TB HDD)
        da5 (8TB HDD)
        da6 (8TB HDD)
        da7 (8TB HDD)
    RAIDZ2-3-VeryHighCapacity
        da8 (14TB HDD)
        da9 (14TB HDD)
        da10 (14TB HDD)
        da11 (14TB HDD)


Where VeryHighCapacity VDEV disks are archive HDDs with slow/reduced read performance.


What do people think... would this make FreeNAS "THE" option for iSCSI/virtual storage that performs like a rocket at a reasonable price without compromising redundancy?

I think what I'm describing here is somewhat like Apple's CoreStorage option known as Fusion Drive, but obviously it would be much better since it's ZFS and FreeNAS.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
What you describe sounds like it could be a good idea in concept, but the application layer (FreeNAS) wouldn't be the place to do this--it would need to be done in ZFS itself.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
@Allan Jude ... Looks like your backing would be important if this is ever to be possible. What do you think?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I guess it would also be possible to make this happen at the sharing level: specify one high-speed pool and one high-capacity pool to a custom SMB, NFS or iSCSI service, which would then have some application service moving the data around as suggested in my OP.

This would require someone with enough knowledge and motivation to mess with what are off-the-rack products compiled into FreeNAS today. I'm not very hopeful, but I think it is technically possible.
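
For what it's worth, a crude version of that share-level approach could be lashed together from two ordinary pools and a scheduled mover script, something like the sketch below (dataset paths are hypothetical, and whether clients can traverse the symlinks depends on the share's settings, e.g. SMB wide links):

Code:
import os
import shutil

SHARE_ROOT = "/mnt/fastpool/share"    # what SMB/NFS actually exports
ARCHIVE_ROOT = "/mnt/bigpool/share"   # where cold files get demoted


def demote_with_symlink(path):
    """Move a cold file to the big pool, leaving a symlink behind so
    clients of the share still see a single namespace."""
    dest = os.path.join(ARCHIVE_ROOT, os.path.relpath(path, SHARE_ROOT))
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    shutil.move(path, dest)
    os.symlink(dest, path)  # demoted file still reachable at its old path

The obvious cost is that you lose the single-pool transparency that makes the idea in my OP attractive in the first place.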
 

FraiseVache

Cadet
Joined
Nov 8, 2022
Messages
4
It's also supported natively by Gluster if I'm remembering correctly. So the only thing needed would be a UI for it and issuing the correct config for Gluster. It would be awesome to have this.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
FraiseVache said:
It's also supported natively by Gluster if I'm remembering correctly. So the only thing needed would be a UI for it and issuing the correct config for Gluster. It would be awesome to have this.

Gluster is slow as hell. Plus, complexity. Plus, well, not to put too fine a point on it (sorry), you've included the death-phrase "the only thing needed would be"...
 

FraiseVache

Cadet
Joined
Nov 8, 2022
Messages
4
FraiseVache said:
It's also supported natively by Gluster if I'm remembering correctly. So the only thing needed would be a UI for it and issuing the correct config for Gluster. It would be awesome to have this.
After looking into it, it seems support was deprecated. That's sad. Maybe something like autotier could also do the trick nicely!
 

FraiseVache

Cadet
Joined
Nov 8, 2022
Messages
4
jgreco said:
Gluster is slow as hell. Plus, complexity. Plus, well, not to put too fine a point on it (sorry), you've included the death-phrase "the only thing needed would be"...
You’re not wrong there. Sorry. Autotier seems easier to use.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
StorageTek used to have a tiered file system, called SAM-QFS. It allowed at least 3 tiers, and I know we used at least some of these:

FC disk - High speed, dual pathed, lower capacity
SATA disk - Slower, fake dual pathed, higher capacity
FC Tape - High seek speed, dual hub, low capacity
FC Tape - Slower seek speed, single hub, higher capacity

Data automatically migrated around as needed.

It's been too many years for me to remember whether the last tier of storage always had a copy, with the faster tiers acting as cache tiers. I do remember that after one copy was written, the writing application was told the write was complete, even if the slower tiers had not yet been written/updated.
 