A (totally unofficial) Conversation about Relative ZPool Performance.

NickF

Guru
Joined
Jun 12, 2014
Messages
763
─────────────────────────────────────────────────────────────────────────────

DISCLAIMER:
I am not an authority figure on ZFS. I've made several generalizations and assumptions here that may not be accurate. Please, if you find discrepancies, join the conversation and help correct them. This is a collaborative effort for the benefit of all.

Introduction:
I've been a member of this community for a while, and I often notice newcomers looking for digestible insights into ZFS. So here's my attempt to shed some light. Feedback from forum regulars and ZFS experts would be greatly appreciated! :smile: I've standardized everything around 4 vdevs for simplicity, though there's no particular reason I chose that number; a short script after the assumptions list shows how the numbers combine.

─────────────────────────────────────────────────────────────────────────────

Assumptions:


Baseline HDD Performance: A 4-drive stripe equals 100% performance, equating to 600 MiBps for both read and write. These figures assume sequential operations and ~150 MiBps from a single hard drive.

Baseline HDD IOPS: A 4-drive stripe equals a baseline of 400 IOPS for both read and write, assuming ~100 IOPS from a single hard drive.

Baseline SSD Performance: A 4-drive stripe equals 100% performance, equating to 2,000 MiBps for both read and write. These figures assume sequential operations and ~500 MiBps from a single solid state drive.

Baseline SSD IOPS: A 4-drive stripe equals a baseline of 200,000 IOPS for both read and write, assuming ~50,000 IOPS from a single solid state drive.

RAIDZ3 Assumptions:
- Sequential Read: 90% of a 4-drive stripe.
- Sequential Write: 60% due to three parity calculations.
- Random Write: 50%.
- Random IOPS: 50%.

RAIDZ2 Assumptions:
- Sequential Read: 90% of stripe (rounded down from 92%).
- Sequential Write: 65%.
- Random Write: 55%.
- Random IOPS: 55%.

RAIDZ1 Assumptions:
- Sequential Read: 95% of stripe.
- Sequential Write: 75%.
- Random Write: 65%.
- Random IOPS: 65%.

Mirroring Assumptions:
- Read: 100% (same as stripe).
- Sequential Write: 90%.
- Random Write: 80%.
- Random IOPS: 80%.

FOR NOW I HAVE REMOVED THE SSD BAR CHARTS, SINCE THE GENERAL ASSUMPTIONS MAY NEED TO BE RE-ASSESSED GIVEN SSDs' MUCH GREATER OVERALL PERFORMANCE.

THESE ASSUMPTIONS ARE NOT FACTS AND SHOULD NOT BE USED AS SUCH.
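
If it helps make the model explicit, here's a minimal Python sketch of how I'm combining these assumptions: each vdev contributes roughly one drive's worth of throughput/IOPS, scaled by the topology factor. Every number in it comes from the assumption lists above, not from a benchmark.

Code:
# Minimal sketch of the model above. All numbers are assumptions, not benchmarks.
DRIVE_MIBPS = 150   # assumed sequential MiB/s per HDD
DRIVE_IOPS = 100    # assumed IOPS per HDD

# Per-vdev scaling factors relative to a plain stripe:
# (seq read, seq write, random write, IOPS)
FACTORS = {
    "stripe": (1.00, 1.00, 1.00, 1.00),
    "mirror": (1.00, 0.90, 0.80, 0.80),
    "raidz1": (0.95, 0.75, 0.65, 0.65),
    "raidz2": (0.90, 0.65, 0.55, 0.55),
    "raidz3": (0.90, 0.60, 0.50, 0.50),
}

def pool_estimate(layout, vdevs):
    """Each vdev contributes roughly one drive's worth of throughput/IOPS,
    scaled by the topology factor -- that is the whole model."""
    r, w, rw, io = FACTORS[layout]
    return {
        "seq_read_MiBps": vdevs * DRIVE_MIBPS * r,
        "seq_write_MiBps": vdevs * DRIVE_MIBPS * w,
        "rand_write_MiBps": vdevs * DRIVE_MIBPS * rw,
        "iops": vdevs * DRIVE_IOPS * io,
    }

for layout in FACTORS:
    print(layout, pool_estimate(layout, vdevs=4))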
─────────────────────────────────────────────────────────────────────────────

ZFS HDD Performance Comparison:
*ZFS performance and capacity scale with each VDEV, not with each individual disk*
*This representation is designed to show more clearly that the number of disks required to maintain baseline performance grows quickly*

Read Performance (MiBps) - Total Disks:
|------------------ 100% (600 MiBps)
|█████████████████████████ 4 Drive Stripe (4 disks)
|███████████████████████ 4 vdev Mirror (8 disks)
|██████████████████████ 4 vdev RAIDZ1 (12 disks)
|█████████████████████ 4 vdev RAIDZ2 (16 disks)
|████████████████████ 4 vdev RAIDZ3 (20 disks)

Sequential Write Performance (MiBps) - Total Disks:
|------------------ 100% (600 MiBps)
|█████████████████████████ 4 Drive Stripe (4 disks)
|███████████████████████ 4 vdev Mirror (8 disks)
|██████████████████████ 4 vdev RAIDZ1 (12 disks)
|█████████████████████ 4 vdev RAIDZ2 (16 disks)
|████████████████████ 4 vdev RAIDZ3 (20 disks)

Random Write Performance (MiBps) - Total Disks:
|------------------ 100% (600 MiBps)
|█████████████████████████ 4 Drive Stripe (4 disks)
|███████████████████████ 4 vdev Mirror (8 disks)
|██████████████████████ 4 vdev RAIDZ1 (12 disks)
|█████████████████████ 4 vdev RAIDZ2 (16 disks)
|████████████████████ 4 vdev RAIDZ3 (20 disks)

Read IOPS - Total Disks:
|------------------ 100% (400 IOPS)
|█████████████████████████ 4 Drive Stripe (4 disks)
|███████████████████████ 4 vdev Mirror (8 disks)
|██████████████████████ 4 vdev RAIDZ1 (12 disks)
|█████████████████████ 4 vdev RAIDZ2 (16 disks)
|████████████████████ 4 vdev RAIDZ3 (20 disks)

Sequential Write IOPS - Total Disks:
|------------------ 100% (400 IOPS)
|█████████████████████████ 4 Drive Stripe (4 disks)
|███████████████████████ 4 vdev Mirror (8 disks)
|██████████████████████ 4 vdev RAIDZ1 (12 disks)
|█████████████████████ 4 vdev RAIDZ2 (16 disks)
|████████████████████ 4 vdev RAIDZ3 (20 disks)

Random Write IOPS - Total Disks:
|------------------ 100% (400 IOPS)
|█████████████████████████ 4 Drive Stripe (4 disks)
|███████████████████████ 4 vdev Mirror (8 disks)
|██████████████████████ 4 vdev RAIDZ1 (12 disks)
|█████████████████████ 4 vdev RAIDZ2 (16 disks)
|████████████████████ 4 vdev RAIDZ3 (20 disks)

─────────────────────────────────────────────────────────────────────────────
ZFS HDD Performance Comparison for 12 Disks (Logarithmic Scale):
*Normalized to a fixed total of 12 disks instead of a fixed vdev count*
*Using a logarithmic scale to emphasize the performance drop-off across different topologies*
*A short sketch after these charts shows the arithmetic*


Read Performance (Logarithmic Scale):

|------------------ 100% Baseline
|█████████████████████████ 12 Drive Stripe
|███████████████████████▒ 6 vdev Mirror
|█████████████████████▒▒ RAIDZ1 (3 vdevs of 4 drives each)
|███████████████████▒▒▒ RAIDZ2 (3 vdevs of 4 drives each)
|█████████████████▒▒▒▒ RAIDZ3 (3 vdevs of 4 drives each)

Sequential Write Performance (Logarithmic Scale):
|------------------ 100% Baseline
|█████████████████████████ 12 Drive Stripe
|███████████████████████▒ 6 vdev Mirror
|████████████████████▒▒▒ RAIDZ1 (3 vdevs of 4 drives each)
|██████████████████▒▒▒▒ RAIDZ2 (3 vdevs of 4 drives each)
|████████████████▒▒▒▒▒ RAIDZ3 (3 vdevs of 4 drives each)

Random Write Performance (Logarithmic Scale):
|------------------ 100% Baseline
|█████████████████████████ 12 Drive Stripe
|████████████████████▒▒▒ 6 vdev Mirror
|██████████████████▒▒▒▒ RAIDZ1 (3 vdevs of 4 drives each)
|████████████████▒▒▒▒▒ RAIDZ2 (3 vdevs of 4 drives each)
|██████████████▒▒▒▒▒▒ RAIDZ3 (3 vdevs of 4 drives each)

IOPS (Logarithmic Scale):
|------------------ 100% Baseline
|█████████████████████████ 12 Drive Stripe
|█████████████████████▒▒ 6 vdev Mirror
|███████████████████▒▒▒ RAIDZ1 (3 vdevs of 4 drives each)
|██████████████████▒▒▒▒ RAIDZ2 (3 vdevs of 4 drives each)
|████████████████▒▒▒▒▒ RAIDZ3 (3 vdevs of 4 drives each)
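
The same assumptions, normalized to a fixed 12-disk budget rather than a fixed vdev count: as the vdevs get wider, fewer of them fit into 12 disks, which is what drives the drop-off above. A rough Python sketch (again, assumptions only, not measurements):

Code:
# Same assumptions, normalized to a fixed 12-disk budget.
DRIVE_MIBPS, DRIVE_IOPS = 150, 100
TOTAL_DISKS = 12

LAYOUTS = {
    # name: (disks per vdev, seq read, seq write, random write, IOPS factors)
    "stripe": (1, 1.00, 1.00, 1.00, 1.00),
    "mirror": (2, 1.00, 0.90, 0.80, 0.80),
    "raidz1": (4, 0.95, 0.75, 0.65, 0.65),
    "raidz2": (4, 0.90, 0.65, 0.55, 0.55),
    "raidz3": (4, 0.90, 0.60, 0.50, 0.50),
}

for name, (width, r, w, rw, io) in LAYOUTS.items():
    vdevs = TOTAL_DISKS // width   # 12 stripe, 6 mirror, 3 each RAIDZ level
    print(f"{name:6s} {vdevs:2d} vdevs  "
          f"read {vdevs * DRIVE_MIBPS * r:6.0f} MiB/s  "
          f"write {vdevs * DRIVE_MIBPS * w:6.0f} MiB/s  "
          f"IOPS {vdevs * DRIVE_IOPS * io:5.0f}")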

─────────────────────────────────────────────────────────────────────────────
Note: The above are generalizations, will vary depending on real-world factors, and are predicated on my assumptions alone. These figures are *NOT* meant to be taken as actual performance estimates of these pool layouts in real terms; they are only a visualization of the relative performance expected between them, given the assumptions.
─────────────────────────────────────────────────────────────────────────────

ZFS Topologies Visualization

Legend:
[D] = Data disk
[P] = Parity
[M] = Mirror copy
(A short capacity sketch follows these diagrams.)
─────────────────────────────────────────────────────────────────────────────
Striping (4 Drives)
| [D] | [D] | [D] | [D] |
Total Drives: 4
Pool Size: 4TB
Raw Size: 4TB
─────────────────────────────────────────────────────────────────────────────
Mirroring (4 vdevs of 2 drives each)
| [D][M] | [D][M] | [D][M] | [D][M] |
Total Drives: 8
Pool Size: 4TB
Raw Size: 8TB
─────────────────────────────────────────────────────────────────────────────
RAIDZ1 (3 Disks per vdev, 4 vdevs wide)
| [D] [D] [P] | [D] [D] [P] | [D] [D] [P] | [D] [D] [P] |
Total Drives: 12
Pool Size: 8TB
Raw Size: 12TB
─────────────────────────────────────────────────────────────────────────────
RAIDZ2 (4 Disks per vdev, 4 vdevs wide)
| [D][D] [P][P] | [D][D] [P][P] | [D][D] [P][P] | [D][D] [P][P] |
Total Drives: 16
Pool Size: 8TB
Raw Size: 16TB
─────────────────────────────────────────────────────────────────────────────
RAIDZ3 (5 Disks per vdev, 4 vdevs wide)
| [D][D] [P][P][P] | [D][D] [P][P][P] | [D][D] [P][P][P] | [D][D] [P][P][P] |
Total Drives: 20
Pool Size: 8TB
Raw Size: 20TB
─────────────────────────────────────────────────────────────────────────────
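
For what it's worth, here's the capacity math behind the diagrams as a small Python sketch. I'm assuming 1TB drives (that's what makes the 4TB/8TB figures line up); real-world usable space will be a bit lower once metadata, padding, and the usual free-space headroom are accounted for.

Code:
# Capacity math for the layouts above, assuming 1 TB drives (my assumption,
# chosen to match the 4TB/8TB figures; real usable space is slightly lower).
DRIVE_TB = 1

def capacity(vdevs, disks_per_vdev, redundancy_per_vdev):
    raw = vdevs * disks_per_vdev * DRIVE_TB
    usable = vdevs * (disks_per_vdev - redundancy_per_vdev) * DRIVE_TB
    return raw, usable

layouts = {
    "stripe (4 x 1-disk vdevs)":  (4, 1, 0),
    "mirror (4 x 2-way mirrors)": (4, 2, 1),  # each mirror "spends" one disk on redundancy
    "raidz1 (4 x 3-disk vdevs)":  (4, 3, 1),
    "raidz2 (4 x 4-disk vdevs)":  (4, 4, 2),
    "raidz3 (4 x 5-disk vdevs)":  (4, 5, 3),
}

for name, args in layouts.items():
    raw, usable = capacity(*args)
    print(f"{name}: raw {raw} TB, usable ~{usable} TB")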
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
I think I have gone blind
:cool:
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
I think I have gone blind
:cool:
LOL
I am using the dark theme...and didn't think too much about people who are not...whoops. I'll re-do the formatting when I can.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
We have official documentation by iX regarding this, and as I am sure you know, those numbers heavily depend on the use case (file sizes, etc.).

Not saying this thread is useless.
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
Sure...it's an attempt to build a relative understanding of performance between topologies, not real-world performance. These numbers are arbitrary estimates used to create a visual representation, which is why I didn't include numbers for each bar graph, only the baseline.

Just trying to make something digestible for people who have no idea about ZFS.

The best analogy I can think of for this approach is teaching a 5-year-old about a complex topic...
Think the Civil War or WWII.

Now, think of the approach to teaching a high-school student...and finally think of the approach to teaching a college student.

All of these approaches are different and delve deeper into nuanced understanding.

This post was only designed to provide a foundational understanding for people who have no exposure to the topic, much like teaching a kindergartner about the atrocities of war.

IMO there is value in building up knowledge step-by-step. We all have to make assumptions and generalizations as human beings. No one can be an expert on every topic, otherwise there would not be experts or SMEs...they would just be people. In the context of the current conversation, I don't think you should need to be a ZFS guru to understand a generalized way of what type of performance to expect in TrueNAS.

I worked in K12 education from a technology perspective for over a decade, so this is a fusion of those different worlds. My wife is also a Luddite and a teacher, and this approach is not vastly different from the one I use to explain what I do at work in general. LOL. :wink:

My assumptions here can ABSOLUTELY use some work...and that is why I opened this to the floor.

I do think there is VALUE in this approach...or I wouldn't have done so. But I am also open to the discussion of whether or not this represents any real value at all.
 
Joined
Jun 15, 2022
Messages
674
[Attached image: TN-drives.jpg]
 
