DRAM in SSD zpools?

dgrab

Dabbler
Joined
Apr 21, 2023
Messages
26
Is it generally a good idea to buy SSDs with DRAM if they're going to be used in raidz pools?

Also, should TLC be preferred over QLC, as with regular PC builds? (At least for the OS drive.)
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
In general terms, you get what you pay for. The best SSDs are enterprise grade and thus expensive.
DRAM = more expense
TLC = more expense

Don't use Samsung QVOs, which are shite. Other Samsungs are fine.

For the OS drive, it doesn't really matter which cheap SSD you use (within reason), as long as you move the system dataset off the boot device if it's a cheap and nasty SSD. Make sure you always have a copy of the config file.
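If you want to automate that last part, here's a rough sketch, not an official TrueNAS tool. It assumes TrueNAS CORE, where the config database normally lives at /data/freenas-v1.db (check your version), and the backup dataset path is made up; the GUI's System > General > Save Config does the same job.

Code:
#!/usr/bin/env python3
# Rough sketch of a config backup step, not an official TrueNAS tool.
# Assumption: TrueNAS CORE keeps its config database at /data/freenas-v1.db;
# check your version, or just use System -> General -> Save Config in the GUI.
import shutil
from datetime import date
from pathlib import Path

CONFIG_DB = Path("/data/freenas-v1.db")        # assumed location
BACKUP_DIR = Path("/mnt/tank/backups/config")  # hypothetical backup dataset

def backup_config() -> Path:
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    dest = BACKUP_DIR / f"config-{date.today().isoformat()}.db"
    shutil.copy2(CONFIG_DB, dest)              # copy2 keeps the timestamp
    return dest

if __name__ == "__main__":
    print(f"Config copied to {backup_config()}")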
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
It really depends on your use case, but generally TLC (or MLC, or SLC) is advised over QLC since it has higher endurance.

Also, from what I understand, it's better not to mix drive-side caching tricks with ZFS, since ZFS wants direct, predictable control of the disks (SMR drives and hardware RAID controllers are prime examples of what to avoid).
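To put the endurance difference in rough numbers, here's a quick back-of-the-envelope sketch. The TBW figures are illustrative ballpark values for consumer 1TB drives, not specs for any particular model.

Code:
# Back-of-the-envelope endurance comparison. The TBW numbers are illustrative
# ballpark figures for consumer 1TB drives, not vendor specs.
def dwpd(tbw: float, capacity_tb: float, warranty_years: float = 5) -> float:
    """Drive writes per day sustainable over the warranty period."""
    return tbw / (capacity_tb * warranty_years * 365)

drives = {
    "1TB QLC (~360 TBW class)": 360,
    "1TB TLC (~600 TBW class)": 600,
    "1TB enterprise TLC (~1825 TBW class)": 1825,
}

for name, tbw in drives.items():
    print(f"{name}: {dwpd(tbw, 1.0):.2f} drive writes per day")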
 

dgrab

Dabbler
Joined
Apr 21, 2023
Messages
26
In general terms, you get what you pay for. The best SSDs are enterprise grade and thus expensive.
DRAM = more expense
TLC = more expense
I know, I'm just wondering whether the ZFS filesystem in particular gets any benefit from DRAM on an SSD.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I know, I'm just wondering whether the ZFS filesystem in particular gets any benefit from DRAM on an SSD.

In particular? Probably not. But of course there are benefits to DRAM on an SSD.
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
It all comes down to your workload, at the end of the day. Writes suffer more than reads on QLC (and TLC!) drives, and SSD manufacturers are well aware of this fact. Writes can get so bad that they literally perform worse than on a spinning HDD.

To mitigate this, the SSD firmware sets aside a buffer of NAND programmed as SLC. When that cache isn't full, it performs at line rate on SATA devices and pretty stinkin' fast on NVMe drives as well. If your workload doesn't often commit enough writes to fill the buffer, you might not even notice you are using QLC. If you are using mirror VDEVs, the aggregate size of that buffer scales linearly with the number of mirrors, and write endurance scales linearly in the same fashion. The drive firmware's logic and ZFS's logic meld pretty well.

Drives with DRAM caches versus ones without follow this same trend. DRAM is faster than SLC and acts as a first tier of caching, which is then flushed to SLC or directly to TLC/QLC. If your write queue depths are shallow enough for the drives to commit the writes to their final destination, you are fine.
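Here's a toy model of that SLC-cache behaviour, just to show the scaling. Every number in it (cache size, fold rate, incoming write speed) is hypothetical.

Code:
# Toy model of the pSLC write-cache behaviour described above.
# All numbers are hypothetical; real cache sizes and speeds vary by drive and fill level.
def burst_seconds(cache_gb: float, write_mb_s: float, drain_mb_s: float = 0.0) -> float:
    """Seconds of sustained writes absorbed before the SLC cache is exhausted,
    accounting for the drive folding data to QLC in the background."""
    net_fill = write_mb_s - drain_mb_s
    if net_fill <= 0:
        return float("inf")   # the drive drains as fast as you write; cache never fills
    return cache_gb * 1024 / net_fill

# Hypothetical QLC SATA drive: ~40 GB dynamic SLC cache, 500 MB/s incoming writes,
# ~80 MB/s background fold rate to QLC.
single = burst_seconds(40, 500, 80)

# With 4 mirror VDEVs, ZFS stripes the writes, so each mirror sees ~1/4 of the
# stream while the pool's aggregate cache is 4x bigger.
striped = burst_seconds(40, 500 / 4, 80)

print(f"single mirror:  cache exhausted after ~{single:.0f} s")
print(f"4 mirror VDEVs: each drive's cache lasts ~{striped:.0f} s")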

Generally I wouldn't recommend QLC NAND, or drives without DRAM, but if the system is big enough the downsides aren't so bad. However, I personally found that a single mirror VDEV of Optane 905Ps was faster in every possible regard than a 24-drive, 12-way mirror of crappy consumer 120GB DRAM-less TLC drives. Sometimes simplicity is your best bet xD.
 