Tiered storage - Mech + SSD in same pool or in separate pools?

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Wow, this thread went 0 to 60 all of a sudden today. Thanks, everybody, for all the input.

I thought this test system was sitting pretty at 96 GB of RAM, considering that my previous systems had 32 GB to 64 GB. But if the recommendation in jgreco's post #4 is 128 GB, or even 256 GB, then I've got a long way to go, considering all the current DIMM slots are full... The server can do it; it's just a matter of money that I don't have at the moment.

Well, keep in mind that I was talking about a theoretical working set size of 1TB. That might be pretty large. One thing to think about is what time period you want to define the working set over. Do you want the NAS to avoid having to pull stuff from the pool ... once an hour? Once a day? Because once an hour is a smaller working set. My VMs here, for example, do a daily host scan which causes a lot of data to be read, but only once a day. I can get that covered by L2ARC, but only by throwing lots of it at the problem. If the pool isn't busy, maybe it's worth letting that get read from disk.
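To put rough numbers on that, here's a minimal sketch of the arithmetic. Every figure in it is invented for illustration, not measured from any real system; plug in your own.

```python
# Hypothetical L2ARC sizing arithmetic -- all figures are made up for
# illustration; substitute your own measurements.

ram_gb = 96                    # total system RAM
arc_gb = ram_gb * 0.8          # rough ARC ceiling; the real limit varies
hot_data_gb = 150              # data re-read many times per hour (guess)
daily_scan_gb = 400            # a once-a-day VM host scan (guess)

# Covering just the hourly working set needs relatively little L2ARC:
hourly_need_gb = max(0, hot_data_gb - arc_gb)

# Covering the once-a-day scan too means a much bigger cache:
daily_need_gb = max(0, hot_data_gb + daily_scan_gb - arc_gb)

print(f"L2ARC to cover the hourly working set: ~{hourly_need_gb:.0f} GB")
print(f"L2ARC to also absorb the daily scan:  ~{daily_need_gb:.0f} GB")
```

The gap between those two numbers is the point: the time window you pick drives the cache size. And remember that L2ARC headers live in ARC, so a big L2ARC eats some of the RAM you're already short on.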

Regarding the part(s) about 25% vs 50%-80% free space in a pool translating to increased write performance, I understand the bit about the physical limitations of spinning platters and flailing actuator arms swinging heads wildly. Is this free-space-to-performance relationship still true if the pool is all SSD? SSDs don't have the seek times, or the slowdown as the drive fills, that mechanical drives have, right?

But ZFS still has to do more work to find free space as the pool fills: more I/O and more analysis of the metadata. So it will be at least somewhat slower, even on SSD.
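If you want to watch that on a live system, a minimal sketch (assuming a pool named "tank" and the zpool binary on the PATH; Python is just wrapping the command here):

```python
# Read the two pool stats most relevant to allocation cost: how full
# the pool is and how fragmented its free space is. "tank" is an
# example pool name -- substitute your own.
import subprocess

line = subprocess.run(
    ["zpool", "list", "-H", "-o", "name,capacity,fragmentation", "tank"],
    capture_output=True, text=True, check=True,
).stdout.strip()

name, capacity, fragmentation = line.split("\t")
print(f"pool={name} used={capacity} free-space frag={fragmentation}")
```

As both of those numbers climb, finding free space costs more metadata I/O, SSD or not.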

I have a pair of what I was told are mediocre NVMe drives (MyDigitalSSD 240GB (256GB) BPX 80mm (2280) M.2 PCI Express 3.0 x4 (PCIe Gen3 x4) NVMe MLC SSD), one of which I was originally going to use as a SLOG device. I was kind of talked out of it in the previous thread, as the gains were questionable since the pool was already all SSD. The only theoretical gain of using that NVMe as a SLOG in front of an SSD pool was to move the sync-log writes off the SSD pool, extending the pool's write lifespan at the expense of the SLOG NVMe itself. That all changes if this server is to have a mechanical pool again, so perhaps I should repurpose that NVMe as a SLOG for the mechanical pool? Yes? No?

If it doesn't have power loss protection, you kinda have to think about whether this is worth doing.
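The reason being: the whole point of a SLOG is that a sync write acknowledged to the client survives a power cut, so a log device that can lose buffered data in an outage defeats the purpose. If you do try it on the mechanical pool, the good news is that a log vdev is cheap to experiment with, since it can be added and removed at any time. A sketch, assuming the FreeBSD device name "nvd0" and a pool called "tank" (both examples, substitute your own):

```python
# Attach a log vdev, watch whether it sees traffic, and detach it if
# it doesn't help. "tank" and "nvd0" are example names.
import subprocess

def zpool(*args):
    subprocess.run(["zpool", *args], check=True)

zpool("add", "tank", "log", "nvd0")    # attach the NVMe as a SLOG
zpool("iostat", "-v", "tank")          # only sync writes will hit the log
# zpool("remove", "tank", "nvd0")      # uncomment to take it back out
```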
 

SamM

Dabbler
Joined
May 29, 2017
Messages
39
If it doesn't have power loss protection, you kinda have to think about whether this is worth doing.

One of the things I like about these Crucial MX500 SSDs I'm using is that they're supposed to have built-in power loss protection. There is a way to add two more SFF bays in the back of the HP DL380e G8 chassis, along with 2 more PCIe slots. That said, I can add a 250 GB 2.5" SSD for roughly $50, or a 500 GB for $70, but unfortunately the chassis parts are roughly $200 (just for the SFF cage; the mating PCIe cage is another $100). So at least that's an option...
 