DemohFoxfire · Dabbler · Joined May 2, 2023 · Messages: 11
Wouldn't using the Micron tools to "reprovision" the drive in a different system, then plopping it into the TrueNAS server, theoretically be "supported"? I read somewhere (I believe in Micron's own publication, but don't quote me) that the Pro and Max drives are hardware-identical, and if you wanted the higher endurance but only purchased a Pro, you could reduce the provisioned size from the Pro capacity down to the Max capacity, and vice versa.

@NickF hit some odd snags with the visibility of devices through the UI, likely because the identifier fields reported by the device are the same, so it would be a CLI-configured option (and thus technically "unsupported"). You'd also have to be careful that your mirrors are set up to go across devices, e.g. (nvme1ns1 + nvme2ns1) and not (nvme1ns1 + nvme1ns2); otherwise a single device failure still takes you out.
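For reference, a rough sketch of what that CLI-only setup might look like with nvme-cli and zpool. All sizes, IDs, device names, and the pool name below are illustrative placeholders (not real Pro/Max figures), and none of this is a TrueNAS-supported workflow:

```shell
# WARNING: deleting/recreating a namespace destroys its data.
# Recreate the namespace smaller than the factory size so the drive's
# spare area grows (the "provision a Pro down to Max size" idea above).
nvme delete-ns /dev/nvme0 --namespace-id=1
nvme create-ns /dev/nvme0 --nsze=<blocks> --ncap=<blocks> --flbas=0
nvme attach-ns /dev/nvme0 --namespace-id=1 --controllers=<ctrl-id>

# When striping mirrors over namespaces, pair namespaces ACROSS drives,
# never two namespaces on the same drive, so one failed drive cannot
# take out both sides of a mirror:
zpool add tank log mirror nvme1n1 nvme2n1 mirror nvme1n2 nvme2n2
```

The `<blocks>` and `<ctrl-id>` values depend on the specific drive; `nvme id-ctrl` and `nvme id-ns` report the supported geometry.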
Those look like some powerful devices; I wonder how the performance scales down to the smaller sizes?
On paper the drives perform very similarly across the product line.
So you are saying you could mirror + stripe using namespaces for the slog?
Nope, I haven't gotten that far into TrueNAS; I've been running a lot of different hardware/configs through their paces while I figure it all out, hence most of my idiotic questions. So is this a cap of two transaction groups, or is this just 'normal'? Thanks for the numbers.
You realise that a SLOG only needs to be large enough to hold about 10 seconds' worth of transactions (two transaction groups), right?
And that a SLOG need not be mirrored, except for the most paranoid users or if servicing a failed SLOG device would be problematic.
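The "two transaction groups" rule of thumb above can be put into numbers. A minimal sketch, assuming the default 5-second transaction group interval and a hypothetical ~3000 MB/s of usable 25 GbE ingest (both figures are assumptions, not from the thread):

```shell
# Rough SLOG sizing: enough to hold two in-flight transaction groups
# at the fastest rate data can arrive.
ingest_mbps=3000       # assumed usable 25 GbE throughput, MB/s
txg_seconds=5          # default OpenZFS txg interval
txgs_in_flight=2       # open group + syncing group
slog_mb=$((ingest_mbps * txg_seconds * txgs_in_flight))
echo "${slog_mb} MB"   # prints "30000 MB", i.e. ~30 GB is already generous
```

So even at line rate on 25 GbE, a few tens of GB of SLOG is all that ever gets used, which is why huge SLOG devices buy nothing.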
What I mean is (assuming round numbers for easy math): say I have a massive potato pool, 100 TB, that writes at 100 MB/s (just under gigabit line speed), and we have 25 GbE available, which the SLOG would have no problem keeping up with. With sync=always, would the ZIL essentially cache a large transfer while the potato catches up, even past two transaction groups? A 100 GB transfer at potato speed without a SLOG would take 16m40s at 100 MB/s. Say we get 2500 MB/s with the SLOG: would the network portion of the transfer finish at 40s, with the potato spending the remaining 16 minutes just writing the data from memory (RAM) while what's in the SLOG gets abandoned as the writes complete?
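Spelling out the arithmetic in that question (same round numbers as above; whether ZFS actually behaves this way is exactly what's being asked, this only checks the figures):

```shell
transfer_mb=100000   # 100 GB transfer
potato_mbps=100      # pool vdev write speed, MB/s
slog_mbps=2500       # assumed SLOG write speed, MB/s

echo "no SLOG:   $((transfer_mb / potato_mbps)) s"   # 1000 s = 16m40s
echo "with SLOG: $((transfer_mb / slog_mbps)) s"     # 40 s on the wire
```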
I ask because right now (or really next week) I'll finally have 36 potato drives in my lab, and I'm curious how this reacts to the different write loads I'll be throwing at it.