Advice on Disk Layout - Virtualisation

Evening all

I am in the process of building out a new homelab, and to get myself out of my comfort zone I've decided it's going to be all open source. I've taken the plunge with TrueNAS 12 BETA; I'm happy to play about, as this isn't a system-critical build. I'll have four hypervisors connecting in; I'm thinking KVM at the moment (comfort zone again). My hardware is as follows:

Motherboard: Supermicro X11SCL-IF
CPU: Intel Core i3 9100 3.6GHz (Coffee Lake)
RAM: Kingston Server Premier (KSM24ED8/16ME) 16GB 2400MHz DDR4 ECC CL17 DIMM 2Rx8 Micron E (x2, 32GB in total)
M.2 NVMe: Sabrent 512GB Rocket NVMe PCIe M.2 2242 (for boot)
HBA: LSI MegaRAID 9240-8i 8-port SAS/SATA LSI00200 (flashed to LSI 9211 IT mode, firmware 20.00.07.00)

Question 1: my TrueNAS reports Firmware: 20.00.07.00, Driver: 21.02.00.00-fbsd. Does TrueNAS 12 use a newer driver, and am I still okay running firmware 20.00.07.00 on the HBA?

I have the following disks available:
6 x Seagate IronWolf 6TB NAS hard drives, 3.5" SATA III 6Gb/s, 7200RPM, 256MB cache (attached to the HBA)
2 x Integral 240GB P Series 5 SATA III SSDs (they were cheap :))

Now, I've been doing a bit of reading on vdev setup, and I understand the arguments between mirroring and RAIDZ. I have found this article, and if that reflects current thinking, then my best option would be to set up the six drives as a pool of three mirrored vdevs. I'm not bothered so much about maximising storage; I'm concerned about VM IOPS. I'm not going to lose sleep if I lose data (this is a home lab), so mirrors give me enough resilience. I'm thinking of presenting the storage to the KVM hosts as either NFS or iSCSI (my NICs are lagged using LACP, both GbE, with a Cisco 3750 between the NAS and the hosts).
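(For concreteness, here's a sketch of the layout I mean, as if it were done from the CLI rather than the GUI; the pool name "tank" and the da0-da5 device names are placeholders, not my actual setup:)

# three mirrored vdevs from the six IronWolfs (device names assumed)
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5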

Question 2: is the article I linked to considered sound advice, and is my reasoning sufficient for a virtualised environment?

Now, the question that I'm sure has been done to death a thousand and one times: L2ARC and SLOG. I understand the concepts, I have read the article, and I have two cheap (new) SSDs that I'm happy to use for this purpose if it will truly give me any benefit. I've read posts from people who say a misused L2ARC can hurt your ARC; I've read articles that say you only need a GB of SLOG; I've read articles that espouse using it, and others saying it's of no benefit (those seem to be talking about Plex-type setups, though). I've seen articles that say to mirror the SLOG, but not the L2ARC. There's a lot of detail out there, and I'm wondering what the best setup is for me.

Question 3: my environment will be virtualised, with quite a lot of serverless functions and some DB activity as well. Should I use the two SSDs, and if so, how should I deploy them (mirrored or not)? Should I use both L2ARC and SLOG, and if so, how much space should I allocate to each?
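(To make the question concrete, here's one layout I could imagine for the two SSDs: a small mirrored SLOG slice on each, with the remainder as striped L2ARC. Everything here is a placeholder: the ada1/ada2 device names, the 16G slice size, the gpt labels, and the pool name "tank".)

# one small SLOG slice and one L2ARC slice per SSD (names and sizes assumed)
gpart create -s gpt ada1
gpart add -t freebsd-zfs -s 16G -l slog0 ada1
gpart add -t freebsd-zfs -l l2arc0 ada1
gpart create -s gpt ada2
gpart add -t freebsd-zfs -s 16G -l slog1 ada2
gpart add -t freebsd-zfs -l l2arc1 ada2

# mirrored SLOG, striped L2ARC (cache devices can't be mirrored)
zpool add tank log mirror gpt/slog0 gpt/slog1
zpool add tank cache gpt/l2arc0 gpt/l2arc1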


Anyway, hello all, and here's hoping for a happy TrueNAS experience.
 

sretalla (Moderator) replied:

Not for SLOG

You're also light on RAM to be doing L2ARC... what's your hit ratio like for ARC?

Thanks for taking the time to reply. With regard to the SLOG, do you mind explaining why? I've had a read of this article, and I completely get that in a scenario where I'm concerned about my data, I'd want something battery-backed to ensure data consistency in the event of an error. But I don't understand why a dedicated, faster disk for heavy sync writes wouldn't give me any benefit. In my head I see my pool being contested for reads and writes if I begin to over-commit. I appreciate these are crappy consumer disks, but from what I've read you'd only partition up a fraction of the drive anyway, to allow for natural wear levelling.

The numbers I get back for disk latency are superior to the Intel S3700, which seems to be the recommended drive. Am I missing something? I forgot to mention: the NAS and KVM hosts will sit on a UPS, and I have it scripted to shut these devices down in the event of a power failure. I was assuming this would flush out the SLOG?
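(The figures below are in the format of FreeBSD's built-in synchronous-write latency test; assuming the SSD shows up as ada1, the equivalent command would be:)

diskinfo -wS /dev/ada1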

Synchronous random writes:
  0.5 kbytes: 111.2 usec/IO = 4.4 Mbytes/s
    1 kbytes: 114.4 usec/IO = 8.5 Mbytes/s
    2 kbytes: 114.5 usec/IO = 17.1 Mbytes/s
    4 kbytes: 119.4 usec/IO = 32.7 Mbytes/s
    8 kbytes: 126.0 usec/IO = 62.0 Mbytes/s
   16 kbytes: 144.8 usec/IO = 107.9 Mbytes/s
   32 kbytes: 148.1 usec/IO = 211.0 Mbytes/s
   64 kbytes: 173.6 usec/IO = 360.0 Mbytes/s
  128 kbytes: 298.1 usec/IO = 419.4 Mbytes/s


With regard to the L2ARC: I haven't set up the pool yet or loaded the NAS; I thought I'd ask for advice first :).
 

sretalla (Moderator) replied:
With regard to the L2ARC: I haven't set up the pool yet or loaded the NAS; I thought I'd ask for advice first
More RAM first, but you can always add L2ARC and/or remove it based on how things go.
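If you want to watch that hit ratio once you're up and running, the raw counters are exposed via sysctl on TrueNAS CORE / FreeBSD; a quick sketch (the "tank" pool name and the gpt labels below are assumptions):

# ARC hit ratio from the kernel counters
hits=$(sysctl -n kstat.zfs.misc.arcstats.hits)
misses=$(sysctl -n kstat.zfs.misc.arcstats.misses)
echo "scale=2; 100 * $hits / ($hits + $misses)" | bc

# and L2ARC really is add/remove at will:
zpool add tank cache gpt/l2arc0 gpt/l2arc1
zpool remove tank gpt/l2arc0 gpt/l2arc1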

With regard to the SLOG, do you mind explaining why?
Cheap SSDs and SLOG don't usually go together: consumer drives typically lack power-loss protection and have comparatively high sync-write latency, and realistically the only way to get a big performance boost is something like an Optane PCIe disk.

The amount of SLOG that can be useful is really quite small (30GB is already generous) due to the timing aspect: a SLOG stops providing a benefit once the oldest data sitting in it that hasn't been written out to the pool is 10 (or maybe 5) seconds old. So if your disks can't cope with a decent number of IOPS, putting a relatively slow SSD in front of them won't give you a very big buffer (OK, it will be something, but not much).
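As a rough sanity check on sizing (assuming your two LACP'd GbE links are the ceiling for incoming writes, so ~235 MB/s at the very best):

235 MB/s x 10 s = ~2.35 GB

So only a couple of GB can ever usefully be sitting in the SLOG at once; a 16-30GB slice is already far more than enough.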

If your IO is lots of small writes with only a trickle of disk activity, maybe you will see a small benefit.

With 3 mirrors, your pool will have around 300 write IOPS; the SSD might manage something in the tens of thousands. So if you do something that causes a lot of IO, like thick-provisioning a VM disk, you might get 10 seconds of good IOPS at the SSD's speed. Say 50,000/sec: that's 500,000 IOs now sitting on the SSD, which your pool has to drain at 300 per second, i.e. roughly 1,666 seconds (about 27 minutes) until you're back to getting any benefit from the SLOG. OK, that's worse than a worst-case scenario (surely you'd never thick-provision a VM disk...), but you get my point. It's really important to have a super-high-performing SSD for your SLOG, to maximize what you can swallow at SSD speed, hopefully absorbing the peaks of your IOPS without going beyond 10 seconds of your pool's ability to drain it all right after.

You seem to have understood it all pretty well, so I'll stop there. Go ahead and test it out (assuming you're really OK with the risks of no power-loss protection on the SLOG). You can try it with and without the SLOG (you can add it to and remove it from the pool at any time) and see what suits you best.
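(From the CLI that's one line each way; the pool and partition names are assumed:)

zpool add tank log gpt/slog0       # attach the SSD partition as a log device
zpool remove tank gpt/slog0        # detach it again, no downtime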
 