Is this system powerful enough?

thomasjcf21

Dabbler
Joined
Jun 19, 2019
Messages
15
Hi Everyone,

I'm new to the forums, but before posting this thread I did a lot of research and gathered lots of information, so I just wanted to make sure my understanding of it is correct, and if not, to ask you to kindly point me in the right direction.

So firstly, I will be using FreeNAS to provide iSCSI-backed datastores to 4 ESXi hosts running around 30 VMs. There'll be more in the future, but probably fewer than 100 for a considerable amount of time. I'm using this for a home environment, mainly for a bit of fun, but it also provides services for my family, like their offsite backup server. So although there is a small amount of important(ish) data, there isn't much.

The hardware of the box I wanted to run FreeNAS on is:
  • Dell PowerEdge R510 (12x LFF Chassis)
  • 128GB ECC DDR3 Memory
  • 2x Intel Xeon X5650 @ 2.66 GHz
  • Dell H310 (Configured in IT Mode)
  • L2ARC - PNY CS900 240GB
  • 12 x 2TB 7200 RPM SAS Drives (HP Certified) (soon to change to 12 x 3TB 7200 RPM SATA Drives)
  • 2 x 1Gb Broadcom uplinks (soon to change to 10Gb SFP+ uplinks)
This device is also UPS-backed, with an estimated 17 minutes of runtime (running 6 servers plus networking equipment). I'm in the process right now of setting up monitoring to power everything down safely automatically.

Currently there's around a 2:1 L2ARC:ARC ratio, which is below the recommended maximum of 4-5:1. Having said that, this system has a lot of RAM for ARC, so I was hoping I wouldn't need too much L2ARC straight away, because in theory it should take a while for it all to fill up.
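For reference, my back-of-envelope check (assuming the whole 240GB CS900 is given to L2ARC):

    240 GB L2ARC / 128 GB RAM ≈ 1.9 : 1

I realise the L2ARC headers themselves consume some ARC, so the effective ratio will be slightly worse than that.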

I understand that when creating an iSCSI target, FreeNAS by default effectively behaves as if sync=disabled, which is bad, because when ESXi requests that data be written to disk (which it does for all requests, as it cannot determine the importance of the information), the write is instead acknowledged from RAM. Although this is faster, it means that if the system were to crash or lock up, all pending writes which hadn't been flushed would be lost. This would inevitably result in VM corruption, which may not be noticed straight away.
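For what it's worth, I'm assuming I can check and override this per zvol from the shell with something like the below (the pool/zvol name is made up):

    zfs get sync tank/iscsi/esxi-ds
    zfs set sync=always tank/iscsi/esxi-ds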

However, the way to mitigate this is to set sync=always, which means you MUST have a decent SLOG, as without one performance will tank. Having looked online, there are a lot of recommendations to get NVMe-based devices, as they have much lower latency thanks to not having to serialise and deserialise when communicating with the physical device. The most recommended devices are the Intel Optane 900P 280GB (£300) or the ZeusRAM (~£300). Obviously the size doesn't matter as much, as you'll only really assign 8-10GB of the drive as usable, leaving the rest over-provisioned so the controller can handle wear levelling.
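My understanding of why such a small slice is enough, assuming the default ~5 second transaction group flush (so the SLOG only ever holds a couple of txgs of in-flight writes):

    10 Gb/s ÷ 8 ≈ 1.25 GB/s
    1.25 GB/s × 5 s × 2 txgs ≈ 12.5 GB

So even at 10Gb line rate, only around 8-16GB of the device is ever actually live.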

My question is: are there any cheaper SLOG devices which we know can keep up with 10Gb network speeds and don't have much of an impact under sync=always? Or, as this device is UPS-backed, is it reasonable to accept the risk that the OS may one day crash (though this isn't likely), leave sync=disabled, and not invest? After all, it's a home system and isn't mission-critical data.

Finally, with L2ARC the recommendation (as far as I'm aware) is to use NVMe drives when using 10Gb networking, otherwise it'll bottleneck your system. Is anyone able to confirm whether this is true, or whether standard SATA SSDs should be OK? If not, are you able to advise any decent (relatively cheap) NVMe drives? I assume an M.2 NVMe drive on a PCIe adapter (4 lanes, i.e. x4) might be OK?
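For reference, I'm assuming L2ARC can also be bolted on later with a one-liner (device name made up):

    zpool add tank cache nvd1

so I could defer that purchase until I see how the ARC hit rate behaves.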

Overall I'm new to FreeNAS, and after reading Cyberjock's PowerPoint presentation I don't want to become another statistic; I want to make sure that my system can comfortably run FreeNAS without there being lag in my VMs.

So yeah, TL;DR: can anyone advise of any problems I might have with the above system, and whether (and which) SLOG & L2ARC devices they would recommend?

Many Thanks,

Tom.
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
My question is: are there any cheaper SLOG devices which we know can keep up with 10Gb network speeds and don't have much of an impact under sync=always?
NVMe drives are recommended, but you can use other drives too. You can choose SATA-based M.2 drives or even just a quality SATA SSD.
Or, as this device is UPS-backed, is it reasonable to accept the risk that the OS may one day crash (though this isn't likely), leave sync=disabled, and not invest? After all, it's a home system and isn't mission-critical data.
This is a question only you can answer. Only you know how much risk you are willing to take with your data.
 

thomasjcf21

Dabbler
Joined
Jun 19, 2019
Messages
15
NVMe drives are recommended, but you can use other drives too. You can choose SATA-based M.2 drives or even just a quality SATA SSD.

Ok, thank you, I will take a look around and see what I can do. I guess the best thing is just to try it, and if it's obvious that writes are slow, look at upgrading.

This is a question only you can answer. Only you know how much risk you are willing to take with your data.

Yeah, this is very true; it's probably best not to become another statistic and actually do this properly. If I cheap out now, I'll probably end up having to buy it in the future anyway, and by then I'll have lost time, money & data. I guess there's no real reason not to do it, tbh.

But overall do you think this system is supported and would work?
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
But overall do you think this system is supported and would work?
Yes, I do. Make sure you get a supported SFP+ card. Other than that, make sure you set up the UPS monitoring and switch off the servers 2-3 mins into a power failure, if you haven't already. Since you have multiple servers on the same UPS, you need to make sure everything shuts down gracefully before the battery drains completely.
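FreeNAS drives its UPS service with NUT under the hood, so on the box that owns the UPS it boils down to a couple of upsmon.conf-style settings; a minimal sketch, with made-up names and credentials:

    MONITOR ups@localhost 1 upsmon mypass master
    SHUTDOWNCMD "/sbin/shutdown -p now"

The other servers then monitor the same UPS as slaves and shut themselves down first.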
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
The most recommended devices are the Intel Optane 900P 280GB (£300) or the ZeusRAM (~£300).
I am not sure about the UK, but the prices here on the other side of the pond are not too bad.

$64 for WD 250 NVMe -- https://www.amazon.com/Black-500GB-High-Performance-NVMe-PCIe/dp/B07C9H8MHD/ref=sr_1_6?keywords=nvme+drive&qid=1560960003&s=gateway&sr=8-6&th=1

$105 for a 500GB NVMe from WD -- https://www.amazon.com/Black-500GB-High-Performance-NVMe-PCIe/dp/B07MH2P5ZD/ref=sr_1_6?keywords=nvme+drive&qid=1560960003&s=gateway&sr=8-6&th=1

$148 for a Samsung 970 Pro NVMe which is also a great drive -- https://www.amazon.com/Samsung-970-...ds=nvme+drive&qid=1560960003&s=gateway&sr=8-3

If you look around hard, you should find a smaller drive for cheaper.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
So firstly, I will be using FreeNAS to provide iSCSI-backed datastores to 4 ESXi hosts running around 30 VMs.
I would suggest using NFS instead of iSCSI if you can.
 

thomasjcf21

Dabbler
Joined
Jun 19, 2019
Messages
15
Yes, I do. Make sure you get a supported SFP+ card. Other than that, make sure you set up the UPS monitoring and switch off the servers 2-3 mins into a power failure, if you haven't already. Since you have multiple servers on the same UPS, you need to make sure everything shuts down gracefully before the battery drains completely.

Brilliant, thank you, that makes me feel a little relieved now, as my previous storage server sucked and was causing VMs to literally take 20 minutes to turn on. Not only that, but the latency on the VM HDD reads/writes was averaging 500ms :/

I am not sure about the UK, but the prices here on the other side of the pond are not too bad.

$64 for WD 250 NVMe -- https://www.amazon.com/Black-500GB-High-Performance-NVMe-PCIe/dp/B07C9H8MHD/ref=sr_1_6?keywords=nvme+drive&qid=1560960003&s=gateway&sr=8-6&th=1

$105 for a 500GB NVMe from WD -- https://www.amazon.com/Black-500GB-High-Performance-NVMe-PCIe/dp/B07MH2P5ZD/ref=sr_1_6?keywords=nvme+drive&qid=1560960003&s=gateway&sr=8-6&th=1

$148 for a Samsung 970 Pro NVMe which is also a great drive -- https://www.amazon.com/Samsung-970-...ds=nvme+drive&qid=1560960003&s=gateway&sr=8-3

If you look around hard, you should find a smaller drive for cheaper.

Brilliant, I'll take a look, thank you. Hopefully I'll be able to find a decent SLOG device :D

I would suggest using NFS instead of iSCSI if you can.

Interesting. When I was originally looking around at the NFS vs iSCSI threads, the majority were saying that iSCSI is a lot faster. Now, I have to admit that on most of those threads it was because they didn't have a SLOG device, which you *need* for NFS as it's synchronous. However, on the thread linked below (post #2), cyberjock says (and I quote):

Thread Link

3. iSCSI is 'where its at' with regards to getting great performance with FreeNAS/TrueNAS when using ESXi hosts. NFS is a second-rate citizen on the ESXi side of the house. With Xen though, the opposite is somewhat true. If you are using Xen, I'd recommend you consider NFS first and foremost. Additionally, iSCSI has some major performance opportunities because it is kernel-based while NFS is not. So, all things being equal on the VM hosts side, iSCSI should outperform NFS virtually every time. For many, using NFS because it uses files is far more important than performance, so you can still justify using NFS on ESXi if that is what you want/need.

May I ask what your reasoning is for recommending NFS?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
May I ask what your reasoning is for recommending NFS?
The resources needed to get great speed from iSCSI are considerable, but I suppose it depends on what you consider great speed. Here are some threads about iSCSI performance on ZFS that might help:

Why iSCSI often requires more resources for the same result (block storage)
https://www.ixsystems.com/community...res-more-resources-for-the-same-result.28178/

Some differences between RAIDZ and mirrors, and why we use mirrors for block storage (iSCSI)
https://www.ixsystems.com/community...and-why-we-use-mirrors-for-block-storage.112/

Then, someone may have pointed these out, or you may have seen them:

Some insights into SLOG/ZIL with ZFS on FreeNAS
https://www.ixsystems.com/community/threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/

Testing the benefits of SLOG using a RAM disk!
https://www.ixsystems.com/community/threads/testing-the-benefits-of-slog-using-a-ram-disk.56561/

Then I posted some speed tests I did on iSCSI here which might be informative:
https://www.ixsystems.com/community/threads/iscsi-performance-par-for-the-course.77108/post-536385
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Additionally, iSCSI has some major performance opportunities because it is kernel-based while NFS is not.
This is factually incorrect, which would give me pause for anything else contained in that statement.
 

thomasjcf21

Dabbler
Joined
Jun 19, 2019
Messages
15
The resources needed to get great speed from iSCSI are considerable, but I suppose it depends on what you consider great speed. Here are some threads about iSCSI performance on ZFS that might help:

Why iSCSI often requires more resources for the same result (block storage)
https://www.ixsystems.com/community...res-more-resources-for-the-same-result.28178/

Some differences between RAIDZ and mirrors, and why we use mirrors for block storage (iSCSI)
https://www.ixsystems.com/community...and-why-we-use-mirrors-for-block-storage.112/

Then, someone may have pointed these out, or you may have seen them:

Some insights into SLOG/ZIL with ZFS on FreeNAS
https://www.ixsystems.com/community/threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/

Testing the benefits of SLOG using a RAM disk!
https://www.ixsystems.com/community/threads/testing-the-benefits-of-slog-using-a-ram-disk.56561/

Then I posted some speed tests I did on iSCSI here which might be informative:
https://www.ixsystems.com/community/threads/iscsi-performance-par-for-the-course.77108/post-536385

Thank you for taking the time to gather these links for me. From what I'm reading, essentially, because the file system is managed by the client with iSCSI, FreeNAS is unable to prioritise what is most important: it must cache everything and can only evict blocks which are no longer referenced. Because of this, you need loads of RAM so that there is enough room for everything.

However, with NFS, because FreeNAS takes care of the file system, the ARC/L2ARC is used more effectively: it understands which files are being requested and as a result can prioritise correctly.

This allows for faster reads, because the files which are read most often have a higher chance of being kept in memory, and the memory itself is full of less junk (especially data whose referent no longer exists).
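I guess I can sanity-check this once it's running by watching the ARC counters, e.g.:

    sysctl kstat.zfs.misc.arcstats | grep -E 'hits|misses'

and seeing how the hit rate holds up under the VM workload.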

Correct me if I'm wrong, but write speeds also seem to be faster because there's far less fragmentation when FreeNAS is taking care of the file system rather than ESXi. This means there is one source of authority over the file system (the NAS) rather than the 4 ESXi hosts. Not only that, but it can result in drastically fewer write operations, because the communication is only about the file to be stored, not the file to be stored plus how to store it.

I believe it also means the 50% rule doesn't apply to NFS, as it doesn't fragment as much?
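If I've understood the 50% rule correctly, with the planned 12 x 3TB drives in two-way mirrors that works out roughly as:

    6 mirror vdevs × 3 TB ≈ 18 TB pool capacity
    stay under 50% occupancy  →  ~9 TB usable for zvols

so escaping that rule with NFS would be a meaningful capacity win.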

This is factually incorrect, which would give me pause for anything else contained in that statement.

Ahh, ok, thanks for letting me know. Storage is not my strongest point, so I'm glad these forums exist!
 

thomasjcf21

Dabbler
Joined
Jun 19, 2019
Messages
15
With regards to how I was going to configure the zpool: I was planning on striped mirrors for maximum read/write gain, with half of the storage lost. Does this seem reasonable, or are there better approaches?
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
With regards to how I was going to configure the zpool: I was planning on striped mirrors for maximum read/write gain, with half of the storage lost. Does this seem reasonable, or are there better approaches?
Mirrors will give you max IOPS, since there will be relatively more vdevs (given a fixed number of drives), so yes, that would be best for your use case.

If maximizing storage space is what you are after, then RAIDZ2 would be a better bet.
 

thomasjcf21

Dabbler
Joined
Jun 19, 2019
Messages
15
Mirrors will give you max IOPS, since there will be relatively more vdevs (given a fixed number of drives), so yes, that would be best for your use case.

If maximizing storage space is what you are after, then RAIDZ2 would be a better bet.


Brilliant, thanks for confirming. I was thinking something like 6 x two-way mirrors, striped across the lot :)
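Something like this is what I have in mind; the da* device names are just placeholders:

    zpool create tank \
      mirror da0 da1  mirror da2 da3  mirror da4 da5 \
      mirror da6 da7  mirror da8 da9  mirror da10 da11

As I understand it, ZFS stripes across the six mirror vdevs automatically, so there's no separate stripe step.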
 

thomasjcf21

Dabbler
Joined
Jun 19, 2019
Messages
15
@Inxsible, this may be a stupid question: I was looking at the technical specs of the Samsung 970 Pro NVMe, which are found here.

It doesn't mention Power Loss Protection anywhere. Is that because it doesn't have it, or because it doesn't need it?
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
@Inxsible, this may be a stupid question: I was looking at the technical specs of the Samsung 970 Pro NVMe, which are found here.

It doesn't mention Power Loss Protection anywhere. Is that because it doesn't have it, or because it doesn't need it?
I am not sure whether the Samsung 970 Pro NVMe has Power Loss Protection or not, but it is my opinion that you do not need it, for the following reasons, and I quote:
I'm using this for a home environment, mainly for a bit of fun, but it also provides services for my family, like their offsite backup server. So although there is a small amount of important(ish) data, there isn't much.
My question is: are there any cheaper SLOG devices

You are asking for advanced SSD features, but you are also looking for cheaper drives. Those two usually don't go together.

However, here's a thread that lists SSDs with PLP -- you might have to scan through it to find an NVMe drive with PLP:
https://www.ixsystems.com/community/threads/list-of-ssds-with-power-loss-protection.63998/

Note also that, as @HoneyBadger points out in that thread (post #3), WD Green claims to have PLP, but it's the implementation details that matter. Hope that helps.

I think you are overthinking this, and you just need to buy the cheapest SSD to use as a SLOG. That could even be an mSATA, SATA SSD, or NVMe drive. Performance does vary between these, but will you be able to notice the difference?

If you still want an NVMe drive -- here's another, cheaper one I found: $40 for 240GB and $140 for 960GB --- https://www.amazon.com/Kingston-SA1...ith+plp&qid=1561045152&s=gateway&sr=8-1-spell
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
My question is: are there any cheaper SLOG devices which we know can keep up with 10Gb network speeds and don't have much of an impact under sync=always?

Not really. Speed costs; how fast do you want to go? Given the scope of what you're building, I would think an Optane P4801X drive is well within a reasonable cost and will give you decent speeds. For something that "supports network speeds of 10Gbps" at the smallest block sizes, I don't believe there is any single, easily-consumer-available device that can handle it. Systems with NVDIMM support would likely suffice, but those are usually bespoke creations and not anything remotely resembling "cheap." The best option would be an Optane card, or potentially a more exotic NVRAM card like the Radian Memory RMS-200 that was periodically available secondhand.

I think you are overthinking this, and you just need to buy the cheapest SSD to use as a SLOG. That could even be an mSATA, SATA SSD, or NVMe drive. Performance does vary between these, but will you be able to notice the difference?

Oh, you'll absolutely notice the difference between a good SSD and a bad one. Take a look at the benchmarks in the SLOG thread in my signature and you'll see a few examples of SSDs that are great for consumer use and non-sync writes, but the lack of PLP (or the way they implement it) means they fall flat on their face when presented with the sustained, low-queue-depth write workload that SLOG generates.
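If you want to test a candidate drive yourself, the quick sync-write check used in that thread is FreeBSD's diskinfo (device name assumed):

    diskinfo -wS /dev/nvd0

That exercises exactly the small, flushed writes a SLOG sees.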

Short answer: Buy an Optane P4801X or a DC P3700.
 

thomasjcf21

Dabbler
Joined
Jun 19, 2019
Messages
15
Thank you @Inxsible, I will take a look around and see what I can find :)

Not really. Speed costs; how fast do you want to go? Given the scope of what you're building, I would think an Optane P4801X drive is well within a reasonable cost and will give you decent speeds. For something that "supports network speeds of 10Gbps" at the smallest block sizes, I don't believe there is any single, easily-consumer-available device that can handle it. Systems with NVDIMM support would likely suffice, but those are usually bespoke creations and not anything remotely resembling "cheap." The best option would be an Optane card, or potentially a more exotic NVRAM card like the Radian Memory RMS-200 that was periodically available secondhand.

Yeah, I guess the card itself doesn't need to support 10Gb/s due to the nature of what it's writing. I just don't want the system to be bottlenecked by the SLOG or L2ARC when I put 10Gb networking interfaces in; otherwise, I'm going to have to shell out again, which I don't really want to do.

Oh, you'll absolutely notice the difference between a good SSD and a bad one. Take a look at the benchmarks in the SLOG thread in my signature and you'll see a few examples of SSDs that are great for consumer use and non-sync writes, but the lack of PLP (or the way they implement it) means they fall flat on their face when presented with the sustained, low-queue-depth write workload that SLOG generates.

Short answer: Buy an Optane P4801X or a DC P3700.

Ok, thank you, that makes a lot of sense. I did suspect the answer was going to be "no, you need to buy an Optane", but I wanted to confirm it before I bought one :(

Thank you guys, hopefully I'll be able to post a success story on Reddit soon :D Probably to somewhere like r/homelab, to detail the entire rack :D
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Yeah, I guess the card itself doesn't need to support 10Gb/s due to the nature of what it's writing. I just don't want the system to be bottlenecked by the SLOG or L2ARC when I put 10Gb networking interfaces in; otherwise, I'm going to have to shell out again, which I don't really want to do.

Networks aren't usually the bottleneck for writes at smaller recordsize (like you get when doing random benchmarks) - that's almost always the SLOG device. Optane is about the best you can get, but the field has gotten significantly better since the introduction of NVMe.

Ok, thank you, that makes a lot of sense. I did suspect the answer was going to be "no, you need to buy an Optane", but I wanted to confirm it before I bought one :(

Thank you guys, hopefully I'll be able to post a success story on Reddit soon :D Probably to somewhere like r/homelab, to detail the entire rack :D

If this is just for a homelab scenario, you could probably get away with using a 32GB M.2 Optane card on an adapter. They're not rated for the same ridiculous level of write endurance as the P-series cards, but they can provide good results as shown here:

https://www.ixsystems.com/community...inding-the-best-slog.63521/page-4#post-484151

The newer Optane M15-branded ones should be even better, as they get a bump to 4 PCIe lanes and some controller improvements, but I don't think they're available yet.
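Whichever device you land on, attaching it as a log vdev is a one-liner (pool and device names assumed):

    zpool add tank log nvd0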
 

thomasjcf21

Dabbler
Joined
Jun 19, 2019
Messages
15
Networks aren't usually the bottleneck for writes at smaller recordsize (like you get when doing random benchmarks) - that's almost always the SLOG device. Optane is about the best you can get, but the field has gotten significantly better since the introduction of NVMe.



If this is just for a homelab scenario, you could probably get away with using a 32GB M.2 Optane card on an adapter. They're not rated for the same ridiculous level of write endurance as the P-series cards, but they can provide good results as shown here:

https://www.ixsystems.com/community...inding-the-best-slog.63521/page-4#post-484151

The newer Optane M15-branded ones should be even better, as they get a bump to 4 PCIe lanes and some controller improvements, but I don't think they're available yet.

Thank you, I'll take a look around and see what I can find!

Thank you everyone for all your input. I've got a very good idea of what I need now, and hopefully I'll end up with a decent system whilst also ensuring my data is safe.
 