Is this system powerful enough?

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You have the resources needed to get great speed from iSCSI, but I suppose it depends on what you consider great speed.

For the purposes of VM block storage, NFS and iSCSI have a large number of similarities. iSCSI, however, can more easily be made to have multiple paths, and the fancy features in newer versions of VMFS are only available over iSCSI.
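On the ESXi side, here is a minimal sketch of what "multiple paths" buys you, assuming the target is claimed by the default active-active SATP; the SATP name is an assumption about the setup, not something stated in this thread:

    # Hypothetical sketch: default new iSCSI devices claimed by the
    # active-active SATP to round-robin across all available paths.
    esxcli storage nmp satp set --satp=VMW_SATP_DEFAULT_AA --default-psp=VMW_PSP_RR

    # Inspect which path selection policy each device actually got.
    esxcli storage nmp device list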
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Finally, with L2ARC the recommendation (as far as I'm aware) is to use NVMe drives when running network speeds of 10Gb/s, otherwise it'll bottleneck your system. Is anyone able to confirm whether this finding is true, or whether standard SATA SSDs should be OK? If not, are you able to advise any decent (relatively cheap) NVMe drives? I assume an M.2 NVMe drive on an NVMe-to-PCIe adapter card (4 lanes, x4) might be OK?

So I'll say that wherever this came from, this is a stupid recommendation. The point isn't to be as fast as your network; it's to reduce seek times for data that would otherwise be retrieved from your HDD's. L2ARC doesn't need to be "fastest". I would rather have twice the L2ARC at a quarter of the speed.

However, since you have the pleasure of an R510, you don't have a whole lot of options for a ton of SATA SSD capacity. You can toss a pair of 256GB or probably even 500GB SSD's in the internal bays if you wish to retain free PCIe slots. A pair of SATA SSD's will definitely go as fast as you really need.
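As a minimal sketch of wiring those up, assuming a pool named "tank" and the SSDs appearing as ada1/ada2 (both names are placeholders, not from this thread):

    # Hypothetical: add two SATA SSDs as L2ARC (cache) devices to "tank".
    zpool add tank cache ada1 ada2

    # Confirm the cache vdevs appear under the pool.
    zpool status tank

Cache devices hold no unique pool data, so they can also be pulled back out later with zpool remove if the layout changes.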

However, you can use a card like this Addonics add-on card to add a PCIe NVMe SSD slot internally. We do this and also add an HBA with a pair of M.2 SATA SSD's so we have internal boot devices on R510's.
 

thomasjcf21

Dabbler
Joined
Jun 19, 2019
Messages
15
For the purposes of VM block storage, NFS and iSCSI have a large number of similarities. iSCSI, however, can more easily be made to have multiple paths, and the fancy features in newer versions of VMFS are only available over iSCSI.

Ok, thank you. So it makes more sense to go back to the original plan of using iSCSI, especially if I get an Optane (NVMe) SSD for the SLOG, as I will need to set sync=always.
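For reference, a minimal sketch of that setting, assuming the iSCSI extent is backed by a zvol at tank/iscsi-vm (a placeholder name):

    # Hypothetical: force synchronous semantics on the zvol backing the
    # iSCSI extent, so writes are committed via the SLOG rather than
    # acknowledged from RAM.
    zfs set sync=always tank/iscsi-vm

    # Verify the property took effect.
    zfs get sync tank/iscsi-vm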

So I'll say that wherever this came from, this is a stupid recommendation. The point isn't to be as fast as your network; it's to reduce seek times for data that would otherwise be retrieved from your HDD's. L2ARC doesn't need to be "fastest". I would rather have twice the L2ARC at a quarter of the speed.

Yeah, I did think it didn't make sense when I read it. For reference's sake, I read it here: https://www.servethehome.com/buyers...nas-nas-servers/top-picks-freenas-l2arc-ssds/

(attached screenshot: Capture.PNG)


I read that to suggest that if you are using more than 1GbE you would need NVMe, but I probably read it wrong.

However, since you have the pleasure of an R510, you don't have a whole lot of options for a ton of SATA SSD capacity. You can toss a pair of 256GB or probably even 500GB SSD's in the internal bays if you wish to retain free PCIe slots. A pair of SATA SSD's will definitely go as fast as you really need.

However, you can use a card like this Addonics add-on card to add a PCIe NVMe SSD slot internally. We do this and also add an HBA with a pair of M.2 SATA SSD's so we have internal boot devices on R510's.

Ok, so that makes sense. Fortunately, as I boot FreeNAS from USB, my two internal SSD slots are unused. Interestingly, though, you recommended using 500GB SSD's; if I were to do that it would give me 1TB of L2ARC (which is great). However, I thought this wasn't recommended, as it would exceed the 5:1 ratio of L2ARC:ARC?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Yeah, I did think it didn't make sense when I read it. For reference's sake, I read it here: https://www.servethehome.com/buyers...nas-nas-servers/top-picks-freenas-l2arc-ssds/

STH is still basically a home enthusiast's site, and has a lot of "bigger, badder = better" sort of feel to it at times. It's kinda like when you go off to the local hot rod show and they tell you all the cool things they did to a Mack truck to make it "better."

The flames on the side do NOT actually make it go faster.


I read that to suggest that if you are using more than 1GbE you would need NVMe, but I probably read it wrong.

It assumes that you have some magic workload where you will constantly be accessing content only present in L2ARC and feeding it out the network. This is ... unlikely. Hot content will be in the ARC. You will periodically have less-hot cached long runs of sequential data coming in from L2ARC -- yes -- ok -- occasionally. But it's not going to be particularly common.

The thing that L2ARC is necessary for when doing VM hosting is fragmentation. ZFS mitigates read fragmentation through use of the L2ARC. When highly fragmented, and especially when overfull, a ZFS pool will have poor read performance, because even a read of blocks that you might think are "sequential" can be coming in from different areas of the pool, incurring a seek penalty. That seek penalty can cripple an HDD pool down to the point where it is doing maybe just a thousand IOPS. So what you want is LOTS OF L2ARC. You want SO MUCH L2ARC that the system only rarely has to go fetch a block from the pool, because after the previous read, it sent that block out to L2ARC. The set of blocks you're actively using is called the "working set."
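A minimal way to watch for this condition, assuming a pool named "tank" (a placeholder): the FRAG column of zpool list reports free-space fragmentation, which is a reasonable proxy for the seek-penalty problem described above:

    # Hypothetical: check fragmentation (FRAG) and fill level (CAP) for
    # "tank"; high FRAG combined with high CAP is the danger zone.
    zpool list tank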

Now the other thing is that you have more than one VM running, and it is very likely that each of those will be running off of some other set of interesting disk blocks. If you were pulling ~30 running VMs' worth of disk blocks from the pool, each needing 50 IOPS on average, you now have 1,500 IOPS required on average. This may exceed what your pool can easily deliver, but it is well within the performance envelope of even a very slow SSD.

So what you want is for all the blocks VM's commonly access to be loaded into L2ARC. This might be a good fraction of the total size of your pool. Most of that is NOT going to need to be pulled in at 3GBytes/sec over NVMe. And if you get a 500GB NVMe drive (SN750, yo!) for $120, okay, that's great, but I can get two 860 EVO 500's for the same price, and in aggregate that still gets me about 1GByte/sec of L2ARC read capacity -- far more than I'm likely to need.

Ok, so that makes sense. Fortunately, as I boot FreeNAS from USB, my two internal SSD slots are unused. Interestingly, though, you recommended using 500GB SSD's; if I were to do that it would give me 1TB of L2ARC (which is great). However, I thought this wasn't recommended, as it would exceed the 5:1 ratio of L2ARC:ARC?

I've done it, and I did it before the L2ARC indirect pointer changes, which is the era that recommendation comes from. By the way, it's really freaky to run zpool iostat on your pool and see no pool reads, just writes...
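For anyone who wants to see the same effect, a minimal sketch (pool name is a placeholder):

    # Hypothetical: per-vdev I/O for "tank", refreshed every 5 seconds.
    # With a warm L2ARC the data vdevs' read columns can sit near zero
    # while the cache devices stay busy.
    zpool iostat -v tank 5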

The ratio is a curious thing because it is based on a number of assumptions that aren't guaranteed to be true. The better, more modern advice would be to keep an eye on the L2ARC statistics and the memory pressure it is causing on the system. If you need to, you can forcibly limit the amount of L2ARC used. So I would say it's fine to go big as long as "big" != "stupid big."
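A minimal sketch of keeping that eye on things under FreeBSD/FreeNAS, plus the back-of-envelope header math; the ~70-byte figure is an approximation for post-change ZFS, not an exact number:

    # Hypothetical: dump the L2ARC counters from the ARC kstats.
    # l2_size is data cached; l2_hdr_size is RAM spent indexing it.
    sysctl kstat.zfs.misc.arcstats | grep 'l2_'

    # Rough RAM cost of indexing 1TB of L2ARC at ~70 bytes per record:
    #   128K records:   1TB / 128KB * 70B =~ 0.5GB of headers in ARC
    #   8K zvol blocks: 1TB / 8KB   * 70B =~ 8.5GB of headers in ARC

The takeaway is that small-block zvol workloads make "stupid big" arrive much sooner than large-record file workloads do.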
 

thomasjcf21

Dabbler
Joined
Jun 19, 2019
Messages
15
Thanks for all that information. With the 1TB of L2ARC I think this system is definitely going to be capable. I've revised my build specs to the following:

  • Dell PowerEdge R510 (12x LFF Chassis)
  • 128GB ECC DDR3 Memory
  • 2x Intel Xeon X5650 @ 2.66 GHz
  • Dell H310 (Configured in IT Mode)
  • L2ARC - 2x 500GB Samsung 860 EVO's
  • SLOG - Intel Optane P4801X M.2
  • 12 x 2TB 7200 RPM SAS Drives (HP Certified) (soon to change to 12 x 3TB 7200 RPM SATA Drives)
  • 2 x 1GbE Broadcom uplinks (soon to change to 10GbE SFP+ uplinks)
Now just got to wait till Pay Day for the new stuff :D
 

ehsab

Dabbler
Joined
Aug 2, 2020
Messages
45
Thanks for all that information. With the 1TB of L2ARC I think this system is definitely going to be capable. I've revised my build specs to the following:

  • Dell PowerEdge R510 (12x LFF Chassis)
  • 128GB ECC DDR3 Memory
  • 2x Intel Xeon X5650 @ 2.66 GHz
  • Dell H310 (Configured in IT Mode)
  • L2ARC - 2x 500GB Samsung 860 EVO's
  • SLOG - Intel Optane P4801X M.2
  • 12 x 2TB 7200 RPM SAS Drives (HP Certified) (soon to change to 12 x 3TB 7200 RPM SATA Drives)
  • 2 x 1GbE Broadcom uplinks (soon to change to 10GbE SFP+ uplinks)
Now just got to wait till Pay Day for the new stuff :D
So are you satisfied with the build? Did it live up to your expectations?
 