FreeNAS 9.3 - How important is accurate LUN RPM setting in Extent

Joined
Aug 25, 2014
Messages
89
4U chassis with a nice motherboard: SYSSUP60474R Supermicro SSG-6047R-E1R24L 4U Dual LGA2011
Two nice processors: CPUINTX260VR Intel Xeon Quad-Core E5-2609V2 2.5GHz Ivy Bridge-E
128GB ECC RAM: MEMDDR316G3R D629R DDR3-1600 16GB ECC
I have twenty 4TB SAS drives and six 6Gb/s SSDs for cache and logs.
My application is ESXi 5.5, and I will be supporting 4 VM hosts that currently have 50 VMs running.

Currently I am testing configurations using ten 4TB SAS drives and a mirrored pair of 120GB 6Gb/s SSDs for logs; at this time I am not using any SSDs for cache. Yesterday I was testing whether the size of the SSD cache pool helped or hurt, and my testing kept saying less was better. This morning I started testing with no cache at all, since each hard drive has 128MB of onboard cache and my FreeNAS box has 128GB of ECC RAM.

The first couple of tests today had no SSD cache and were done with an Extent whose LUN RPM was set to SSD.

I am using 4TB Seagate Constellation ES.3 SAS 7,200RPM hard drives, and my average read/write speed with no SSD cache is 80.7MB/s.

I deleted my Extent, recreated it with the LUN RPM set to 7,200RPM, and my average read/write dropped to 80.2MB/s.

Next I deleted my Extent, recreated it with the LUN RPM set to 15,000RPM, and my average read/write went up to 94.5MB/s. For testing purposes I am copying 4.5GB files, controlled by an ESXi host server, using my FreeNAS storage as well as the server's local SSD LUN and HD LUN.

So, when using FreeNAS 9.3, how important is an accurate LUN RPM setting in the Extent? Is there much/any danger of corrupting data?

Thanks group for any thoughts or inputs.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
I don't think the extent type will have anything to do with performance. I believe all it is intended to do is report capabilities to the initiator. In the case of a Windows initiator, things like defrag get disabled if the LUN comes across as SSD (which is a good thing for ZFS). If you report SSD to ESX and you have specific storage policies in place, it could actually hamper your performance, hence non-SSD would be a better choice. For me, all of my ESX datastores are identified as non-SSD.
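
For what it's worth, you can check what the initiator actually detected. On an ESXi host it would be something like this (the naa identifier below is a placeholder for the FreeNAS-backed LUN):

```
# On the ESXi shell: show whether a given LUN was detected as SSD
esxcli storage core device list -d naa.6589cfc000000xxxxxxxxxxxxxxxxx | grep -i "Is SSD"
#   Is SSD: false
```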
 
Joined
Aug 25, 2014
Messages
89
zambanini, do you really think I will need more RAM, since I already have 128GB? Give me an answer with an explanation of why, and I will see what I can do. I used 16GB ECC DIMMs, so only half of my RAM slots are full with the 128GB I have now.

I do have an update on my quest for a fast and reliable FreeNAS box. Since I think I am approaching a suitable speed, I went ahead and turned on a pair of mirrored 120GB SSDs for cache, reran my series of 14 file copy, paste, and duplication tasks, and hit 98.94MB/s. File duplication is the only test below 60MB/s: I got 59.58MB/s duplicating a 4.48GB ISO file (with a readme and two folders) across two different shares.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
OP: Can you clarify the size of your SSDs for SLOG and L2ARC?
 
Joined
Aug 25, 2014
Messages
89
Late yesterday I turned the L2ARC cache back on, so I now have two 120GB SSDs mirrored for cache and two 120GB SSDs mirrored for logs.

I am currently using ten 4TB SAS hard drives for testing.

When I go into production, which I hope starts next week, I will have twenty 4TB SAS hard drives.

Do the mirrored 120GB SSDs seem like enough cache & log storage?
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
120GB mirrors are more than adequate for SLOG; you only need ~8-16GB. The cache should show up as a stripe, not a mirror, which means you have 240GB of L2ARC. Using the 5:1 ratio of RAM to SSD, you should be OK if you are just using CIFS and NFS. As mentioned, if you are going to use iSCSI, you will probably want to consider adding more RAM.

Before you test the speed from ESX, you can test the speed of the pool itself:
https://forums.freenas.org/index.php?threads/nfs-performance-numbers-i-need-more-speed.30499/
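
For reference, that thread boils down to dd runs against the raw pool. A minimal sketch, assuming a pool named Gold and a made-up scratch dataset; compression must be off so dd's zeros aren't compressed away, and the test file should be larger than RAM so reads can't be served from ARC:

```
# Scratch dataset with compression off (zeros compress to nothing otherwise)
zfs create -o compression=off Gold/speedtest

# Sequential write: ~200GB, comfortably past 128GB of RAM
dd if=/dev/zero of=/mnt/Gold/speedtest/ddfile bs=2048k count=100000

# Sequential read of the same file
dd if=/mnt/Gold/speedtest/ddfile of=/dev/null bs=2048k

# Clean up
zfs destroy Gold/speedtest
```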

I'm assuming you plan to use mirrors for the 20 4TB drives, right? Don't even think about supporting that many ESX hosts and VMs with anything else.
 
Joined
Aug 25, 2014
Messages
89
I have been using mirrors for both cache and logs.

I have been using what I believe to be mirror & stripe on the ten 4TB hard drives.

And yes, I am using iSCSI. Therefore I believe you are saying I need to upgrade the RAM from 128GB to 256GB or so? And this is because I potentially have 120 to 240GB of SSD? Should I consider buying smaller 50GB ultra-high-performance SSDs, and would that be a better way to go? Although I would still only have a 2.5:1 ratio. My guess is you and the group are going to say upgrade to 256GB RAM and drop the SSD size to 50GB?
[Screenshot: FreeNAS.png - share configuration]

The picture above is how I had my share configured when I tried changing the Extent to LUN RPM = 15,000RPM. Since then I have gone back in and extended my share pool with a pair of mirrored 120GB SSDs for cache.

The geek in me wonders if there is a way to use the command line to change the LUN RPM to a number above 15,000, and what the results would look like.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Okay, time to trot out my old post again:

The rule of thumb for L2ARC is around a 1:4 or 1:5 ratio of RAM to L2ARC.

Each entry in the L2ARC index uses roughly 180 bytes, and iSCSI works on a 4KB recordsize. So a 120GB L2ARC, completely filled with 4KB records, would consume:

120GB = 125,829,120KB; divide by 4KB = 31,457,280 records; times 180 bytes = 5,662,310,400 bytes, or about 5400MB of RAM.

Assuming your max ARC is the default of 7/8ths your RAM, you've got 14336MB available. That should work, right?

Well, not quite. The L2ARC index is considered metadata, which by default is capped to 1/4 of your total ARC size, or 3584MB. And we're trying to stuff 5400MB in there. Whoops.

So let's assume the 3584MB limit. Reversing the math above, you get a target L2ARC size of about 79.5GB, but realistically less because there's other metadata that will want to live in RAM as well.

Hey, wait a second.

16GB (of RAM) * 5 = 80GB (of L2ARC)

Well, would you look at that. ;)

Following the maths there, your "suggested L2ARC upper bound" is (128GB * 5) = 640GB. A more conservative one would be the 1:4 ratio of 512GB, and you're still under that mark at 480GB. With 480GB a full L2ARC index will consume ~21GB of RAM, and your default tunables will give you an arc_meta_max of 28GB.
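
(Sanity-checking that in the shell, using the same assumed 180-byte header and 4KB record size:)

```
# Back-of-envelope L2ARC index cost, mirroring the math above
l2arc_gb=480      # candidate L2ARC size
rec_kb=4          # iSCSI record size assumed in this thread
hdr_bytes=180     # approximate bytes of RAM per L2ARC entry
records=$(( l2arc_gb * 1024 * 1024 / rec_kb ))
echo "$(( records * hdr_bytes / 1024 / 1024 )) MB of ARC metadata"
# prints: 21600 MB of ARC metadata (the ~21GB figure above)
```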

You may also want to tune the L2ARC fill rate or it's going to take damned forever to warm up 480GB.
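
(On FreeBSD of this vintage the fill rate is a set of sysctl tunables, defaulting to roughly 8MB/s. A hedged example; the values are illustrative, not a recommendation:)

```
# Raise the L2ARC fill rate (set these as tunables in the GUI so they persist)
sysctl vfs.zfs.l2arc_write_max=67108864     # 64MB/s steady-state fill
sysctl vfs.zfs.l2arc_write_boost=134217728  # extra headroom while ARC is warming
sysctl vfs.zfs.l2arc_noprefetch=0           # let prefetched/streaming reads in
```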

What SSDs are you using?

Edit time.

Changing the "RPM" value to something other than the expected 7200/10K/15K/SSD might just cause the connected OS to see the LUN as "unknown" or "SSD." For a Windows OS, set it as an SSD. For ESXi, set it to the RPM of your underlying disks. It is interesting to see the performance change between the 7200 and 15K settings, though; perhaps ESXi sets different queue lengths or similar.
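
I believe the reported value comes from the SCSI Block Device Characteristics VPD page, and FreeNAS 9.3's kernel target (CTL) should expose it as a per-LUN option. A hedged sketch of what that might look like in /etc/ctl.conf (FreeNAS generates this file, so hand edits get overwritten on reload; the target and zvol names are made up):

```
# /etc/ctl.conf fragment (illustrative only; FreeNAS 9.3 regenerates this file,
# so a hand edit is a temporary experiment at best)
target iqn.2005-10.org.freenas.ctl:goldtarget {
        lun 0 {
                path /dev/zvol/Gold/esxi-lun0
                # per ctl.conf(5): 0 = not reported, 1 = non-rotating (SSD),
                # larger values = rotation rate in RPM
                option rpm "15000"
        }
}
```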

Edit II: The Revenge

"My guess is you and the group are going to say upgrade to 256GB RAM and drop the SSD size to 50GB?"

Honestly I'd say upgrade to 256GB and keep the SSDs; see signature. ;)

But the actual performance answer depends on your dataset size/access patterns. Only you can answer that.
 
Joined
Aug 25, 2014
Messages
89
Not sure what parameters you want for the zpool status code blocks, so here is a simple zpool status run via PuTTY from my PC on my FreeNAS box.

  pool: Gold
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        Gold                                            ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/2bed4ad7-f9c4-11e4-b745-0cc47a18b26c  ONLINE       0     0     0
            gptid/2c5dc022-f9c4-11e4-b745-0cc47a18b26c  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/2cca0cf3-f9c4-11e4-b745-0cc47a18b26c  ONLINE       0     0     0
            gptid/2d38b5c6-f9c4-11e4-b745-0cc47a18b26c  ONLINE       0     0     0
          mirror-2                                      ONLINE       0     0     0
            gptid/2da86807-f9c4-11e4-b745-0cc47a18b26c  ONLINE       0     0     0
            gptid/2e16eb90-f9c4-11e4-b745-0cc47a18b26c  ONLINE       0     0     0
          mirror-3                                      ONLINE       0     0     0
            gptid/2e86bba4-f9c4-11e4-b745-0cc47a18b26c  ONLINE       0     0     0
            gptid/2ef582fd-f9c4-11e4-b745-0cc47a18b26c  ONLINE       0     0     0
          mirror-4                                      ONLINE       0     0     0
            gptid/2f6c3892-f9c4-11e4-b745-0cc47a18b26c  ONLINE       0     0     0
            gptid/2fda6767-f9c4-11e4-b745-0cc47a18b26c  ONLINE       0     0     0
        logs
          mirror-5                                      ONLINE       0     0     0
            gptid/30217b2a-f9c4-11e4-b745-0cc47a18b26c  ONLINE       0     0     0
            gptid/305bbf84-f9c4-11e4-b745-0cc47a18b26c  ONLINE       0     0     0
        cache
          gptid/aff175b5-fa8f-11e4-b745-0cc47a18b26c    ONLINE       0     0     0
          gptid/b025582c-fa8f-11e4-b745-0cc47a18b26c    ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0h4m with 0 errors on Thu May 14 03:50:00 2015
config:

        NAME                                            STATE     READ WRITE CKSUM
        freenas-boot                                    ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/b7fdc58c-de41-11e4-a260-0cc47a18b26c  ONLINE       0     0     0
            gptid/b9305544-de41-11e4-a260-0cc47a18b26c  ONLINE       0     0     0

errors: No known data errors
[root@freenas] ~#
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Can you enclose the output of zpool status in Code blocks?
[Screenshot: upload_2015-5-15_13-52-54.png]
 
Joined
Aug 25, 2014
Messages
89
Thanks everyone for all the good information.
If I understand everyone correctly, I need 4 to 5 times my RAM in SSD cache?
Therefore, if I have 128GB RAM, times the 7/8 rule, then times 4 (or 5):

128GB x 0.875 = 112GB; 112GB x 4 = 448GB, or 112GB x 5 = 560GB

So I need SSD resources between 448GB and 560GB. I am not currently using eMLC SSDs, but since I will be needing new SSDs, I presume eMLC is the way to go? (this is a question)

Quick question that is probably a moot point: does the cache RAM on the SAS HDs get used, and does it help FreeNAS?

I was asked last Friday what SSDs I was using, so here is my answer for what was on board last week:
Three Micron M500 6Gb/s (120GB)
One Other World Computing Mercury Extreme Pro 6Gb/s (120GB)

What brand would the group recommend for an enterprise multi-level cell (eMLC) SSD?

Also, does anyone see a problem with striping a 400GB eMLC SSD and a 200GB eMLC SSD to get me to 600GB? Is there any problem if I have a few too many gigabytes in my L2ARC cache?

I am getting close to production, and it is time to add a bunch more new SAS HDDs. When delivering iSCSI to four ESXi host servers, is it the consensus of the group that two 10- or 12-drive zpools would be far better than one large 20- or 24-drive zpool? (this is an important question also)
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Regarding the 1:4 or 1:5 rule - that's the maximum amount of L2ARC you should have. Because L2ARC is striped, you can always start with less and add more as you see fit. I'd start with the three Micron (Crucial?) M500s as L2ARC and establish a baseline for performance before deciding to push it higher.
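
(Mechanically that is easy, since cache devices can be attached and detached at any time; device names below are placeholders:)

```
# L2ARC devices stripe, so mismatched sizes are fine and more can be added later
zpool add Gold cache da24 da25
# and removed again without any risk to pool data
zpool remove Gold da24
```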

Regular MLC is fine for L2ARC because there's no data in peril - if a drive dies, the data just gets read from the pool. For SLOG devices, though, you'll want eMLC or SLC for the higher write endurance. The Intel S3700/S3710 is a good choice if you need to stay SATA/SAS; otherwise look at the P3700 for a PCIe SSD. SLOG performance is affected most by latency, and the Intel units are very low, especially the P3700.
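
(One rough way to see what a given SLOG device buys you, from the FreeNAS shell: force sync writes on a scratch dataset and compare against the async baseline. A sketch with made-up dataset names:)

```
# Every write to this dataset must commit through the ZIL/SLOG
zfs create -o compression=off -o sync=always Gold/synctest
dd if=/dev/zero of=/mnt/Gold/synctest/ddfile bs=128k count=8192   # ~1GB of sync writes
zfs set sync=standard Gold/synctest   # rerun the dd for the async comparison
zfs destroy Gold/synctest
```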

The small amount of cache on the SAS drives will get used for reads, but it is moot: anything read frequently enough to stay in 64MB of drive cache will already be in ARC and served from RAM anyway.

And the final answer: pool performance scales with vdev count. More drives are better, and the extra space will also let you fit more on there before pool fragmentation starts to rear its ugly head. Go with the single 20-24 drive pool; just make sure you're using mirrors the entire way.
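
(For reference, the layout being suggested - one pool of striped two-way mirrors - looks like this from the command line. Device names are placeholders; the FreeNAS GUI builds the same structure using gptids:)

```
# One pool, ten 2-way mirror vdevs (extend the pattern for 24 drives)
zpool create Gold \
  mirror da0 da1   mirror da2 da3   mirror da4 da5 \
  mirror da6 da7   mirror da8 da9   mirror da10 da11 \
  mirror da12 da13 mirror da14 da15 mirror da16 da17 \
  mirror da18 da19
```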
 
Joined
Aug 25, 2014
Messages
89
[Screenshot: ExtentError.png - Extent "Device" field error]
Extent issue: I just deleted my LUN and shut the FreeNAS server down so I could pull out my six assorted SSDs and put in four 240GB eMLC SSDs for cache and two 120GB eMLC SSDs for logs. After the server came back up, I set up my LUN in Volume Manager, fine-tuned my dataset as usual, then moved over to Sharing/iSCSI. Everything looks good up until I get to setting up the Extent, where I get an error saying "Device: This field is required" and there is nothing in the drop-down. I don't actually remember putting something in that field before. Any ideas?
 