Need help restructuring pools and NAS to increase performance

a-a-ron

Dabbler
Joined
Jun 11, 2015
Messages
11
I need help making this better... I really want to increase the performance of my three pools; each has a slightly different use case.

Lightning (SSD Pool) is my VMware datastore for my 2 ESX hypervisors. I am currently running 3x mirror vdevs and am thinking of going with 2x three-disk raidz vdevs instead (lower read/write performance?). I want to maximize IOPS; I present the datastore to VMware over NFS.
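For reference, this is roughly what the two layouts look like at pool creation time (disk names are placeholders, not my actual devices):

Code:
# Current layout: three 2-way mirror vdevs (6 SSDs)
zpool create lightning \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5

# Layout I'm considering: two 3-disk raidz vdevs (6 SSDs)
zpool create lightning \
  raidz da0 da1 da2 \
  raidz da3 da4 da5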

Thunder (HDD Pool w/ Log and Cache SSDs) is my general storage: movies, music, and a place to drop videos that the kids stream.

Glacier (HDD Pool) is slow, bulk storage. It sits behind the SAS expander; when I bought it I made the mistake of thinking I could connect two cables to it and get maximum performance. Turns out that's not how SAS works ;) So... 12 drives running on a single four-lane link at SATA1 speeds... oh well. Bulk storage is fine.

Hardware:
Supermicro X10SRL-F
CPU: 1x E5-2675 v3 @ 1.80GHz (16c32t)
RAM: 128G DDR4 2133
Eth: Mellanox ConnectX-2 10G fiber
HBA: 2x LSI9211-8i (IBM M1015 in IT mode); 1x LSI9207-8e
CASE: Supermicro 933T-R760B (15x3.5)
EXT-SAS: Supermicro CSE-826 (12x3.5)

Storage:
Lightning (SSD Pool)
6x Crucial 500GB SSD (MX500)
1 pool, 3 mirror vdevs

Thunder (HDD Pool w/ Log and Cache SSDs)
4x WD Red 10TB
1x OCZ 64GB SSD (SLOG device)
1x WD 250GB SSD (L2ARC cache)
1 pool, 1 vdev, 1 log drive, 1 cache drive

Glacier (HDD Pool)
4x Hitachi 4TB
8x WD Red 3TB
1 pool, 3 vdevs in raidz
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Glacier (HDD Pool) is slow, bulk storage. It sits behind the SAS expander; when I bought it I made the mistake of thinking I could connect two cables to it and get maximum performance. Turns out that's not how SAS works ;) So... 12 drives running on a single four-lane link at SATA1 speeds... oh well. Bulk storage is fine.
SATA1 speed for each drive? It might be fine, but 3TB WD Red drives are probably a little faster than that. Still, there is no reason to be stuck at a SATA1 link with a proper SAS expander. What SAS expander are you using?
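If you are not sure, the FreeNAS shell can usually tell you; something like this, with output that will vary depending on your hardware:

Code:
# Lists every device the HBAs see, including the SES enclosure/expander entries
camcontrol devlist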
CPU: 1x E5-2675 v3 @ 1.80GHz (16c32t)
A slow processor is not helping you any; get a higher clock speed. You would have been better served by an X9 generation board paired with a higher-clocked CPU.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I am currently running 3x mirror vdevs and am thinking of going with 2x three-disk raidz vdevs instead (lower read/write performance?). I want to maximize IOPS; I present the datastore to VMware over NFS.
More vdevs give more IOPS; more disks in a vdev do not. If you want the best performance for VM use, it would probably benefit you more to increase the vdev count.
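As a rough sketch of what that looks like, assuming a pool named lightning and two spare SSDs at da6 and da7 (placeholder names, not your actual devices):

Code:
# Adds a fourth mirror vdev; ZFS stripes new writes across all four vdevs
zpool add lightning mirror da6 da7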
Thunder (HDD Pool w/ Log and Cache SSDs) is my general storage: movies, music, and a place to drop videos that the kids stream.
There is absolutely no purpose in having either a SLOG or an L2ARC on a general storage pool, and the drives you are using are too slow to be very useful in those roles anyway.
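If you decide to drop them, both devices can be removed from a live pool; roughly like this, with placeholder pool and device names:

Code:
# Remove the SLOG (log vdev) and the L2ARC (cache vdev); the pool keeps running
zpool remove thunder ada1      # placeholder name for the 64GB log SSD
zpool remove thunder ada2      # placeholder name for the 250GB cache SSD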
 

a-a-ron

Dabbler
Joined
Jun 11, 2015
Messages
11
Thanks for the feedback, much appreciated!

SATA1 speed for each drive? It might be fine, but 3TB WD Red drives are probably a little faster than that. Still, there is no reason to be stuck at a SATA1 link with a proper SAS expander. What SAS expander are you using?
As for the SAS expander, it seems to be an old Quantum DXi6500, which is a rebadged Supermicro.

Quantum DXi6500 (SUPERMICRO CSE-826)
SAS DUAL PORT EXTENDER CARD GT432E002LF
SAS826EL1 BACKPLANE

I think the limit really is the backplane, and upgrading to a multi-channel one (SAS826EL2) is cost prohibitive. If I connect 4 drives, they all run at 6Gbps, 8 drives run at 3Gbps, and all 12 drop down to 1.5Gbps.
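One way to double-check the negotiated link speed per drive (device name is just an example; it varies per system):

Code:
# SMART identity info reports supported vs. current SATA link speed, e.g.
# "SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 1.5 Gb/s)"
smartctl -i /dev/da5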

A slow processor is not helping you any; get a higher clock speed. You would have been better served by an X9 generation board paired with a higher-clocked CPU.
I will take that into account for future upgrades... I bought this before I had the new hypervisors; it was my only server at the time. Fewer cores, faster clocks?

More vdevs give more IOPS; more disks in a vdev do not. If you want the best performance for VM use, it would probably benefit you more to increase the vdev count.
So staying with the 3x mirrored vdevs is about the best I can do?

There is absolutely no purpose in having either a SLOG or an L2ARC on a general storage pool, and the drives you are using are too slow to be very useful in those roles anyway.
Fair enough! This is the info I'm looking for, thank you!
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
If I connect 4 drives, they all run at 6Gbps, 8 drives run at 3Gbps, and all 12 drop down to 1.5Gbps.
On your current backplane? That is not how a normal SAS expander backplane behaves. I have two 24-drive backplanes in my system and all drives link at 6Gbps. Since not every drive is sending data at the same moment, the expander works more like a network switch, and all drives are able to communicate with the SAS controller at full speed. You would only overwhelm the controller's bandwidth if you were using SSDs.
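Rough numbers, assuming a single four-lane wide port at 6Gbps per lane: 4 x 6Gbps is 24Gbps raw, or roughly 2,200-2,400 MB/s usable after protocol overhead. Twelve spinning drives at perhaps 150-180 MB/s each top out around 1,800-2,200 MB/s in the best sequential case, and real random workloads sit far below that, so the wide port is rarely the bottleneck for HDDs.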
Fewer cores, faster clocks?
You still want to have enough cores, but something like this should be plenty:
https://www.ebay.com/itm/SR1A6-Inte...che-2-80-GHz-10-Core-8-GT-s-115W/183625850833
So staying with the 3x mirrored vdevs is about the best I can do?
No. You can add as many vdevs as your hardware can hold, or even add more drive shelves. I have a system with 16 drives in 8 mirror vdevs. The more vdevs, the more IOPS. There are many options, depending on your budget.
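A quick way to sanity-check that I/O really is spreading across all the vdevs (pool name is just an example):

Code:
# Per-vdev bandwidth and operations per second, refreshed every 5 seconds
zpool iostat -v lightning 5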
You might want to look at these threads on performance testing (a rough sketch of the RAM-disk SLOG test follows the links):

Testing the benefits of SLOG using a RAM disk!
https://forums.freenas.org/index.ph...s-of-slog-using-a-ram-disk.56561/#post-396630

Testing the benefits of SLOG
https://forums.freenas.org/index.php?threads/testing-the-benefits-of-slog-using-a-ram-disk.56561

SLOG benchmarking and finding the best SLOG
https://forums.freenas.org/index.ph...-and-finding-the-best-slog.63521/#post-454773
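The RAM-disk test in those threads boils down to something like this; size, pool name, and md unit number are placeholders, and a RAM-backed SLOG is strictly for benchmarking, never for data you care about:

Code:
# Create an 8GB RAM-backed memory disk (prints a unit name such as md0)
mdconfig -a -t swap -s 8g
# Attach it as a SLOG, run the sync-write / NFS benchmark, then tear it down
zpool add thunder log md0
zpool remove thunder md0
mdconfig -d -u 0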
 

a-a-ron

Dabbler
Joined
Jun 11, 2015
Messages
11
On your current backplane? That is not how a normal SAS expander backplane behaves. I have two 24-drive backplanes in my system and all drives link at 6Gbps. Since not every drive is sending data at the same moment, the expander works more like a network switch, and all drives are able to communicate with the SAS controller at full speed. You would only overwhelm the controller's bandwidth if you were using SSDs.
Thanks for forcing me to look into this again... it appears the backplane was in fact SAS/SATA 1. In testing, I can get it to run at SATA3 (6Gbps) with only 4 drives connected, but again it seems to be a backplane issue. I have ordered a newer backplane that fully supports SATA3. Guess I'll have to come up with a new name for that pool ;)

You still want to have enough cores, but something like this should be plenty:
https://www.ebay.com/itm/SR1A6-Inte...che-2-80-GHz-10-Core-8-GT-s-115W/183625850833
Looking into a faster CPU; when I bought this one I was looking at core count rather than IPC, for Plex. Now that I have new hypervisors, I will look for a new CPU with higher IPC.
 