a-a-ron (Dabbler) · Joined Jun 11, 2015 · Messages: 11
I need help making this better... I really want to increase the performance of my 3 pools. Each has a slightly different use case.
Lightning (SSD Pool) is my VMware datastore for my 2 ESXi hypervisors. I am currently running 3x mirror vdevs, and am thinking of going with 2 three-disk raidz vdevs instead (lower read/write performance?). I want to maximize IOPS; also, I use NFS for presenting it to VMware.
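For reference, the two layouts being weighed would look roughly like this (pool and device names are placeholders, not from the actual system):

```shell
# Current layout: 3 two-way mirrors -> 3 vdevs worth of random IOPS.
# Generally the better fit for a VM datastore over sync NFS.
zpool create lightning \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5

# Proposed alternative: 2 three-disk raidz1 vdevs -> more usable space,
# but only 2 vdevs of random IOPS plus raidz parity overhead, so small
# random writes will generally be slower.
zpool create lightning \
  raidz1 da0 da1 da2 \
  raidz1 da3 da4 da5
```

Random IOPS scale roughly with vdev count, so going from 3 vdevs down to 2 would likely move things in the wrong direction for this workload.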
Thunder (HDD Pool w/ Log and Cache SSDs) is my general storage: movies, music, and a place for dropping the kids' streaming videos.
Glacier (HDD Pool) is slow, bulk storage. It sits behind the SAS expander; I made the mistake when buying it of thinking I could connect 2 cables to it and get maximum performance. Turns out that's not how SAS works ;) So... 12 drives sharing a single 4-lane link running at SATA1 speeds... oh well. Bulk storage is fine.
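As a back-of-the-envelope sketch of why that's still fine for bulk storage (assuming the wide link negotiates 4 lanes at the SATA1 rate of 1.5 Gb/s with 8b/10b encoding; the lane count and rates here are my assumptions, not confirmed in the post):

```python
# Rough shared-bandwidth estimate for 12 drives behind one SAS wide
# link negotiating at SATA1 speeds (assumed: 4 lanes x 1.5 Gb/s).
LANES = 4
LINE_RATE_GBPS = 1.5          # assumed SATA1 line rate per lane
ENCODING_EFFICIENCY = 0.8     # 8b/10b encoding overhead
DRIVES = 12

usable_mbps = LANES * LINE_RATE_GBPS * 1000 * ENCODING_EFFICIENCY / 8  # MB/s
per_drive_mbps = usable_mbps / DRIVES

print(f"link: {usable_mbps:.0f} MB/s, per drive: {per_drive_mbps:.1f} MB/s")
```

Roughly 600 MB/s shared, or about 50 MB/s per drive if all 12 stream at once, which is plenty for a cold-storage pool.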
Hardware:
Supermicro X10SRL-F
CPU: 1x E5-2675 v3 @ 1.80GHz (16c32t)
RAM: 128G DDR4 2133
Eth: Mellanox ConnectX-2 10G fiber
HBA: 2x LSI9211-8i (IBM M1015 in IT mode); 1x LSI9207-8e
CASE: Supermicro 933T-R760B (15x3.5)
EXT-SAS: Supermicro CSE-826 (12x3.5)
Storage:
Lightning (SSD Pool)
6x Crucial 500GB SSD (MX500)
1 pool, 3 mirror vdevs
Thunder (HDD Pool w/ Log and Cache SSDs)
4x WD Red 10TB
1x OCX 64GB SSD (LOG Drive)
1x WD 250GB SSD (L2Arc Cache)
1 pool, 1 vdev, 1 log drive, 1 cache drive
Glacier (HDD Pool)
4x Hitachi 4TB
8x WD Red 3TB
1 pool, 3 vdevs in raidz