SRA
Dabbler
Joined: Jun 16, 2017
Messages: 11
Hi guys,
I am new to FreeNAS and need to set up a storage server for my team. I have set up FN, but I want to confirm whether this is the best configuration for performance and resiliency. I inherited the hardware from someone who was supposed to set up the storage, but it's on me now.
2 x HP ProLiant G9 servers with 256 GB memory each
2 x SuperMicro chassis, each connected to one ProLiant server via a SAS HBA
Each chassis has: 17 x 1.8 TB 12 Gbps SAS drives and 17 x 400 GB SAS SSDs
Each ProLiant has 2 x 10 GbE connected to the switches in an LACP LAGG.
Requirements:
The storage servers need to provide
- iSCSI volumes to ESXi hosts, with support for Linked Clones
- NFS shares for other Linux servers
- Fairly frequent backups
Current Setup
- 2 volumes are set up on each ProLiant: SSDVolume and SASVolume
- SASVolume is 2 x raidz3 vdevs of 8 x 1.8 TB SAS drives each + 1 hot spare + 2 x 400 GB SSDs as a mirrored log
- SSDVolume is 2 x raidz2 vdevs of 6 x 400 GB SSDs each + 1 hot spare + 2 x 400 GB SSDs as a mirrored log
- StorageA serves iSCSI to ESXi, and its zvol is remote-replicated to StorageB every 10 minutes
- StorageB serves the NFS shares and is remote-replicated to StorageA every 10 minutes
I did read that raidz3 might not be the best performer, but I like that 3 drives in an 8-drive vdev can die without affecting the pool. If I configure 2-drive mirror vdevs instead, and 2 of the 3 lost drives happen to be in the same vdev, the whole pool is lost.
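To put a rough number on that mirror risk, here is a small sketch I worked through. It assumes a hypothetical layout of 16 of the SAS data drives as 8 two-way mirrors (not my actual pool) and simply counts how many 3-drive failure combinations would include both drives of some mirror:

```python
from itertools import combinations

# Hypothetical layout: 16 SAS drives arranged as 8 two-way mirror vdevs.
# Each drive is identified by (vdev index, position within the vdev).
drives = [(vdev, pos) for vdev in range(8) for pos in range(2)]

def pool_survives(failed):
    """A 2-way mirror pool survives only if no vdev lost both drives,
    i.e. all failed drives sit in distinct vdevs."""
    vdevs = [v for v, _ in failed]
    return len(set(vdevs)) == len(vdevs)

trials = list(combinations(drives, 3))          # every way 3 drives can fail
losses = sum(not pool_survives(f) for f in trials)
print(f"{losses}/{len(trials)} = {losses/len(trials):.0%} "
      "of 3-drive failures kill the mirror pool")
# 112/560 = 20% of 3-drive failures kill the mirror pool
```

So with mirrors, roughly 1 in 5 random 3-drive failures would be fatal, whereas the 8-drive raidz3 vdevs shrug off any 3 failures within a vdev.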
Currently I am getting around 29K IOPS from a VM while performing random read/write with fio, as explained in https://www.binarylane.com.au/support/solutions/articles/1000055889-how-to-benchmark-disk-i-o
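For reference, the random read/write test from that article boils down to a fio job roughly like the one below. The exact sizes, mix, and queue depth here are illustrative guesses, not a transcript of the article:

```
; illustrative 4k random read/write fio job (parameters are assumptions)
[randrw-test]
ioengine=libaio
rw=randrw
rwmixread=75
bs=4k
direct=1
size=4g
iodepth=64
runtime=60
time_based
```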
Is this good enough?
Thanks