Recommended ZFS Layout and FreeNAS Setup


SRA
Dabbler · Joined Jun 16, 2017 · Messages: 11
Hi guys,

I am new to FreeNAS and need to set up a storage server for my team. I have set up FreeNAS, but I want to confirm that this is the best configuration for performance and resiliency. I inherited the hardware from someone who was supposed to set up the storage, but the job has landed on me.

2 x HP ProLiant G9 servers with 256 GB memory each
2 x SuperMicro chassis, each connected to one ProLiant via a SAS HBA
Each chassis has: 17 x 1.8 TB 12 Gb/s SAS drives and 17 x 400 GB SAS SSDs
Each ProLiant has 2 x 10 GbE connected to switches in an LACP lagg.

Requirements:
The storage servers need to:
- serve an iSCSI volume to ESXi hosts (to support linked clones)
- serve NFS shares for other Linux servers
- be backed up fairly frequently

Current setup (a CLI sketch of the SAS pool layout follows below):
- Two volumes are set up on each ProLiant: SSDVolume and SASVolume
- SASVolume: 2 x RAIDZ3 vdevs of 8 x 1.8 TB SAS drives each, plus 1 hot spare and 2 x 400 GB SSDs as a mirrored SLOG
- SSDVolume: 2 x RAIDZ2 vdevs of 6 x 400 GB SSDs each, plus 1 hot spare and 2 x 400 GB SSDs as a mirrored SLOG
- StorageA serves iSCSI to ESXi, and its zvol is replicated to StorageB every 10 minutes
- StorageB serves the NFS shares and is replicated to StorageA every 10 minutes
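
For reference, here is roughly what the SASVolume layout above would look like if built from the CLI (device names are made up; FreeNAS would normally do this through the GUI and use gptids rather than raw device names):

Code:
# Two 8-wide RAIDZ3 vdevs, one hot spare, mirrored SLOG.
# da0..da18 are placeholder device names.
zpool create SASVolume \
  raidz3 da0 da1 da2 da3 da4 da5 da6 da7 \
  raidz3 da8 da9 da10 da11 da12 da13 da14 da15 \
  spare da16 \
  log mirror da17 da18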

I did read that RAIDZ3 might not be the best-performing option, but I like that 3 drives in an 8-drive vdev can die without affecting the pool. If I configured 2-way mirror vdevs instead, losing both drives of the same mirror would take out the whole pool.
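
For a rough space comparison (my arithmetic, ignoring ZFS overhead): 2 x 8-wide RAIDZ3 yields 2 * 5 * 1.8 TB = 18 TB usable from 16 drives (62.5%), 2-way mirrors of the same 16 drives yield 8 * 1.8 TB = 14.4 TB (50%), and 3-way mirrors would drop that to 5 * 1.8 TB = 9 TB from 15 drives (~33%).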

Currently I am getting around 29K IOPS from a VM while performing random read/write tests with fio, as explained in https://www.binarylane.com.au/support/solutions/articles/1000055889-how-to-benchmark-disk-i-o
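
The test was something like this (illustrative; the guide's exact flags and file paths may differ):

Code:
# 4k random read/write (75% reads) against a 4 GB test file inside the VM.
fio --name=randrw --filename=/mnt/test/fio.dat --size=4G \
    --ioengine=libaio --direct=1 --rw=randrw --rwmixread=75 \
    --bs=4k --iodepth=64 --runtime=60 --time_based --group_reporting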

Is this good enough?

Thanks
 

scrappy
Patron · Joined Mar 16, 2017 · Messages: 347
Looks good enough to me!
 

Spearfoot
He of the long foot · Moderator · Joined May 13, 2015 · Messages: 2,478
SRA said: (full post quoted above)
This setup may very well be 'good enough'. I use RAIDZ2-based NFS storage for virtual machines with satisfactory results, but I use mine sporadically, while it sounds as though you'll be working your systems pretty hard.

On the other hand, it may not be good enough as time goes by... Read @jgreco's posts in this thread. He makes the point that iSCSI performance can degrade markedly over time when it is based on a RAIDZ-n pool, even though it may seem fine when first configured.
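
One way to keep an eye on this is to watch allocation and fragmentation as the pool ages (pool name taken from your post):

Code:
# FRAG and CAP creeping up over time is the warning sign for block storage.
zpool list -o name,size,allocated,free,fragmentation,capacity SASVolume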

Mirrors really are the best topology to use when providing block storage. If redundancy is a concern, ZFS offers 3-way mirrors with the caveat that (as @jgreco points out) you get horrible space efficiency -- 33 1/3%!
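
A striped-mirror pool for block storage would look something like this from the CLI (pool and device names are hypothetical; add a third disk to each mirror vdev if you want the extra redundancy, at that ~33% space efficiency):

Code:
# Striped 2-way mirrors with a hot spare and mirrored SLOG.
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  spare da6 \
  log mirror da7 da8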

Also, I see that you're using mirrored SLOG devices, which is good! What type of SSDs are you using? These devices need power-loss protection, low latency, and fast write speeds.
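
You can sanity-check the log devices and whether sync writes will actually use them like so (pool name from your post):

Code:
# The 'logs' section of the output should show the SSD mirror as ONLINE:
zpool status SASVolume
# The SLOG is only used for synchronous writes; 'standard' or 'always' will use it:
zfs get sync SASVolume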

Good luck!
 