Greetings!
I've been exploring FreeNAS for a couple of weeks now, and I wanted to share some results and ask for some feedback.
I obtained a pair of NetApp disk shelves on the cheap and decided to build myself a filer equivalent. Originally I tried to pass the HBA through to a VM on my ESXi host, but it didn't work on 6.0u2 or 6.5. (Interestingly, it did work on 5.5, but I don't want to roll back my environment.) When I RDM'd the disks to the VM it worked, and I saw very nice performance. (Credit to FreeNAS: the realized throughput was greater than the theoretical 4 Gbit/s multipath maximum, so there must be some caching in memory.) Unfortunately, RDM has some known issues, and I got a lot of read/write and checksum errors on the pool, so I decided to look into a dedicated hardware build. (After testing, of course.)
The hardware:
CPU: Intel Xeon E5-2640
Motherboard: HP-Z420
RAM: 4x8 GB Hynix ECC DDR3
OS drive: 2 x Kingston USB 2.0 8GB Datatraveler
HBA: 2 x QLogic QLE2462
Storage: 2 x NetApp DS14MK2 AT (14 x 1 TB WD Enterprise SATA each)
Traffic NIC: Intel X520 dual-port 10 GbE
Management NIC: Onboard 1 GbE
PSU: 630 Watt ATX (Rosewill)
UPS: TrippLite 1500 VA
Use Cases:
- iSCSI for the main ESXi host (and potentially others)
- file server (will be SMB, though I may front this with a Windows VM, since FreeNAS SMB performance was below my expectations)
- backups
Thoughts:
I don't want the FreeNAS box to be a hardware or software single point of failure. There is good redundancy with the disk shelves, HBAs, OS drives, etc., but I could use some feedback. I may ultimately elect to use one of the shelves in a separate hardware system. I also need to learn more about replication; if there are any good links, please post them!
With multipathing and mirrors split between the two shelves, there is a theoretical maximum of 8 Gbit/s throughput. Is there a good way to estimate how many vdevs I should use to get close to that? I may want to put the remaining disks in another pool with lower performance but higher storage efficiency.
(Example: 1 pool striping 6 mirrors (12 disks, 6 TB usable), 1 pool striping 2 x 8-disk RAIDZ2 vdevs (16 disks, 12 TB usable).)
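As a rough back-of-the-envelope for the vdev-count question, here is a sketch I've been using. It assumes ~100 MB/s sustained per 1 TB SATA disk (an assumption; benchmark your own drives) and that a mirror vdev delivers roughly 2x single-disk throughput on reads and 1x on writes:

```python
import math

def vdevs_to_saturate(link_gbps, disk_mb_s, per_vdev_factor):
    """Estimate how many vdevs are needed to fill the link.

    link_gbps       -- usable link bandwidth in Gbit/s (8 here)
    disk_mb_s       -- assumed sustained throughput of one disk in MB/s
    per_vdev_factor -- ~2 for mirror reads, ~1 for mirror writes
    """
    link_mb_s = link_gbps * 1000 / 8  # Gbit/s -> MB/s (decimal units)
    return math.ceil(link_mb_s / (disk_mb_s * per_vdev_factor))

# Hypothetical numbers: 8 Gbit/s link, 100 MB/s per disk.
print(vdevs_to_saturate(8, 100, 2))  # reads:  5 mirror vdevs
print(vdevs_to_saturate(8, 100, 1))  # writes: 10 mirror vdevs
```

If those assumptions hold, six mirrors would roughly saturate 8 Gbit/s on sequential reads, while sequential writes would top out near half the link; real results will vary with ARC hits, record size, and queue depth.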
Please share any feedback or recommendations!