FreeNAS Filer Head

Status
Not open for further replies.

Daniel-A

Dabbler
Joined: Jan 17, 2017
Messages: 22
Greetings!

I've been exploring FreeNAS for a couple of weeks now, and I wanted to share some results and ask for some feedback.

I obtained a pair of NetApp disk shelves on the cheap and decided to build myself a filer equivalent. Originally I tried to pass the HBA through to a VM on my ESXi host, but that didn't work on 6.0 U2 or 6.5. (Interestingly, it did work on 5.5, but I don't want to roll back my environment.) When I RDM'd the disks to the VM it worked, and I saw very nice performance (credit to FreeNAS: the realized throughput was greater than the theoretical 4 Gb/s multipath maximum, presumably thanks to caching in memory). Unfortunately there are known issues with RDM, and I hit a lot of read/write and checksum errors on the pool, so I decided to look into a dedicated hardware build (after testing, of course).

The hardware:

CPU: Intel Xeon E5-2640
Motherboard: HP Z420
RAM: 4x8 GB Hynix ECC DDR3

OS drive: 2 x Kingston USB 2.0 8GB Datatraveler
HBA: 2 x QLogic QLE2462
Storage: 2 x NetApp DS14MK2 AT (14 x 1 TB WD Enterprise SATA each)
Traffic NIC: Intel 520 dual 10 GbE
Management NIC: Onboard 1 GbE

PSU: 630 Watt ATX (Rosewill)
UPS: TrippLite 1500 VA

Use Cases:
-iSCSI for the main ESXi host (and potentially others)
-file server (will be SMB, though I may front it with a Windows VM, as FreeNAS SMB performance was below my expectations)
-backups

Thoughts:
I don't want the FreeNAS box to be a hardware or software single point of failure. There is good redundancy with the disk shelves, HBAs, OS drives, etc., but I could use some feedback. I may ultimately elect to use one of the shelves on a separate hardware system. I also need to learn more about replication; if there are any good links, please post them!

With multipathing and mirrors split between the two shelves there is a theoretical max of 8 Gb/s throughput. Is there a good way to estimate how many vdevs I need to get close to that? I may want to put the remainder in another pool with lower performance but higher storage efficiency.
(Example: one pool with 6 striped mirrors (12 disks, 6 TB usable), and one pool with 2 striped 8-disk RAIDZ2 vdevs (16 disks, 12 TB usable); a rough sketch of this layout follows below.)
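For illustration only, here's roughly how that split could be laid out from the command line. The pool names and device names (da0 through da27) are hypothetical; on FreeNAS you'd normally build the pools through the GUI (or against gmultipath labels) rather than raw zpool commands:
Code:
# Hypothetical pool and device names; FreeNAS would normally do this via the GUI.
# Pool 1: 6 striped mirrors (12 disks, ~6 TB usable) for the VM datastore
zpool create vmpool \
  mirror da0 da1 mirror da2 da3 mirror da4 da5 \
  mirror da6 da7 mirror da8 da9 mirror da10 da11

# Pool 2: 2 x 8-disk RAIDZ2 (16 disks, ~12 TB usable) for bulk file storage
zpool create bulkpool \
  raidz2 da12 da13 da14 da15 da16 da17 da18 da19 \
  raidz2 da20 da21 da22 da23 da24 da25 da26 da27

As a back-of-envelope estimate, each mirror vdev can sustain roughly one disk's worth of sequential write throughput (on the order of 1 Gb/s for these drives), so something like six mirror vdevs is in the right ballpark for an 8 Gb/s path; random VM I/O will be limited by IOPS well before bandwidth.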


Please share any feedback or recommendations!
 

Daniel-A

Dabbler
Joined: Jan 17, 2017
Messages: 22
That seems like an awfully low wattage PSU for that amount of hardware.

Whoops! It's actually 630 watts! I've edited the original post, thanks!

I think it would have been okay even at 430 W, though, since there isn't too much hardware in the box itself and the shelves have their own redundant PSUs. I know better than to try to power 28 drives from one consumer PSU!
 

tvsjr

Guru
Joined: Aug 29, 2015
Messages: 959
You should search and review the myriad of threads on building a VM store. You'll need an SLOG device (this needs to be a very fast, high endurance SSD with power loss protection). More memory would really be nice. You'll need to keep your VM pool below 50% utilization.

FN uses Samba4 for SMB functionality, which is plenty fast. However, you may need to do a bit of tuning.

I have a similar configuration, although it's all in one box. 12 drives in striped mirror for my ESXi filestore, 6 drives (with room to expand) in RAIDZ2 for my bulk file storage. Your pool configuration will depend on what your storage needs are, which you haven't listed.
 

Daniel-A

Dabbler
Joined: Jan 17, 2017
Messages: 22
You should search and review the myriad of threads on building a VM store. You'll need an SLOG device (this needs to be a very fast, high endurance SSD with power loss protection). More memory would really be nice. You'll need to keep your VM pool below 50% utilization.

FN uses Samba4 for SMB functionality, which is plenty fast. However, you may need to do a bit of tuning.

I have a similar configuration, although it's all in one box. 12 drives in striped mirror for my ESXi filestore, 6 drives (with room to expand) in RAIDZ2 for my bulk file storage. Your pool configuration will depend on what your storage needs are, which you haven't listed.

Regarding the SLOG device, my understanding is that it's only beneficial for NFS shares, and my use case is iSCSI. Is that incorrect? I usually have fewer than a dozen active VMs, and I can configure multiple pools to keep utilization down for any development sprawl.

I must need to tune, then; I wasn't able to get above 10 MB/s to the SMB share, while transfers to a Windows VM hosted on the iSCSI datastore ran at over 150 MB/s. Can you recommend a good link to educate me on SMB tuning?

My storage needs are flexible, my active VMs are less than 1.5 TB, and my file share is less than 6 TB.
 

tvsjr

Guru
Joined: Aug 29, 2015
Messages: 959
You need a SLOG for any VM datastore. With iSCSI, you also need to set "sync=always". Read jgreco's treatise here: https://forums.freenas.org/index.php?threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/
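A minimal sketch of what that looks like from the shell, assuming a hypothetical zvol name for the iSCSI extent (sync can also be set per dataset in the FreeNAS GUI):
Code:
# "vmpool/iscsi-extent" is a hypothetical zvol backing the iSCSI target
zfs set sync=always vmpool/iscsi-extent

# Confirm the setting took effect
zfs get sync vmpool/iscsi-extent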

10MBps sounds like traffic over a 100Mbps network. Are you sure you don't have network issues? I tried copying a file from my FN and it's cooking along at 108MB/sec, as expected on a single gigabit link. I've done nothing special to "tune" SMB, as it's always worked fine for me. I do have the recommended additions in the SMB service config:
Code:
ea support = no
store dos attributes = no
map archive = no
map hidden = no
map readonly = no
map system = no


Post the details of your SMB config and share setup.
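A quick way to rule out the network itself is an iperf run between the client and the FreeNAS box (assuming iperf is available on your build, or using any two hosts on the same segment; the hostname below is a placeholder):
Code:
# On the FreeNAS box
iperf -s

# On the Windows client (or any other host on the same segment)
iperf -c freenas.local -t 30

A healthy gigabit link should report somewhere around 940 Mbits/sec; a result near 100 Mbits/sec would point at link negotiation or cabling rather than SMB.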
 

Daniel-A

Dabbler
Joined: Jan 17, 2017
Messages: 22
You need a SLOG for any VM datastore. With iSCSI, you also need to set "sync=always". Read jgreco's treatise here: https://forums.freenas.org/index.php?threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/

10MBps sounds like traffic over a 100Mbps network. Are you sure you don't have network issues? I tried copying a file from my FN and it's cooking along at 108MB/sec, as expected on a single gigabit link. I've done nothing special to "tune" SMB, as it's always worked fine for me. I do have the recommended additions in the SMB service config:
Code:
ea support = no
store dos attributes = no
map archive = no
map hidden = no
map readonly = no
map system = no


Post the details of your SMB config and share setup.

Great, thanks for the link; I skimmed that one over, but I'll read through it in detail. My understanding was that there is always a SLOG, it is just kept in the pool. Since I have quite high performance on my pool and don't use NFS, I had planned to wait a while before implementing SLOG and L2ARC hardware. I am still unsure about why iSCSI would require sync=always, but hopefully this link clarifies.

No, the switch is a 3750G at 1000 Mbps.

The Windows SMB setup was a 2012 R2 VM on the iSCSI datastore; sync writes were not enabled. For FN I don't have the SMB config anymore, but it was the default. I did not have those service config entries, so let me try again with them.
 

Robert Trevellyan

Pony Wrangler
Joined: May 16, 2014
Messages: 3,778
My understanding was that there is always a SLOG, it is just kept in the pool.
Every pool has a ZFS Intent Log (ZIL). By default, it lives on the data disks. Some workloads benefit from putting the ZIL on a Separate ZFS Intent Log device (SLOG).
I am still unsure about why iSCSI would require sync=always, but hopefully this link clarifies.
When you host a virtual machine and that VM writes to storage, it needs acknowledgement that the write has actually been committed in order to remain consistent. By setting sync=always, you ensure that no write is acknowledged until it is actually present on non-volatile storage. Without sync=always, a write could be acknowledged while it is still sitting in volatile storage (RAM).

Without a high-performance SLOG, sync=always will kill write performance. Without sync=always, or with sync=always but no power-loss protection on the SLOG, you'll likely end up with a corrupted VM.
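For reference, a rough sketch of adding a mirrored SLOG later, assuming two hypothetical power-loss-protected SSDs (in practice you'd attach it through the FreeNAS volume manager):
Code:
# da28/da29 are hypothetical PLP SSDs; a small device is plenty, since the
# SLOG only ever holds a few seconds' worth of in-flight writes.
zpool add vmpool log mirror da28 da29

# Verify the log vdev shows up
zpool status vmpool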
 