Time for some proper storage.

Status
Not open for further replies.

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
I have been playing with FreeNAS for a few years, but I think it's finally time to build something faster and bigger while keeping the power draw reasonably low. This build will also be done on the super cheap (except for the 12 3TB HGST drives). I know there are much larger builds out there, but this should be more than I need for a long time.

To start, here's the parts list:
  • Supermicro X8SIL-F v1.02
  • i5 560 (will be moving to a Xeon later)
  • 16GB of ECC RAM (won't work as ECC until I get the Xeon, I know)
  • Dell Perc H310 - Fully flashed to the LSI firmware
  • Supermicro CSE-826E1-R800LPB 12 bay 2u chassis
  • QLogic QLE2564 8Gb quad-port Fibre Channel HBA (trying this in target mode for my ESXi cluster)
  • 12 HGST Ultrastar 3TB 7.2k drives (2 mirrored disks per vdev; should be ~14TB usable, rough math after this list)
  • MAYBE just 10 drives and SSD SLOG & L2ARC drives.
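Rough math behind that usable-space guess (the 80% fill target and the TB-to-TiB conversion are just my own rules of thumb, not anything FreeNAS enforces):

```python
# Back-of-the-napkin usable space for the planned pool.
# Assumptions (mine): 12 x 3 TB drives as two-way mirrors, and keeping the
# pool under ~80% full, which is the usual advice for VM storage.
DRIVES = 12
DRIVE_TB = 3.0            # marketing terabytes (10^12 bytes)
MIRROR_WIDTH = 2
FILL_TARGET = 0.80

vdevs = DRIVES // MIRROR_WIDTH                # 6 mirror vdevs
raw_usable_tb = vdevs * DRIVE_TB              # 18 TB before any overhead
usable_tib = raw_usable_tb * 1e12 / 2**40     # ~16.4 TiB as ZFS reports it
practical_tib = usable_tib * FILL_TARGET      # ~13 TiB if I respect the 80% rule

print(f"{vdevs} mirror vdevs -> {raw_usable_tb:.0f} TB raw usable")
print(f"~{usable_tib:.1f} TiB as reported, ~{practical_tib:.1f} TiB at {FILL_TARGET:.0%} fill")
```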

Expect more to come as parts start coming in.
 
Last edited:

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
The X8 gear is getting pretty old... I'd suggest X9 or newer. If you're planning this system to be a VM store, and you want reasonable performance, you *need* an uber-fast, power-loss-protected SSD for SLOG service.

With 16GB of RAM, don't even think about L2ARC.
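If you're wondering why: every record cached in L2ARC needs a header kept in ARC (i.e., RAM), so a big L2ARC actively eats the memory you're already short on. Ballpark sketch only; the per-record overhead and average block size below are rough assumptions and vary with ZFS version and workload:

```python
# Why L2ARC hurts a 16GB box: each record cached on the L2ARC device needs
# a header held in RAM. The per-record overhead and average block size here
# are rough assumptions, not exact figures for any particular ZFS release.
L2ARC_GB = 400                 # hypothetical L2ARC device size
AVG_BLOCK_KB = 16              # small blocks are typical of VM/iSCSI workloads
HEADER_BYTES = 100             # assumed per-record RAM overhead

records = L2ARC_GB * 1024**3 / (AVG_BLOCK_KB * 1024)
ram_gb = records * HEADER_BYTES / 1024**3

print(f"~{records / 1e6:.0f}M cached records -> ~{ram_gb:.1f} GB of RAM just for headers")
```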
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
That's 12 bays, not 24.

Nothing you've said indicates that L2ARC or SLOG would do anything for you.
 
kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
The X8 gear is getting pretty old... I'd suggest X9 or newer. If you're planning this system to be a VM store, and you want reasonable performance, you *need* an uber-fast, power-loss-protected SSD for SLOG service.

With 16GB of RAM, don't even think about L2ARC.

Thanks for the feedback on the L2ARC. As for the SLOG, I have been running lab VMs on 4x 7.2k drives in "RAID 10" with 8GB of RAM shared with Plex, Transmission, and a small Minecraft server! I don't expect the new system to be blazing fast, but it will be less of a joke than my current system.

On the choice of older hardware, it's all about cost. All the listed parts (drives aside) will come to less than $275 USD.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
That's 12 bays, not 24.

Nothing you've said indicates that L2ARC or SLOG would do anything for you.
Hey, counting is hard! I'll fix that.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Thanks for the feedback on the L2ARC. As for the SLOG, I have been running lab VMs on 4x 7.2k drives in "RAID 10" with 8GB of RAM shared with Plex, Transmission, and a small Minecraft server! I don't expect the new system to be blazing fast, but it will be less of a joke than my current system.

On the choice of older hardware, it's all about cost. All the listed parts (drives aside) will come to less than $275 USD.
There's a difference between running VMs locally on your FreeNAS box and using FreeNAS to expose NFS/iSCSI storage to another hypervisor (usually ESXi) on a separate server. The latter case - where you're presenting block level storage - is where you need a SLOG.
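If you want to feel the difference, here's a tiny stand-alone illustration; it's plain POSIX fsync-per-write versus buffered writes, nothing ZFS-specific, but it's roughly the pattern a hypervisor pushes at your pool over NFS/iSCSI:

```python
# Crude illustration of why sync-heavy traffic is so much slower than
# buffered writes. Generic POSIX behavior, not ZFS-specific, but it's
# roughly the access pattern ESXi generates against NFS/iSCSI storage.
import os
import tempfile
import time

BLOCK = b"\0" * 4096          # 4 KiB writes, a typical VM I/O size
COUNT = 500

def write_test(sync: bool) -> float:
    fd, path = tempfile.mkstemp()
    start = time.time()
    try:
        for _ in range(COUNT):
            os.write(fd, BLOCK)
            if sync:
                os.fsync(fd)  # force each write to stable storage
    finally:
        os.close(fd)
        os.unlink(path)
    return time.time() - start

buffered = write_test(sync=False)
synced = write_test(sync=True)
print(f"buffered: {buffered:.3f}s   fsync-per-write: {synced:.3f}s "
      f"(~{synced / max(buffered, 1e-9):.0f}x slower)")
```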
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
There's a difference between running VMs locally on your FreeNAS box and using FreeNAS to expose NFS/iSCSI storage to another hypervisor (usually ESXi) on a separate server. The latter case - where you're presenting block level storage - is where you need a SLOG.
I understand. For the IO I will need, this will be more than enough. I have worked with vSphere for years and have worked with both large(ish) SANs (DS8800) and hacked-together storage in production environments. I am familiar with the heavy synchronous write needs of vSphere clusters.

I would never bother running VMs on my storage box. I do admittedly run a few jails, as they are light duty (except Plex) and have no need for a full OS and the related overhead. Again, this is for a home lab: low IOPS and not 24/7, but I do need more than the 4 drives give me now. I also need more than one HHHL PCIe slot :confused:
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Or maybe I'll just grab the V7000 that work just decommissioned... Yeah, there's no way I want to power that thing... 192x 300GB SAS disks...
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Ok, so you *are* running a separate hypervisor.

I've worked with large vSphere clusters myself... but I bet you, like I, haven't worked with large vSphere clusters writing to ZFS. ZFS is different, because it's a copy-on-write filesystem. If you want sync writes (you do), and you're writing to spinning rust (you are), you'll want SLOG unless you want dismal performance. Alternately, you might want to consider simply moving to a small SSD array, if your capacity needs aren't large. You will get better performance (bandwidth and IOPS) from a mirrored pair of SSDs than a 12-drive array. That would also let you put your 3TB drives into two 6-disk RAIDZ2 vdevs, giving you ~24TB usable (minus overhead) compared to ~18TB for striped mirrors.
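Rough numbers behind that comparison (marketing TB only, ignoring metadata and padding overhead):

```python
# Usable-space comparison for 12 x 3 TB drives, ignoring metadata/padding.
# Mirrors give the most vdevs (IOPS); RAIDZ2 gives the most space.
DRIVE_TB = 3.0
DRIVES = 12

mirrors_tb = (DRIVES // 2) * DRIVE_TB        # six 2-way mirrors -> 18 TB
raidz2_tb = 2 * (6 - 2) * DRIVE_TB           # two 6-disk RAIDZ2 -> 24 TB

print(f"striped mirrors:    ~{mirrors_tb:.0f} TB usable across 6 vdevs")
print(f"2 x 6-disk RAIDZ2:  ~{raidz2_tb:.0f} TB usable across 2 vdevs")
```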

I'd love to do this myself, but I'm running stuff like Splunk, which needs more storage than I'm willing to buy in SSD!
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Ok, so you *are* running a separate hypervisor.

I've worked with large vSphere clusters myself... but I bet you, like I, haven't worked with large vSphere clusters writing to ZFS. ZFS is different, because it's a copy-on-write filesystem. If you want sync writes (you do), and you're writing to spinning rust (you are), you'll want SLOG unless you want dismal performance. Alternately, you might want to consider simply moving to a small SSD array, if your capacity needs aren't large. You will get better performance (bandwidth and IOPS) from a mirrored pair of SSDs than a 12-drive array. That would also let you put your 3TB drives into two 6-disk RAIDZ2 vdevs, giving you ~24TB usable (minus overhead) compared to ~18TB for striped mirrors.

I'd love to do this myself, but I'm running stuff like Splunk, which needs more storage than I'm willing to buy in SSD!
I have a few SSDs I'll likely soft mirror and use for SLOG, but even if I don't, I'll still have 3x the dismal performance! I can't say I need a ton of space, but I prefer to not think about space at all. As for ZFS being a COW FS, my understanding was that most SANs worth their weight in salt (...that's a lot of salt...) use a COW FS. Granted, it's largely abstracted away by fancy dashboards, etc.

Anyway, once it's all up I'll do some benchmarking and see where we sit.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
We have a Sun/Oracle SAN where I work and it uses ZFS, but it has two disk shelves of SSDs to take the fast transactions. So, even if ZFS might be slightly slow, the SSDs make up for it.

Sent from my SAMSUNG-SGH-I537 using Tapatalk
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
We have a Sun/Oracle SAN where I work and it uses ZFS, but it has two disk shelves of SSDs to take the fast transactions. So, even if ZFS might be slightly slow, the SSDs make up for it.

Sent from my SAMSUNG-SGH-I537 using Tapatalk
Yeah, the DS8870 that we still have in production has something like 160 400GB SSDs for caching. The IBM V9000 flash array we have is just bonkers, something like 8 trays at 5.7TB each!

I think two 400GB SSDs in RAID1 should be a good starting point for me.
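Capacity-wise that's overkill for a SLOG, since it only has to absorb a few transaction groups' worth of in-flight sync writes; it's the latency, endurance, and power-loss protection that matter. Quick sanity check, assuming the 8Gb FC link is the write bottleneck and a roughly 5-second txg interval (both assumptions on my part):

```python
# Sanity check on SLOG sizing: the SLOG only has to hold the sync writes
# that land between transaction group commits. Assumptions (mine): the
# 8Gb FC link is the write bottleneck and txgs flush roughly every 5 s.
LINK_GBIT = 8                   # QLE2564 port speed, gigabits per second
TXG_SECONDS = 5                 # assumed txg flush interval
TXGS_IN_FLIGHT = 3              # allow a few outstanding txgs to be safe

link_bytes_per_sec = LINK_GBIT / 8 * 1e9                  # ~1 GB/s best case
slog_gb = link_bytes_per_sec * TXG_SECONDS * TXGS_IN_FLIGHT / 1e9

print(f"worst-case SLOG footprint: ~{slog_gb:.0f} GB")    # ~15 GB
```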
 