Building a FreeNAS box looking for input

Status
Not open for further replies.

kossuth

Cadet
Joined
Nov 11, 2016
Messages
5
Hey guys. I'm a man of many hats, unfortunately, and I'm looking to potentially recommend (after a lot of testing) a storage solution at my workplace to my supervisor. I figured I would run it by the community first to get their take before I round up the parts to build a test box.

The short version: I'm helping upgrade a smallish network that, when complete, will consist of 10ish VMs running on two ESXi hosts. Today everything is older dedicated hardware, and none of it is really worth saving for NAS use (already looked into that). At the moment all the VMs are being developed on the hosts' onboard drives, but the rest of the team and I know that can't be a production solution. Long story short, ESXi doesn't recognize the drives' RAID configuration (PERC S330), so there is no redundancy in the event of a drive failure. Bad news in production, obviously.

Personally I would actually like to get a solution through iXsystems, but at the current moment getting anything purchased with the company is difficult, and to be brutally honest, if some limited funding comes up there are more important things to do with that money. Fortunately, we have spare parts and pieces around to build with.

Hardware I have been eyeballing that's sitting idle:
1. A newer Dell R630 with an H730 PERC, 8x 2.5" hot-swap drive bays, and dual 16 GB SD cards. The onboard NIC daughter card has 2x 10G SFP+ and 2x 1000Base-T ports. Not sure of the memory, but I'm sure it's plenty, as in 128GB+. Basically a healthy ESXi platform left over from another project.
2. 10+ 1 TB Velociraptor drives (you know, the 2.5" drives in the 3.5" caddy). SATA.
3. A number of unused 256 GB SATA SSDs.

Possible game plan: install FreeNAS onto the SD cards (mirrored), pull six of the 2.5" Velociraptor drives from their caddies, and install them in the server. Ensure the H730 is in HBA mode per the thread I was reading: https://forums.freenas.org/index.ph...dell-servers-with-perc-h730-controller.46631/ . Create the data pool as RAIDZ2 with 2x 256GB SSDs as L2ARC (rough sketch below). That would give me roughly 3.5 TB usable, which should be plenty for at least a year if not more. The pool would be replicated off to a backup server on a somewhat regular basis. The NIC daughter card might be a challenge: it's based on the Intel X710 chipset, and I didn't see it on the FreeBSD supported hardware list, but I did see that Intel provides drivers for the chipset themselves. I've had to manually update drivers on FreeNAS for other projects, so if it doesn't work out of the box I'm thinking I'll take a swing at it and see what happens. I do have some quad-port 1 gig NICs available as a contingency plan.
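The layout I'm picturing is the equivalent of the following (I'd actually build it through the FreeNAS GUI; the pool name and device names here are just placeholders):

  # 6x 1 TB Velociraptor in a single RAIDZ2 vdev, 2x 256 GB SSD as L2ARC
  zpool create tank raidz2 da0 da1 da2 da3 da4 da5
  zpool add tank cache ada0 ada1
  zpool status tank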

Thoughts? Concerns? I'm not necessarily thrilled about using SATA drives, but it's what I have lying around that can get the data pool to the size it needs to be. Also, I was thinking that one L2ARC device might be OK. If I only used one, that would free a bay and let me add another 1 TB of storage to the data pool (quick math below), but I'm not 100% sure whether that's wise. I don't see us being terribly tight on 3.5 TB, but I'm sure we would figure out something to do with an additional TB, particularly if the performance of the NAS wouldn't suffer with only one L2ARC device. Thanks guys.
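The trade-off between the two layouts, as I understand it (rough numbers, scaling the same overhead ratio as my 6-drive estimate above):

  # 8 bays total:
  #   6x 1 TB RAIDZ2 + 2x SSD L2ARC -> (6 - 2) x 1 TB = 4 TB raw, ~3.5 TB usable
  #   7x 1 TB RAIDZ2 + 1x SSD L2ARC -> (7 - 2) x 1 TB = 5 TB raw, ~4.4 TB usable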
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Are you intending to use this system as an NFS or iSCSI datastore for your ESXi hypervisors? If so, I'd suggest doing some searching and reading. There's plenty to learn.

You'll need a SLOG device. This is a fast SSD (Intel S3700 being a good choice) with high endurance (10 full drive writes per day in the case of the S3700) and power-loss protection.
L2ARC only makes sense when you have lots of memory. At 128GB, you're in the ballpark.
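For reference, adding those two devices to an existing pool boils down to something like the following (in practice you'd do it through the FreeNAS volume manager; pool and device names here are placeholders):

  # SLOG on a power-loss-protected SSD, plus one SSD as L2ARC
  zpool add tank log ada2
  zpool add tank cache ada3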

You will need to use striped mirrors (not RAIDZ), and the pool should only be filled to 50% lest performance go to crap. So, if you're looking at 6 drives, you really only have about 1.476 TiB usable once you account for overhead (sketch below). You'll also be fairly limited on performance... about 300 IOPS total.
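With your 6 drives, that layout would look something like this (placeholder device names again):

  # three 2-way mirror vdevs instead of one 6-disk RAIDZ2 vdev
  # raw usable space is 3x 1 TB, and you keep it under ~50% full for block storage
  zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5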
 

kossuth

Cadet
Joined
Nov 11, 2016
Messages
5
Planning to use iSCSI due to hardware acceleration, not to mention round-robin load balancing, etc. Can't do any of that with NFS.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Ok, then all of the above factors apply. You're going to need a lot more drives. That doesn't necessarily mean you need Velociraptors or other high-speed drives. My 12-drive pool supports more VMs than what you're trying to run (including busy VMs like Splunk, a Graylog cluster, a Zimbra mail server, etc.) and does just fine. That, of course, also implies a different chassis or an external JBOD drive chassis, since the R630 is a 1U box.

Search around the forums for terms like "vmware" and "iscsi" and you'll find plenty of threads to go through.
 
Joined
Feb 2, 2016
Messages
574
Back to basics, @kossuth:

1. How many IOPS do you need? How much disk throughput do you need?

2. How many IOPS do you have now? How much disk throughput do you have now?

3. How much disk space do you need?

Knowing what hardware you have at your disposal is nice but, without knowing your load, there's really no way to tell if it will work. Give us details on your load and I'll give you a spitball answer as to whether your hardware will work.

We have 16 VMs running on far less capable hardware. We don't have SLOG or L2ARC but we do have our VMs on SSDs. If we weren't using SSDs, we might benefit from SLOG.

Cheers,
Matt
 

kossuth

Cadet
Joined
Nov 11, 2016
Messages
5
As far as IOPS go, I'll have to look at the various systems currently in production to see if I can determine some kind of baseline. Not sure how accurate it would be given they're on dated hardware with dated OSes and such, but that might give me a ballpark to start with. I anticipate the IOPS are pretty low given the types of servers: domain controllers, a WSUS server, a smaller database server that only a handful of people use, and other such boxes.
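Beyond that, once the test box is up I figure I can watch the disks from the FreeNAS side while a couple of the VMs run from it, along the lines of:

  # live per-disk I/O stats on the FreeNAS/FreeBSD side
  iostat -x 10
  gstat -p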

Also, if you have a resource as to why I would want to use striped mirrors vs. RAIDZ2, please share it or point me in the right direction. I'm a network guy by trade but I pick things up pretty quickly, and I'm wondering why this would or wouldn't matter.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Striped mirrors vs. RAIDZ2 is simple... IOPS. You get the IOPS of the slowest drive in each vdev, summed across vdevs. So, assuming 6 Velociraptor drives in RAIDZ2, arranged as one vdev, you get one drive's worth of IOPS... about 100 for that particular model. Configured as striped mirrors, you would have 3 vdevs with 2 drives apiece, for a total of about 300 IOPS. 100 IOPS is sufficient for a file store, but it will start feeling very slow running multiple VMs. Consider trying to run multiple systems on one drive at the same time... that's effectively what you're doing.

As I've suggested previously, there are MANY threads on these topics. Do some searching and some reading, using terms like vmware, iscsi, and nfs... you'll pick it up quickly.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
And SSD IOPS are measured in the tens of thousands to hundreds of thousands.

HD IOPS are in the hundreds.
 