max number of disks?

Status
Not open for further replies.

justinm001

Dabbler
Joined
Oct 25, 2016
Messages
11
Is there a max number of disks allowed in FreeNAS? I'm currently setting up 93 146GB 15K SAS drives and also have 6 2TB SAS and 6 2TB SATA (105 total), but the system is only showing 98 disks (only 5 of the 2TB drives).

Also, what do you think the best setup would be? The 146GB drives are for web hosting and VPS, and the 2TBs are for backups of other servers. The drives are mounted in a pair of 60-drive InfiniBand 4U enclosures, and I'm thinking of keeping each enclosure separate just in case of a hardware issue. So 56 drives in RAIDZ3 as 6x9 with 2 spares (4 blank slots)? The other 36 in RAIDZ3 as 4x9 with 1 spare?

For the 2TBs, is it OK to mix SAS and SATA with FreeNAS? I know you can't mix them on many hardware RAID controllers.
 
Joined
Feb 2, 2016
Messages
574
If there is a disk limit for FreeNAS, it'll be a lot higher than a hundred disks.

Without knowing your hardware specifics, I can't guess as to why there are disks missing.

Going from 93 146G 15K SAS drives to a couple dozen terabyte SSDs will likely give you better performance while materially cutting your hosting electrical and rack space bill. Substituting SSDs for 15Ks could be a really quick payback. Just food for thought.

Do you need the redundancy of Z3? For 146G drives, Z1 would probably be fine, especially if you have a spare or two in the cage, and I'd have no trouble sleeping at night with Z2.

If you're looking for maximum performance, reliability - and ease of finding disk sets - why not mirror between the disk packs? Slot 1, Unit 1 mirrors to Slot 1, Unit 2 and so on then stripe the mirrors. Wicked fast, huge IOPS and easily expandable.
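
To make that concrete, here's a rough sketch (Python, just to spit out the command line) of pairing slot N of enclosure 1 with slot N of enclosure 2. The daN device names are placeholders; the real slot-to-device mapping depends on your HBA and enclosures.

```python
# Rough sketch only: pair slot N of enclosure 1 with slot N of enclosure 2
# and print the matching zpool create line. Device names are placeholders.

SLOTS = 49  # mirrored pairs across the two enclosures (example value)

enclosure1 = [f"da{i}" for i in range(SLOTS)]             # da0  .. da48
enclosure2 = [f"da{i}" for i in range(SLOTS, 2 * SLOTS)]  # da49 .. da97

vdevs = [f"mirror {a} {b}" for a, b in zip(enclosure1, enclosure2)]
print("zpool create tank " + " ".join(vdevs))
```

Lose a whole enclosure and every mirror is degraded, but the pool stays up.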

You can mix SAS and SATA. FreeNAS speaks to each disk independently. Performance may suffer if you mix two drastically different transports (USB and SATA, for example) in the same VDEV, but mixing SAS and SATA is fine for most use cases.

Cheers,
Matt
 

justinm001

Dabbler
Joined
Oct 25, 2016
Messages
11
Our goal is to replace these with larger SSDs once it becomes cost effective and our needs increase. The 146GB drives are basically free ($5-10 used), and buying used is the reason for Z3. We're trying to be as cost effective as possible, and electrical and rack space aren't an issue yet.

We're planning on upgrading/replacing the disk packs in groups once our needs increase, and I'm not sure mirroring between the two would be best. We're limited by space, and we have multiple spare disk packs, so we can easily add another 60 drives, set up a new volume, move the data over, and remove the old disk pack. Also, with FreeNAS 10 hopefully coming out, I'm hopeful they'll have some active/passive clustering so we can have full redundancy for our systems.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
You're looking at purchase price, not TCO. 146GB drives are FAR from free.

First, a lot of those used drives on eBay are worn out... look for a NetApp label on many of them. They'll have 30K+ hours on them. I tried doing exactly this (sourced from a friend, not off eBay) and I spent a lot of time resilvering. Keep in mind that performance will suck mightily when the drive dies, and still suck quite a bit during resilvering. With 98 drives running, your array will spend more time degraded than it will happy.

Second, power consumption. Referring to the product manual of a fairly typical Seagate Cheetah 15K drive:
http://www.seagate.com/staticfiles/support/disc/manuals/enterprise/cheetah/15K.5/SAS/100384784c.pdf (page 33), we find an idle power of 10.33 watts and a full-load power of 12 watts. Let's figure 11 watts for an average, or 37.53 BTU/hr. With 100 drives, you're talking 3,753 BTU/hr generated by the drives themselves, plus, assuming a 90% efficient power supply, another 375 BTU/hr. In round numbers, about 4,100 BTU/hr, or a bit over 1/3 ton of AC. You're also consuming on the order of 1.2 kW of power, or about 10.5 MWh/year... at a fairly average $0.09/kWh, that's $946 in electricity just to power the thing... easily double that (some say triple) to account for the AC load. And that doesn't account for the servers themselves (CPUs, etc.), chassis, fans, etc.
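
If you want to play with the assumptions yourself, here's a quick back-of-the-envelope version of that math (drive power only, rough estimates):

```python
# Rough check of the numbers above (drive power only; CPUs, chassis and
# fans excluded). Figures are estimates, not measurements.

DRIVES       = 100
WATTS_DRIVE  = 11.0      # average of ~10.3 W idle / ~12 W loaded
PSU_EFF      = 0.90
BTU_PER_WATT = 3.412
USD_PER_KWH  = 0.09

drive_watts = DRIVES * WATTS_DRIVE            # 1,100 W at the drives
wall_watts  = drive_watts / PSU_EFF           # ~1,220 W from the wall
btu_hr      = wall_watts * BTU_PER_WATT       # ~4,200 BTU/hr of heat
kwh_year    = wall_watts / 1000 * 24 * 365    # ~10,700 kWh/yr
cost_year   = kwh_year * USD_PER_KWH          # ~$960/yr, before cooling

print(f"{wall_watts:.0f} W, {btu_hr:.0f} BTU/hr, "
      f"{kwh_year:,.0f} kWh/yr, ${cost_year:,.0f}/yr")
```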

You're also going to consume, what, 16U for all this?

So, are power and aircon free? Rack space free? What is the cost when the array rebuilds over and over again and encounters an error? What will the impact be of poor performance while a drive is dying or the array is being rebuilt?

You're talking about RAID-Z3. Assuming 9 11-drive vdevs, you wind up with a whopping 7.5TB of usable space, with the total IOPS of 9 drives (9*150=1,350). RAID-Z isn't recommended for VM filestores - I'm not sure how you're running your VPS stuff, but this may be a problem for you.

Look at a system similar to the one in my signature. You can install 36 drives into hot-swap bays, plus 4 internally. So, you install two small SSDs as a mirrored boot pool, two large enterprise-grade SSDs as SLOG (if you do a VM store) and L2ARC. Figure $1K for the box, $500 for the SSDs. Buy 36 HGST 3TB NAS drives at $125/ea. Set it up as an 18 vdev pool of 2-drive mirrors. You spend $6K, you get the whole thing in one 4U box, same IOPS, and 38TB of usable storage (more than 5X what you would get with your 15K drive arrangement). You could also buy fewer drives (start with 12 drives, let's say, at a cost of $3K) and add pairs in as your needs grow.
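
Rough arithmetic behind the two layouts, assuming ~150 IOPS per 15K RAIDZ3 vdev, ~100 IOPS per 7200 RPM mirror, and keeping ~20% of the pool free (real usable space also loses a bit to ZFS overhead):

```python
# Rough side-by-side of the two layouts above. Ballpark assumptions only.

def pool(vdevs, data_drives_per_vdev, drive_bytes, iops_per_vdev):
    raw_tib = vdevs * data_drives_per_vdev * drive_bytes / 2**40
    return vdevs * iops_per_vdev, raw_tib * 0.80   # (IOPS, practical TiB)

# 9 x 11-drive RAIDZ3 of 146GB 15K drives -> 8 data drives per vdev
z3_iops, z3_tib = pool(9, 8, 146e9, 150)

# 18 x 2-way mirrors of 3TB drives -> one drive's worth of data per vdev
m_iops, m_tib = pool(18, 1, 3e12, 100)

print(f"RAIDZ3 : ~{z3_iops} IOPS, ~{z3_tib:.1f} TiB usable")   # ~1350, ~7.6
print(f"Mirrors: ~{m_iops} IOPS, ~{m_tib:.1f} TiB usable")     # ~1800, ~39
```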

In short, in case I've not made myself clear, I think running 100 146GB drives is absolutely insane :)

HA configurations are one of the key differentiators of TrueNAS, the paid version of FreeNAS from iXSystems. It's unlikely they will add that to FreeNAS 10.
 
Joined
Feb 2, 2016
Messages
574
running 100 146GB drives is absolutely insane

Thanks for doing the math, @tvsjr. That's pretty much what I was thinking, too.

In my data center, going from 8U to 2U and from two 20-amp circuits to one 20-amp circuit would cover the cost of going all-SSD in the first year. Your solution, too, would likely be an overall savings.

Cheers,
Matt
 

justinm001

Dabbler
Joined
Oct 25, 2016
Messages
11
We get 30A 240V service to our rack and can hold 60 3.5" drives in 4U (600 drives in a 42U rack with a 2U controller), so we don't have a power issue, and cooling isn't a concern since the datacenter handles that as well. We also have plenty of rack space available that isn't in use. So for us, electricity, cooling, and space are basically free for now, until we start expanding our needs.

These 15K 146GB drives will have better performance and IOPS than a 3TB NAS drive (7200 RPM), about 5,000 vs 1,500 IOPS.

I know all the major SAN players, and there's a huge restructuring going on with all those companies, so I'm looking for a system to hold out a couple of years until the new SANs are released.
 

justinm001

Dabbler
Joined
Oct 25, 2016
Messages
11
My goal is 7TB usable with decent performance. I can get this with 146GB drives for about $500. To get something comparable in flash, I'd be looking at 14x 1TB SSDs mirrored, which at $500 a drive is $7k (14 times the cost).

If I wanted to expand to 70TB usable, I'd be looking at around $5,000 vs $70,000. And since I'm only planning on using these drives for 2 years and have full backups of the data offsite anyway, the TCO becomes less important. The goal is to have the fastest and largest storage with the cheapest TCO for 2 years, then scrap the arrays and purchase 1PB+ of all-enterprise flash with tiering and full active/active clustered HA.
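
Rough math behind that (drive costs only, my own price estimates, no chassis/HBAs/power):

```python
# Drive-cost-only sketch of the comparison above, using the poster's own
# price estimates (~$5 per used 146GB drive, ~$500 per 1TB SSD). Excludes
# chassis, HBAs, power and replacements for failed used drives.

USED_146GB = 5        # $ per used 15K drive (low end of the $5-10 range)
SSD_1TB    = 500      # $ per 1TB SSD

hdd_drives = 98       # ~49 mirrored pairs of 146GB -> ~7TB usable
ssd_drives = 14       # 7 mirrored pairs of 1TB     -> ~7TB usable

print(f"15K HDDs: ~${hdd_drives * USED_146GB:,}")   # ~$490
print(f"SSDs:     ~${ssd_drives * SSD_1TB:,}")      # ~$7,000
```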
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
I don't think you understand exactly how IOPS calculations work in ZFS land. You get the number of IOPS of one member drive for each VDEV. 15K drives are about 150 IOPS, so 9 11-drive RAIDZ3 vdevs would be 9*150=1,350 IOPS.

To get 5,000 IOPS, you'd need a minimum of 34 vdevs. This would only be attainable with striped mirrors. And, from the throughput perspective, you're quickly reaching a point where you'll have to consider every step in the chain. You'll easily saturate your SAS controller, necessitating multiple controllers, etc. Which means you'll be spending more money on all these parts.
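
The rule of thumb as a quick sketch (ballpark figures, not benchmarks):

```python
# Rule of thumb being applied: each vdev contributes roughly one member
# drive's worth of random IOPS. Ballpark figures only.
import math

IOPS_PER_15K_DRIVE = 150
TARGET_IOPS        = 5000

vdevs_needed = math.ceil(TARGET_IOPS / IOPS_PER_15K_DRIVE)  # 34 vdevs
drives_if_mirrored = vdevs_needed * 2                       # 68 drives

print(f"{vdevs_needed} vdevs, i.e. {drives_if_mirrored} drives as 2-way mirrors")
```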

Your biggest issue is the fact the array will be nearly constantly degraded, trying to rebuild. This will absolutely tank your performance, and leave you open to the possibility that something just doesn't rebuild right. What's the cost to your company when the array craps itself for a few hours? What sort of IOPS performance will you get when the array is crunching away trying to repair multiple drive failures simultaneously, or, even worse, when one drive starts spitting out retries and the whole pool becomes catatonic for 10, 20, 30 seconds?

I'm not advocating all flash... the price point just isn't there yet for many people. But you're effectively trying to replace a dumptruck with a whole bunch of worn out sports cars... can it be done, sure... is it smart, not really.
 

justinm001

Dabbler
Joined
Oct 25, 2016
Messages
11
The system runs with a pair of 20Gb/s (4x DDR IB) InfiniBand SAS controllers, and the disk trays are designed to handle it as well. The SAN connects with 4Gb FC cards to a 4Gb FC fabric switch. I want the 4Gb FC to be the bottleneck, as it's easily replaceable.

So the issue you see isn't with the drive size, but the reliability of used drives becoming degraded and rebuild performance?
 
Joined
Feb 2, 2016
Messages
574
what do you think the best setup would be?

Sounds like you're committed to the 15k disks, @justinm001. Fair enough.

For maximum throughput and IOPS with a reasonable amount of reliability, mirror between the disk packs. Slot 1, Unit 1 mirrors to Slot 1, Unit 2 and so on then stripe the mirrors. Wicked fast, huge IOPS and easily expandable. You can lose an entire disk pack for days without losing data or losing your ability to provide services to your customers.

When a drive burns out - and they will - rebuilding a mirror is a lot quicker and less system-intensive than rebuilding a Z3 VDEV.

As 300G or 600G SAS drives drop in price and you start replacing 146G drives, you gain the extra capacity by just replacing two drives instead of nine.

Given that 15K SAS drives are about 200 IOPS, this configuration gives you 9,800 IOPS and 7T of usable space. (You could get 6,300,000 IOPS and 7T using just 14 SSDs for under $5,000. But that's none of my business.)

Using your proposed Z3 configuration, you'd only get around 2,000 IOPS, and losing a disk pack could result in data loss and would absolutely result in downtime.
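
Quick sketch of where those figures come from, assuming ~200 IOPS per 15K drive and 49 mirrored pairs built from your ~98 drives:

```python
# Ballpark arithmetic behind the mirror-vs-Z3 comparison above.

MIRRORS, Z3_VDEVS, IOPS_15K = 49, 10, 200

mirror_iops = MIRRORS * IOPS_15K     # 9,800 IOPS
mirror_tb   = MIRRORS * 0.146        # ~7.2 TB usable (one drive per pair)
z3_iops     = Z3_VDEVS * IOPS_15K    # ~2,000 IOPS for the 6x9 + 4x9 plan

print(mirror_iops, round(mirror_tb, 1), z3_iops)
```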

Cheers,
Matt
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Yep. Technically, what you have will work just fine. You'll have to be careful to avoid certain missteps (saturation of SAS lanes, etc.) if you want to maximize performance. But it will work.

But, is it a good idea? How often will your pool performance be degraded because you're resilvering, or a drive is throwing retries?

As a test, get a stack of those "cheap" 146GB drives and fire them up. See how many hours they've logged. Run a destructive badblocks test followed by a long SMART test. See how many fail. I think you'll be surprised. I've been down exactly this path, trying to use "cheap" used/spare drives... and ended up buying 12 new drives and being done with it.
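
Something like this (rough sketch; badblocks -w is destructive, the daN names are placeholders, run as root on disks that hold nothing you care about):

```python
# Hypothetical burn-in helper for the test described above.
# WARNING: badblocks -w wipes the drive. Device names are placeholders.
import subprocess

DISKS = [f"/dev/da{i}" for i in range(10)]   # e.g. 10 test drives

for disk in DISKS:
    # destructive write/read pattern test, verbose with progress
    subprocess.run(["badblocks", "-wsv", disk], check=True)
    # kick off a long SMART self-test; results show up in smartctl -a later
    subprocess.run(["smartctl", "-t", "long", disk], check=True)
    # dump SMART data (power-on hours, defects, errors) for manual review
    subprocess.run(["smartctl", "-a", disk], check=False)
```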
 

justinm001

Dabbler
Joined
Oct 25, 2016
Messages
11
If I mirror the drives and then stripe the mirrors, I'll only have one drive of redundancy in each mirror, correct? So if both drives in one mirror fail, the whole array fails, correct? I understand I'll have redundancy against many disk failures across separate mirrors. Also, does FreeNAS automatically rebuild from spares?
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Correct, if you lose both drives in a 2-drive mirror vdev, your pool is done. You could do 3-way mirrors if you want to be paranoid. But n-way striped mirrors will always spank the crap out of RAIDZn from a performance and IOPS standpoint. If you're expecting north of 1,000 IOPS (you mentioned 5K above... have you actually benchmarked this?), RAIDZn should only enter your mind for backup storage, not for production workloads.

I still say, grab 10 cheap drives and put them under serious workload for a week or two. Badblocks and SMART test, then mount them up and hammer them with a random workload with IOMeter or the like. See what happens.
 

justinm001

Dabbler
Joined
Oct 25, 2016
Messages
11
That's my plan. The whole project is still in testing, so we have all the equipment here onsite at our office, and since we still have time before the developers finish their front ends, we have basically full access for testing everything. I think we'll start with one array of 54 drives mirrored plus 2 spares and put them to the test, both with IOMeter and an actual test production environment, so we can monitor real performance along with the processing power.
 