BUILD New Plan for an SSD Build

Status
Not open for further replies.

hlnoiku

Dabbler
Joined
Jul 28, 2014
Messages
14
Supermicro X9SRH-7TF (2x 10GbE)
4x KVR16LL11Q4/32 for 128GB RAM
Intel Xeon E5-1620 v2 quad-core @ 3.7GHz
2x LSI 9207-8i + 1x onboard LSI 9207-8i
SuperChassis CSE-417E26-R1400UB (72 drive bays)

The plan is to build the pool from RAID-Z2 vdevs of 10x Samsung EVO 1TB SSDs each. Expansion would be 10x 1TB SSDs at a time, up to 70TB of raw space for 51.1TB of usable space.

Cost is coming out to ~$20k for the initial load of 30 drives, giving 21.9TB of usable space (30TB raw).
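
Back-of-envelope for those numbers, in case anyone wants to check the math. This is only a rough sketch: it assumes 10-wide RAID-Z2 vdevs and converts vendor TB to TiB, and it ignores ZFS metadata/slop overhead, so it lands slightly under the figures quoted above.

# rough capacity math for 10-wide RAID-Z2 vdevs of 1TB SSDs (sketch, not exact ZFS accounting)
DRIVE_TB = 1.0                    # vendor terabytes per drive
VDEV_WIDTH = 10                   # drives per RAID-Z2 vdev
PARITY = 2                        # RAID-Z2 gives up two drives per vdev to parity
TB_PER_TIB = 1024**4 / 1000**4    # ~1.0995

def capacity(num_drives):
    vdevs = num_drives // VDEV_WIDTH
    raw_tb = vdevs * VDEV_WIDTH * DRIVE_TB
    data_tib = vdevs * (VDEV_WIDTH - PARITY) * DRIVE_TB / TB_PER_TIB
    return raw_tb, data_tib

print(capacity(30))   # (30.0, ~21.8) -- the initial 3-vdev load
print(capacity(70))   # (70.0, ~50.9) -- the fully expanded 7-vdev pool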

The aim is to be able to keep the 10GbE pipes filled when there are about 60 nodes + 20 workstations pulling data from the array.

Thoughts?
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Holy crap.

There are a lot of variables in there, sir, before anyone can estimate an answer to your question. As written, there is no answer to it. We'd need a lot more information on what you're doing: will the typical user be mostly reading, mostly writing, or both? What kinds of files? How will the files be shared out? Why? Etc., etc.

I'm not even going to attempt to answer in any case. That's a lot of money to be spending per TB, and there are certainly a lot of variables in this whole equation: hardware, software, and user skill.

You could put this together and not even remotely get the performance you expect. If you're serious about this, you sound like a candidate for a professional consult from iXsystems.
 

hlnoiku

Dabbler
Joined
Jul 28, 2014
Messages
14
I'll reach out to iXsystems to see what they can do.

There'll be about 20-25 workstations all doing initial large reads (up to a few hundred MB) to pull a project down. Writes will be much larger after the project files have been changed/worked on/updated, etc., and pushed back to the array.

The 40-60 nodes will be pulling hundreds of smaller files from the array (ranging from a few hundred KB to 80MB).

Sharing will be via CIFS.
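
For a rough sense of aggregate read demand against the 2x 10GbE ceiling, here's the kind of back-of-envelope I've been sketching. The per-client rates below are placeholder assumptions for illustration, not measurements:

# rough aggregate-read sketch -- per-client rates are assumptions, not measured numbers
WORKSTATIONS = 25        # large sequential project pulls
WS_MB_S = 200            # assumed per-workstation read rate while pulling a project (MB/s)
NODES = 60               # nodes pulling many small files
NODE_MB_S = 25           # assumed effective per-node rate (small files are IOPS-bound, not bandwidth-bound)

demand_mb_s = WORKSTATIONS * WS_MB_S + NODES * NODE_MB_S
link_mb_s = 2 * 10_000 / 8          # 2x 10GbE, ignoring protocol overhead

print(f"worst-case demand: ~{demand_mb_s} MB/s")    # ~6500 MB/s if everything hits at once
print(f"network ceiling:   ~{link_mb_s:.0f} MB/s")  # ~2500 MB/s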

"That's a lot of money to be spending, per TB"

You should see some of the quotes I've gotten from vendors for SSD storage ($70k-150k). This is considerably lower cost, hence the interest in seeing whether it's feasible.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It's probably doable, but you'd be doing yourself a favor to talk to iXsystems directly to see what they think. It'll be more expensive but then you get support.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I won't bore you with extra details, but I'd *seriously* consider one of those commercial storage solutions over DIY for something like this, price tags and all. SSDs have unique problems with regard to ZFS at large scales. While you might find iX's price (as well as everyone else's) beyond what you'd like to spend, it's nice to have someone else do the work when stuff breaks. And believe me, it will break. So you need to make a choice: do it yourself and deal with the consequences, or let someone else sweat it out when your data suddenly becomes inaccessible.

For something like what you are doing, you should just go to iX and call it a day. You'll be glad you did.
 

hlnoiku

Dabbler
Joined
Jul 28, 2014
Messages
14
Thanks for the reply, good sir. I'll take the details if you're willing to write them down!

Have already reached out to iX, just waiting on a reply :)
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'm sorry, but the details really depend on a ton of facts about your usage and such. I can't distill them down to anything shorter than 10+ pages of text. That's why I recommended going to iX or something like that. Let them deal with those problems, and if things go bad you can hold their feet to the fire since you paid for the support. ;)
 

DannyKlenz

Dabbler
Joined
Aug 12, 2014
Messages
31
I hate to mention this here, but it sounds like this might be better handled with tiered storage in Windows Server 2012 R2 Storage Spaces. Are you certain all your storage needs to be SSD and not a mix of SSD and HDD? If you don't know much about Storage Spaces, I recommend you look into it. It may save you several thousand dollars in hardware in this instance.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Yeah, and when some articles are calling Storage Spaces "a filesystem that is still in development," I'll have to pass on that. There are other caveats to Storage Spaces that ZFS gets around, but I won't go into those, as I'm probably not the best one to argue them and this isn't the "Storage Spaces" forum. ;)

I have three friends who have experimented with Storage Spaces. They have all since abandoned it after losing three days of production to faults related to Spaces. So I personally wouldn't be looking at it, but that's my personal choice.
 

DannyKlenz

Dabbler
Joined
Aug 12, 2014
Messages
31
I'm no expert either; I just know what was said in several keynotes. It seemed a better fit from that standpoint.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The other thing to note is that it's actually quite possible a conventional hard-drive-based solution could address the needs, though of course that's highly dependent on the target usage patterns...
 

hlnoiku

Dabbler
Joined
Jul 28, 2014
Messages
14
I avoided the Storage Spaces route as it seems it's still in its infancy.

Also, from what I've read so far, you cannot do tiering + parity :(
 