Updated: found hardware, is it compatible with FreeNAS?

iliak

Contributor
Joined
Dec 18, 2018
Messages
148
UPDATE: I found hardware and need your help to make sure it will support FreeNAS.
See post #15.



We are starting a new project at work that will require a high-capacity server with decent speed, and I am looking for a price estimate for a read-intensive server.

I was thinking of RAID60 (or something similar) on pools of 10 or 15 HDDs; I think that will work well.
The storage will grow in chunks of 10/15 HDDs at a time, up to a total of 45/60 HDDs (depending on the chassis).
There are some chassis with up to 60 HDDs, so a total of 60 x 10TB (600TB raw, approx. 400TB after redundancy), which gives lots of spare capacity.

I don't have experience with servers/capacities like this, and I don't know what prices to take into consideration.
This is what I assume:
The server will cost approx. $10k (a single 4U chassis with 60 x 3.5" bays is preferred).
I can get a good HDD at approx. $420 (I just ordered a few HUH721010ALE600 drives, and SAS HDDs from the same generation are similarly priced).

So: $10k plus approx. $400 per HDD.

The first pool plus the server will cost about $16k, and then each new pool approx. $4k more.
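
Roughly, the math I'm assuming looks like this (prices are my own estimates, not quotes; the $16k first step assumes a 15-drive pool, while the ~$4k expansion assumes 10-drive pools):

Code:
# Back-of-the-envelope cost model for the numbers above
CHASSIS = 10_000   # single 4U chassis, 60 x 3.5" bays (estimate)
DRIVE = 400        # per 10TB HDD, e.g. HUH721010ALE600 (estimate)

first_step = CHASSIS + 15 * DRIVE   # $16,000 (server + 15-drive pool)
per_10_drive_pool = 10 * DRIVE      # $4,000 per expansion
raw_tb = 60 * 10                    # 600 TB raw, fully populated
print(first_step, per_10_drive_pool, raw_tb)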

Is my estimate accurate, or is it over/underpriced?
 
Last edited:
Joined
Feb 2, 2016
Messages
574
1. Are you buying new or are happy with picking up discount hardware on eBay or other used/refurbished sources?

2. Please put a metric to 'decent speed' or explain your use case.

3. Make sure you have ample UPS capacity to keep your server online: that many drives are very thirsty.

Your case/motherboard/HBA(s)/power/network are going to be fairly inexpensive compared to the rest of the build. Drives and RAM are going to be your primary cost driver.

If you're adding 10TB drives, 15 at a time as RAIDZ2 (130TB usable) and need fast reads, you're going to need lots of RAM. To start, at least 128GB and probably 256GB? Fully populated, with 60 drives and 520TB of usable storage? Not less than 512GB and probably more.
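
As a sanity check on those usable-capacity figures (assuming RAIDZ2's two parity drives per vdev and 10TB drives; real-world usable space will be a bit lower after ZFS overhead and keeping the pool under ~80% full):

Code:
def raidz2_usable_tb(vdevs: int, drives_per_vdev: int, drive_tb: int = 10) -> int:
    # each RAIDZ2 vdev loses two drives' capacity to parity
    return vdevs * (drives_per_vdev - 2) * drive_tb

print(raidz2_usable_tb(1, 15))  # 130 TB: one 15-wide vdev
print(raidz2_usable_tb(4, 15))  # 520 TB: 60 drives fully populated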

Find a motherboard that will meet your RAM needs. At 60 drives, your case choices are fairly limited. Depending on your motherboard and case, you'll know how many HBAs you'll need.

Cheers,
Matt
 

iliak

Contributor
Joined
Dec 18, 2018
Messages
148
1. Are you buying new or are happy with picking up discount hardware on eBay or other used/refurbished sources?
HDDs must be new (the chassis can be refurbished, depending on the price we can get for new).


2. Please put a metric to 'decent speed' or explain your use case.
This will be used as a data feed for simulations (each simulation reads a continuous chunk of data of a few GB).
2GB/s from each 10-HDD pool should give approx. 10-12GB/s total read.
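
The back-of-the-envelope math I'm using (assuming roughly 200MB/s sustained sequential throughput per drive, a spec-sheet figure rather than a measurement):

Code:
PER_DISK_MBS = 200              # assumed sequential MB/s per 7200rpm HDD

per_pool = 10 * PER_DISK_MBS    # ~2,000 MB/s per 10-disk pool
full_build = 6 * per_pool       # ~12,000 MB/s with six pools
print(per_pool, full_build)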

3. Make sure you have ample UPS capacity to keep your server online: that many drives are very thirsty.
We have a 3kVA UPS connected to two servers and 5 switches; we might replace it with a 10kVA unit.

Your case/motherboard/HBA(s)/power/network are going to be fairly inexpensive compared to the rest of the build. Drives and RAM are going to be your primary cost driver.

If you're adding 10TB drives, 15 at a time as RAIDZ2 (130 TB useable) and need fast reads, you're going to need lots of RAM. To start, at least 128GB and probably 256GB? Fully-populated, with 60 drives and 520 TB of useable storage? Not less than 512GB and probably more.
The data is continuous and caching does not help, because a different chunk of data is read each time. So why do I need that much RAM?


Find a motherboard that will meet your RAM needs. At 60 drives, your case choices are fairly limited. Depending on your motherboard and case, you'll know how many HBAs you'll need.

Any suggestions?
What is the price difference between new and refurbished?
 

Meyers

Patron
Joined
Nov 16, 2016
Messages
211
Lots of stuff going on here. I tend to shy away from RAID-Z2 for a variety of reasons; I only use it on a remote backup server. Everything in production, I run on mirrored vdevs.

For example, you have to take into consideration resilvering performance and things like that. Yes, you lose some storage using mirrored vdevs, but in some cases you gain considerable performance. I did a TON of research on this, and the general notion I got was "almost always use mirrors."
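
For a rough sense of the trade-off with 60 x 10TB drives (my assumptions: 15-wide RAID-Z2 vdevs versus two-way mirrors; raw capacity only, ignoring ZFS overhead):

Code:
DRIVES, TB = 60, 10

# RAID-Z2, four 15-wide vdevs: two drives per vdev go to parity
raidz2_usable = 4 * (15 - 2) * TB   # 520 TB, but only 4 vdevs of IOPS

# two-way mirrors: half the raw capacity, one vdev per pair
mirror_usable = (DRIVES // 2) * TB  # 300 TB, but 30 vdevs of IOPS
print(raidz2_usable, mirror_usable)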

The problem with such a large chassis is that you'll need a ton of PCIe slots for HBAs, depending on what you get. Without doing a ton of research on this, I'd probably go with something like this. It looks like you can do 24 drives per card, but you need an x8 PCIe slot per card. I've seen a few server mobos with several x8/x16 slots (kind of rare, I think), so for 60 drives you'd need three slots/cards.

I'd probably use the onboard SATA ports for mirrored OS SSDs. You think you won't need any caching, but you might, so I'd set aside two bays for the OS and maybe 2-4 for cache/log devices. I always try to leave room for changes and growth, just in case.

The old rule of thumb was "one GB of memory per TB of pool" or something like that, but I don't think that holds true any more. Here's what I'd do: when you find the mobo you want, find out how much memory it can hold. Then find out where the prices skyrocket, probably around 16GB DIMMs.

Then I would use the highest-capacity DIMM you can get for the best price. Start with something reasonable like 32GB, especially if you don't think you'll have a lot of cached hot data. You can always put in more RAM later. Leave lots of room for potential growth.
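
To put the old rule of thumb in perspective at this scale (illustrative numbers only):

Code:
usable_tb = 520                 # fully-populated pool from earlier posts
old_rule_gb = usable_tb * 1     # "1GB per TB" would demand 520GB of RAM

stages_gb = [32, 64, 128, 256]  # a staged plan: grow only if ARC hit
                                # rates show you actually need more
print(old_rule_gb, stages_gb)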

Be sure to check eBay for RAM too. You can usually find great deals depending on the type and size of DIMM. Burn them in with Memtest86+ or similar. Oh, and just in case: make sure you use ECC RAM.

Mind you, I'm not 100% sure on sizing RAM for ZFS for something this massive. But I *think* you'll be fine starting with 32GB and going from there.

Definitely consider refurbished/white-label drives on eBay (search "10TB enterprise", for example). You can save a considerable amount. Just burn them in (there's a whole burn-in procedure stickied in this forum, I think) and order a few spares, of course.

In fact, check eBay for everything if you're trying to keep prices down. But don't skimp too much obviously or you will be sorry.

I want to emphasize taking time to burn everything in, especially the drives, even if they're new. This is important.
 

iliak

Contributor
Joined
Dec 18, 2018
Messages
148
What about the 6048R-E1CR60N?
It looks decent. Does it work with FreeNAS?
 

Meyers

Patron
Joined
Nov 16, 2016
Messages
211
What about the 6048R-E1CR60N?
It looks decent. Does it work with FreeNAS?

Looks great. I'm sure it'll work. Check out Thinkmate. I just configured that system with 256GB of RAM (max 768GB) and 30 x 10TB drives (the minimum drive count you can configure) for $18,800.

Also check out 45drives, though they seem more expensive than Thinkmate. Mind you, I didn't spend a LOT of time messing with this; just a couple of quick builds.
 

Meyers

Patron
Joined
Nov 16, 2016
Messages
211
Be sure to do a lot of research on the memory sizing. There might be other factors I'm not considering.
 

iliak

Contributor
Joined
Dec 18, 2018
Messages
148
Be sure to do a lot of research on the memory sizing. There might be other factors I'm not considering.
RAM is the easiest problem to solve.
I'm just looking for decent hardware with good FreeNAS compatibility. Does the onboard RAID controller work with FreeNAS and support all 60 HDDs?
 

Meyers

Patron
Joined
Nov 16, 2016
Messages
211
RAM is the easiest problem to solve.

Then order as much as you can possibly afford from the start.

I'm just looking for decent hardware with good FreeNAS compatibility. Does the onboard RAID controller work with FreeNAS and support all 60 HDDs?

Ooh hrm. At first glance, I'm thinking this won't work. I am in the middle of something. I'll dig into this deeper in a bit.
 

iliak

Contributor
Joined
Dec 18, 2018
Messages
148
Thanks, I can wait; I'm not in a hurry. The order is for a month or two from now, so I'm just looking for the best options.

I'm still weighing this versus a server with two full rows of 2.5" SSDs (though the SSDs would be used, they'd be high grade).
 

Meyers

Patron
Joined
Nov 16, 2016
Messages
211
OK check this out:

https://www.thinkmate.com/system/stx-nl-xe36-2460v4-sas3/340008

18 x 10TB HDD (up to 36)

You can start at any number of disks of course.

Then I was thinking you could expand to more disks if you need it by adding this card to the x16 slot:

https://www.broadcom.com/products/storage/host-bus-adapters/sas-9302-16e#overview

Then one of these:

https://www.thinkmate.com/system/stx-jb-je44-0420-sas3

In fact, I think you can add several of those enclosures with that one card.

I left memory at a minimum of 64GB RAM based on your requirements and this post:

https://linustechtips.com/main/topic/738402-zfs-memory-requirements/

Some well-meaning people years ago thought that they could be helpful by making a rule of thumb for the amount of RAM needed for good write performance with data deduplication. While it worked for them, it was wrong. Some people then started thinking that it applied to ZFS in general. ZFS' ARC being reported as used memory rather than cached memory reinforced the idea that ZFS needed plenty of memory, when in fact it was just used in an evictable cache. The OpenZFS developers have been playing whack-a-mole with that advice ever since.

I am what I will call a second generation ZFS developer because I was never at Sun and I postdate the death of OpenSolaris. The first generation crowd could probably fill you in on more details than I could with my take on how it started. You will not find any of the OpenZFS developers spreading the idea that ZFS needs an inordinate amount of RAM though. I am certain of that.

The thread goes into a lot of discussion about this.
 

iliak

Contributor
Joined
Dec 18, 2018
Messages
148
Currently, I think this is my best option:
ebay link
I'll save some money on the chassis and spend it on refurbished enterprise SSDs.
(I don't intend to buy it from this seller; I'll buy from a local reseller that provides a warranty.)

Do I need to add 3 x SAS 9305-24i cards for it? Or cables?
I'd prefer to use 24-port SAS cards to save some PCIe slots for a 40Gb network card, so 2 PCIe slots are left spare (3 SAS, 1 network, 2 spare).
I guess I can get to under $5k without the SSDs and spend the savings on the SSDs.
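
My slot math, written out (assuming 6 usable PCIe slots on the board and direct-attach 24-port HBAs; if the backplane has an expander, fewer cards might do):

Code:
import math

slots_total = 6
drives = 60
ports_per_hba = 24                 # e.g. SAS 9305-24i, direct attach

hbas = math.ceil(drives / ports_per_hba)   # 3 cards
nics = 1                                   # one 40Gb network card
spare = slots_total - hbas - nics          # 2 slots left spare
print(hbas, spare)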

What do you think?
 

Meyers

Patron
Joined
Nov 16, 2016
Messages
211
This looks like a bare chassis, so you'll probably have to ask the reseller for details on the different configurations offered. You might have to buy your own motherboard, CPU, RAM, etc., and install them yourself.

It looks like the chassis has a built-in SAS expander, so you probably won't need more than one card. But again, you'll need to contact the reseller and get more details on this. I'd also ask about the type of card you'd need, especially considering you'll be using SSDs now.
 

iliak

Contributor
Joined
Dec 18, 2018
Messages
148
I am thinking of getting something like:
ebay link

Will it run FreeNAS?
Do I need to add more LSI cards?

Code:
Performance Specs:
Processor: 2x Intel Xeon E5-2670 V2 Deca (10) Core 2.5GHz
Memory: 256GB DDR3 (16 x 16GB DDR3 REG PC3-10600R, 1333MHz)
Hard Drives: None

RAID: 2x LSI 9300-8i HBA, JBOD mode (12Gb/s; marketed for FreeNAS/unRAID)
NIC: Integrated Intel X540 Dual Port 10GBase-T

Chassis/Motherboard specs:
Supermicro 4U, 72x 2.5" Drive Bays
Server Chassis/Case: CSE-417BE1C-R1K28LPB
Motherboard: X9DRH-iTF
Integrated IPMI 2.0 Management
Backplane: 3x BPN-SAS3-216EL1 24-port 2U SAS3 12Gbps single-expander backplanes, each supporting up to 24x 2.5" SAS3/SATA3 HDD/SSD
PCI Expansion Slots: Low Profile, 1x PCI-E 3.0 x16, 6x PCI-E 3.0 x8
HD Caddies: 72x 2.5" Supermicro caddies (no rear 2.5" drive bays)
Power: 2x 1280W Power Supply PWS-1K28P-SQ
Rail Kit: Supermicro Rev B Rail Kit, 4U
 

Meyers

Patron
Joined
Nov 16, 2016
Messages
211
Looks like this has everything you need. I'm curious how the backplanes are connected to the cards, though. You might have to double-check that when/if you get it. Worst case: if you need to add more 9300-8is, you'll have plenty of open slots to do so.
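
For a rough ceiling on what that setup can move (assuming each 9300-8i presents eight SAS3 lanes and the 72 bays split evenly across the two cards; theoretical line rates, not benchmarks):

Code:
LANE_GBITS = 12                  # SAS3 line rate per lane
LANES_PER_HBA = 8                # LSI 9300-8i: two x4 wide ports

uplink_gbits = LANES_PER_HBA * LANE_GBITS   # 96 Gb/s per HBA
uplink_gbytes = uplink_gbits / 8            # ~12 GB/s raw ceiling
per_drive_mbs = uplink_gbytes * 1000 / 36   # ~333 MB/s per SSD if one
                                            # HBA feeds 36 of 72 bays
print(uplink_gbits, uplink_gbytes, round(per_drive_mbs))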
 
Top