24 bay FreeNAS

Status
Not open for further replies.

tazinblack

Explorer
Joined
Apr 12, 2013
Messages
77
Hi,

I'm planning a new 24 bay FreeNAS with the following components:

  • Supermicro 24 bay chassis with 12Gb SAS Backplane
  • redundant power supply
  • X10SRL-R Mainboard, Intel c612 chipset
  • IPMI 2.0 with KVM and virtual media over LAN
  • Xeon E5-1620V4 CPU
  • 4 x 32GB DDR4 2666 reg. ECC DIMM RAM
  • LSI SAS9207-8I SAS HBA
  • Intel X550-T2 2x10GbE network card
  • 25 HGST HUH721008AL5200
  • Samsung SM963, 480GB NVMe U.2 SSD PCIe 3.0x4, min 3153 TBW
  • Maybe a small consumer 2.5" SSD for the system instead of a USB stick
I'm not sure about the Intel network card X550-T2. An alternative would be a QLogic QLE3442.
Chelsio and Emulex 10GbE NICs are not available.



So will it FreeNAS?
Tell me your comments; they are very welcome!

Thanks a lot
tazinblack
 

Maelos

Explorer
Joined
Feb 21, 2018
Messages
99
I'd have to wait on one of the members to verify the HBA, but it looks good. That is a very expensive setup, though. Do you need all of that? Getting an X10 or X9 generation board, processors, and RAM will save you a bundle, which can then be used to plan for a replacement system a few years down the line. There are a lot of good deals on eBay for used equipment, especially in combo packs like the one I found, which is detailed in my signature and the linked thread.

What are your requirements? Are you running FreeNAS alone? What types of jails do you plan to use? What type of content are you storing, backing up, serving, etc?
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Just doing a quick eyeball, yeah. What's your use case/workload? Is the SM963 for SLOG or L2ARC?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Samsung SM963, 480GB NVMe U.2 SSD PCIe 3.0x4, min 3153 TBW
I wonder what this is for?
LSI SAS9207-8I SAS HBA
This is a 6Gb/s SAS controller. Why are you buying a chassis with a 12Gb/s expander backplane?
HGST HUH721008AL5200
I see, it is for these 12Gb/s SAS hard drives, but the SAS controller is still not going to go faster than 6Gb/s.
Maybe a small consumer 2.5" SSD for the system instead of a USB stick
This is a good idea. No problem there.
I'm not sure about the Intel network card X550-T2
Should be fine, but overall I think it would be good to know what purpose the system will be put to.
Is it for business or personal / home use? Will you be using it as storage for virtual machines?
We need more information to give you better feedback.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
  • X10SRL-R Mainboard, Intel c612 chipset
  • IPMI 2.0 with KVM and virtual media over LAN
  • Xeon E5-1620V4 CPU
  • 4 x 32GB DDR4 2666 reg. ECC DIMM RAM
PS. You could save some serious money on this if you are willing to consider an X9 series board instead. The registered DDR3 memory is significantly less expensive.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
HGST HUH721008AL5200
PPS. I looked up the specs on these drives; even with the 12Gb/s SAS interface, the actual transfer rate is not going to exceed 237 MB/s. That is about the theoretical max of SATA-2... I am seeing about that speed from the SATA Red Pro drives we have here. So you could probably go with 6Gb/s SAS, use SATA drives and save some cash. Depending on how you plan to use it, I suppose.
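If you want to sanity-check the sequential numbers on whichever drives you end up with, FreeBSD's diskinfo can run a quick transfer-rate test from the FreeNAS shell (da0 is just an example device name):
Code:
diskinfo -tv da0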
 

Maelos

Explorer
Joined
Feb 21, 2018
Messages
99
What he (Chris) said. I will have to put a bunch of helpful links in my signature as Chris has done and write up a guide. If you read through some of the posts in this forum, the resources section (along the top menu bar), and the hardware forum, you will find lots of good posts. If you have more specific questions, I am sure someone here can help you.
 

tazinblack

Explorer
Joined
Apr 12, 2013
Messages
77
Thank you so far!

This box, or rather these boxes (I'm going to buy two of them for this project), are going to be used in a business setting.
They will mainly serve NFS to about 20 Linux/Unix workstations and servers, all using NIS. At the moment none of the people involved can tell me whether they will be using sync NFS or not.
The usual problem is that once they find out how handy these boxes are, they will use them for anything you can imagine.
That is why I'm planning on the SM963 SSD, so it can be used as SLOG or as L2ARC, or, if it isn't needed for that, for something completely different.
The reason I plan on a 6Gb/s SAS HBA is that they are rock solid, and I'm not sure whether the newer 12Gb/s generation is still buggy these days.
Sure, I could save some money, but I plan for these boxes to be in use for about 5-7 years, and I hope the newer hardware will be available for longer.
Also, I now have a budget, and compared to cloud solutions or NetApp hardware, the cost of these two boxes is really, really small.
The second box will be used as a replication target and will sit at a different location for disaster recovery reasons.

If these boxes work out fine, I'm going to buy another pair. Those will then be used for CIFS, NFS and also as VMware storage pools.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
CIFS, NFS and also as VMware storage pools.
Keep in mind these are potentially completely different workloads and may need different configurations, at a minimum.

No one size fits all in storage despite what vendors tell you.

Keep in mind you can have multiple pools with different vdev layouts. You could use one pool with mirrors and a SLOG for VMware and another with RAIDZ2 for CIFS and no SLOG/L2ARC.
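Very roughly, that would look something like this from the shell (pool names and the daX/nvdX device names are just placeholders; on FreeNAS you would normally build the pools from the GUI, which uses gptids rather than raw device names):
Code:
# pool for VMware: striped mirrors plus a dedicated log (SLOG) device
zpool create vmpool mirror da0 da1 mirror da2 da3 log nvd0p1
# pool for bulk CIFS storage: a single RAIDZ2 vdev, no SLOG/L2ARC
zpool create filepool raidz2 da4 da5 da6 da7 da8 da9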
 
Last edited by a moderator:

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Plenty of people in the forum have been using the LSI 9300 series of cards (12Gb) and my understanding is that they are working reliably.
Hardware details will be slightly different for different workloads. I will post some suggestions when I get to my office.

As this is for a business, am I safe to assume that the hardware must be new?

Have you looked at the FreeNAS Certified systems that iXsystems sell?

 

tazinblack

Explorer
Joined
Apr 12, 2013
Messages
77
Yes, you're right, the hardware has to be new. Also, I do not want to buy 50 parts from 50 different dealers. I looked at iXsystems some time ago, and at that time they did not have resellers in Europe or Germany, and my emails asking where to buy were never answered :(
But the NAS systems I build are usually not the main storage. Mostly I use them as test systems or write backups to them and keep versions there. So I do not need professional support; I am able to switch to the replica and repair the broken one.
These two boxes are going to be used to store raw data from finished simulations and construction data. Most of it will probably never be used again but needs to be kept just in case.
So performance here will not be a problem. I think most access will be sequential.
The requirements here are NFS access, at minimum 100 TB of usable capacity, and some kind of protection against hardware failure, which is covered by having two boxes in two different locations with snapshots and replication. The next two boxes, however, will be used for a more complex workload, so this is a good way to check what performance this setup can handle.
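The plan is to use the GUI's periodic snapshot and replication tasks, which as far as I understand boil down to zfs send/receive under the hood; roughly like this, with placeholder pool, dataset and host names:
Code:
# on the primary box: snapshot and send the full stream to the DR box
zfs snapshot tank/simdata@2018-03-01
zfs send tank/simdata@2018-03-01 | ssh dr-box zfs receive tank/simdata
# later snapshots only need to send the incremental difference
zfs send -i tank/simdata@2018-03-01 tank/simdata@2018-03-02 | ssh dr-box zfs receive tank/simdata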

I'm not sure how to group the disks into vdevs right now, but I think it is going to be either 2 vdevs with 12 disks each in RAIDZ3 or 3 vdevs with 8 disks each in RAIDZ2.
As far as I understand it, the layout with three vdevs will be faster, so I would prefer that one. Since I cannot fit 25 disks to keep one as a hot spare, I will buy two extra disks as cold spares so I can replace a broken disk quickly.
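When a disk does fail, the swap itself should be quick; from the shell it is roughly the following (FreeNAS normally handles this through the GUI, and the daX device names are just placeholders):
Code:
# find the faulted disk
zpool status -x
# take it offline, physically swap in the cold spare, then resilver onto the new disk (da24 here)
zpool offline tank da7
zpool replace tank da7 da24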
 
Last edited by a moderator:

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
3 vdevs with 8 disks each in RAIDZ2
There is a slight improvement in performance with this configuration because of having more vdevs and fewer disks per vdev dedicated to redundancy. With those HGST drives, you will probably be fairly safe. The annual failure rate on them is around 2% if I recall correctly, so you are not likely to see more than one failure out of 50 drives in the first year, and you might not even have one. I commissioned a storage server last year with 60 of the Western Digital Red Pro 6TB drives and had 3 drives fail in the first 6 months. It had me worried, but there hasn't been a problem since, and the drives that did fail just had bad sectors; nothing catastrophic.
Since I cannot fit 25 disks to keep one as a hot spare, I will buy two extra disks as cold spares so I can replace a broken disk quickly.
Sounds like a good plan. That is how I handle it also.
Samsung SM963, 480GB NVMe U.2 SSD PCIe 3.0x4, min 3153 TBW
This bit is not needed at all on the backup storage servers; you can drop it and save some cost.
LSI SAS9207-8I SAS HBA
It is really your call about using this or going to the 12Gb/s SAS controller. It should not change the performance overall because the mechanical drives are the limit to performance. If you were going to do SSDs, it would be different.
Otherwise, this looks like a fine build for bulk storage. You could probably save a couple thousand dollars on the cost if you could go with surplus components, but I have the same situation where I work. They want new gear and they want it built by the vendor, not as parts that we assemble in-house.

The same hardware should work fine for the VM storage with a few changes. I am guessing that will be iSCSI? You will want to have the pool configured as mirrors. This will give you 12 vdevs in the pool, with each vdev being a mirror of two disks. You might want to choose smaller drives, based on the amount of storage you will need, but the larger drives are generally also faster. Based on the specs I read yesterday on these HGST drives, this should give you around 3680 IOPS. I did the math based on 50% read, 50% write, and it is just an estimate, so your observations may be different when you actually get the system and start testing it. The latency of spinning disks will still make VMs slow unless you add a SLOG, and the one I would suggest is this:
Intel SSD DC P3700 Series SSDPEDMD400G401 400GB, 1/2 Height PCIe 3.0, 20nm, MLC
https://www.newegg.com/Product/Product.aspx?Item=9SIA8PV5VV1499
I am sure you can source that locally or get a vendor to integrate it. There isn't much point in having a larger drive as ZFS will only use an amount of SLOG storage equal to the amount of data that can be transferred in via the network connection within 5 seconds. I am going from memory on that, so it might have changed or I might not remember it correctly. Hopefully someone will correct me if I am mistaken.
You might also benefit from L2ARC, but you can do some testing with your workload and monitor ARC usage to see if you think it is needed. It can easily be added later.
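One quick way to watch it from the shell while you test (the FreeNAS reporting graphs show the same data; these sysctls should be present on any FreeBSD-based system):
Code:
# raw ARC hit/miss counters and current ARC size
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses kstat.zfs.misc.arcstats.size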
 

tazinblack

Explorer
Joined
Apr 12, 2013
Messages
77
Looks like my dealer can't deliver the Samsung SM963, and the Intel SSD DC P3700 is end of life, according to him. He suggests a Micron 7100 MAX 400GB NVMe, which I think is a joke.
It is very slow, and you can't find any TBW figure in the datasheet. I don't know why they build such slow devices with NVMe support.
I would buy the SM963 somewhere else, but it is only sold as an OEM part without warranty for end users, so I hope they find an alternative.
 
Last edited by a moderator:

wblock

Documentation Engineer
Joined
Nov 14, 2014
Messages
1,506
Keep in mind you can have multiple pools with different vdev layouts. You could use one pool with mirrors and a SLOG for VMware and another with RAIDZ2 for CIFS and no SLOG/L2ARC.

Just wanted to point out here that people often want to partition SLOG or L2ARC devices and use them for several pools. That divides up the available bandwidth and is usually self-defeating. With fast NVMe devices it might actually be practical, but I'm not aware of any testing.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Just wanted to point out here that people often want to partition SLOG or L2ARC devices and use them for several pools. That divides up the available bandwidth and is usually self-defeating. With fast NVMe devices it might actually be practical, but I'm not aware of any testing.
For large and performance-critical applications there is some truth to this.

Keep in mind the SLOG is normally extremely small; most of the time 1GB is way more than is needed, as it will never need to hold more than 3 transaction groups, and a transaction group (by default) lasts 5 seconds. Therefore, in almost all cases your SLOG only needs to accommodate 5 seconds of the maximum throughput TO the box, and only if those writes are sync writes. Some back-of-the-napkin math: 2 x 10Gb/s = 20Gb/s = 2.5GB/s, and 2.5GB/s x 5s = 12.5GB at the very most. What matters here are IOPS and latency; throughput, not so much. If you're from the networking world, think PPS, not BPS.
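Spelled out, assuming the default 5 second transaction group interval and both 10GbE ports running flat out:
Code:
# 2 x 10Gb/s = 20Gb/s; divide by 8 for GB/s, multiply by 5 seconds per transaction group
echo "20 / 8 * 5" | bc -l   # -> 12.50 GB worst-case burst of sync writes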

The SLOG is also write-only in normal operation. It is only ever read if the system crashes and what was in RAM did not get written.

The L2ARC is only written to from the ARC as pages are evicted, so depending on your read rate and patterns it could be slow to fill, and it should not drastically (if at all) impact the write performance of the SLOG. The L2ARC is (hopefully) read-intensive and not so much write-intensive.

Partitioning an NVMe for SLOG and L2ARC does not divide bandwidth, it shares it. The Samsung SM963 uses 4 PCIe 3.0 lanes, providing about 3940 MB/s of bandwidth if installed in a matching slot. Unless you expect that kind of sustained bandwidth through your NAS, or at least 900 MB/s of sustained sync writes (still not how that works), you are wasting cash and not engineering a solution.

Still don't believe me? Check your L2ARC and SLOG throughput under normal load:
Code:
zpool iostat -v
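zpool iostat -v tank 5   # optionally add a pool name and an interval in seconds to keep sampling under load ("tank" is just an example)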


Personally, I plan to use two 400GB SSDs in a soft RAID1 and partition the resulting device into 1GB and 60GB for SLOG and L2ARC respectively. This is for iSCSI, or down the road Fibre Channel, VMFS datastores of about 7TB each (more LUNs = more iSCSI connections = better load balancing and more IO queues to the backend).
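If anyone wants to see roughly what that looks like on the command line, here is a sketch; note that I'm letting ZFS mirror the log partitions here instead of building a soft RAID1 first and partitioning it, and the sizes, labels and nvdX device names are just examples:
Code:
# carve each NVMe drive into a small SLOG partition and a larger L2ARC partition
gpart create -s gpt nvd0
gpart add -t freebsd-zfs -s 1G -l slog0 nvd0
gpart add -t freebsd-zfs -s 60G -l l2arc0 nvd0
gpart create -s gpt nvd1
gpart add -t freebsd-zfs -s 1G -l slog1 nvd1
gpart add -t freebsd-zfs -s 60G -l l2arc1 nvd1
# mirrored SLOG, striped L2ARC (cache devices cannot be mirrored)
zpool add tank log mirror gpt/slog0 gpt/slog1
zpool add tank cache gpt/l2arc0 gpt/l2arc1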

Now if you are running VMs on this box that's another can of worms and pitfalls.

EDIT: Lots of edits, I got side tracked, I'm not a writer, I'm arguing on the internet, More edits.
 
Last edited by a moderator:

wblock

Documentation Engineer
Joined
Nov 14, 2014
Messages
1,506
Partitioning an NVMe for SLOG and L2ARC does not divide bandwidth, it shares it.
I think we are saying the same thing from different viewpoints. Bandwidth to any particular SSD is fixed. If that SSD is partitioned to serve as multiple SLOG or L2ARC devices or both, they all contend for that same total bandwidth. This might not matter unless performance is an issue, but SLOG and L2ARC devices aren't added unless performance is an issue.
 
Last edited:

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Looks like my dealer can't deliver the Samsung SM963, and the Intel SSD DC P3700 is end of life, according to him. He suggests a Micron 7100 MAX 400GB NVMe, which I think is a joke.
I just looked on the Intel site:
https://www.intel.com/content/www/u.../dc-p3700-series/dc-p3700-400gb-aic-20nm.html
They don't say that they have discontinued it. There are newer, higher capacity models, but you don't need capacity. This is about speed.
I would buy the Intel SSD DC P3700 separately and integrate it myself. I don't know how the Samsung drive compares in price, but the Intel drive can be had for around $650 in the US and it can be ordered retail from many vendors. I looked up the Samsung and they only list it as a "bulk" item. I am not sure how they expect anyone to order that.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
I think we are saying the same thing from different viewpoints. Bandwidth to any particular SSD is fixed. If that SSD is partitioned to serve as multiple SLOG or L2ARC devices or both, they all contend for that same total bandwidth. This might not matter unless performance is an issue, but SLOG and L2ARC devices aren't added unless performance is an issue.
Again, the SLOG is less about bandwidth and more about IOPS. When you start looking at IOPS, it is important to profile your workload. This matters because if you don't have number-based performance targets, all you can do is say "this is the best my budget can do, so I hope it works" and, once it is built, ask "does this feel fast enough?"

This seems to be a systemic issue in this community. We have set minimums ("must use ECC", "must have separate SLOG and L2ARC", "always add as much RAM as you can afford") and we impose this thinking on others without talking about the details. We should not just recommend what we are familiar with and what works for us, but ask detailed questions about workloads and do the math. Do you need an NVMe SLOG for a media server? No, it probably will never use it. Do you need dual hexa-core CPUs for 20 VMs running on another host? It depends on the workloads and interconnects.

To the OP, sorry to hijack your thread. Will it FreeNAS? YES! I have heard that many vendors prefer Chelsio over Intel 10GbE NICs in FreeBSD, but I am sure that is chipset dependent. You can check the FreeBSD 11 release notes and see if your card is supported (I'm sure it is).
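Once the card is in the box, a quick way to see which driver it attached to (as far as I know the X550 is covered by the ix(4) driver in FreeBSD 11):
Code:
pciconf -lv | grep -B4 -i network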

My question is: What do you need this box to do? What are your goals?
 
Last edited by a moderator:

wblock

Documentation Engineer
Joined
Nov 14, 2014
Messages
1,506
My warning of a common misconception was not meant to be read as a rule, and I apologize if it was read that way. The guidelines we have here are for safety, and most of them came about because of people getting burned. I encourage you to post the performance tests of your configuration in a new thread.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
My question is: What do you need this box to do? What are your goals?
The OP was (as I understand it) looking to get two servers for mass storage of backups and two servers that would be used, by way of iSCSI, to host virtualization. It is for a business, and the OP is aware that the backup servers do not need a SLOG or L2ARC and that the servers that will host virtualization will need a SLOG, but we had not discussed L2ARC or the possibility of using the same NVMe SSD for both functions.
 