Specific build components list - up to 32GB RAM

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
What is the best option when building: a large system with 60 drives, or a small one with 8 drives?
You could start with eight drives and add another eight when you need more storage. You can keep adding vdevs to your pool over time and even add more external storage shelves. One of the systems I manage for work started out with only the internal drives; we added external SAS-attached storage shelves over time, and it is now running 124 drives across six shelves.
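
To put that expansion in command-line terms, adding another RAIDz2 vdev to an existing pool looks roughly like this (just a sketch; the pool name "tank" and the da* disk names are placeholders, and on FreeNAS you would normally do this through the web UI rather than the shell):
Code:
# Adds a second 8-drive RAIDz2 vdev to an existing pool named "tank".
# The device names are hypothetical examples.
zpool add tank raidz2 da8 da9 da10 da11 da12 da13 da14 da15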

This is not the same brand we have, but it is a similar product:
https://www.amazon.com/RAID-Machine-Expander-Rackmount-Enclosure/dp/B073BVYGRD
 

BKG

Dabbler
Joined
Mar 11, 2019
Messages
18
This is a PowerPoint presentation that does a fair job of introducing ZFS: https://1drv.ms/p/s!AipQGpAyAeDjgVrOZeXxYNhvq6WX
It is what I used to get started. There is a LOT to ZFS and it takes a while to take it all in.

The short form of the answer: in ZFS, the pool is a collection of vdevs (virtual devices), and all the vdevs in the pool are striped together like a RAID-0. Any data sent into the pool is split among all the vdevs, which means that if a single vdev fails, the entire pool fails. Redundancy in ZFS is done at the vdev level, so a vdev might be a mirror (kind of like RAID-1), or it could be RAIDz (kind of like RAID-5, one disk worth of parity), RAIDz2 (kind of like RAID-6, two disks worth of parity), or RAIDz3 (kind of like RAID-6+, three disks worth of parity).

For storing video (surveillance, right?) you probably want RAIDz2 vdevs, and to get a fast enough IO rate for your data, you will want many vdevs. I would suggest going with either six or eight drives per vdev (probably six) so you can have more vdevs. It is generally not a good idea to have more than about ten drives in a vdev, although I have heard of a system where they put 45 drives in a single vdev. The thing about vdev performance is that, for random IO, each vdev is generally equivalent to a single physical disk. All data written into the pool is automatically checksummed by ZFS so that it can be monitored for errors, regardless of the type of redundancy selected. As a general rule of thumb, if you need very high IOPS, for virtualization for example, you would use mirror vdevs so that you can have more vdevs without needing a massive number of drives.
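
To make that concrete, here is a rough sketch of how a pool built from two six-drive RAIDz2 vdevs would be created from the command line (the pool name "tank" and the da* disk names are placeholders; on FreeNAS the GUI builds the pool for you):
Code:
# Sketch only -- one pool striped across two 6-disk RAIDz2 vdevs.
zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 \
    raidz2 da6 da7 da8 da9 da10 da11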

Here is a link to a capacity calculator that I like when I am trying to decide on the number of drives and the capacity of the drives for a particular project:
https://wintelguy.com/zfs-calc.pl

Here is an example with 1 TB drives, 6 drives per vdev, and 6 vdevs in RAIDz2; just ignore the price.
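
As a rough sanity check on that example: with RAIDz2, each 6-drive vdev uses 2 drives for parity, leaving 4 data drives, so 6 vdevs x 4 data drives x 1 TB comes to about 24 TB of raw usable space, before ZFS overhead, the TB-to-TiB difference, and the usual advice to keep a pool below roughly 80% full. The calculator works those adjustments out for you.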

(calculator screenshot attached)
Chris, this is an extremely well-worded description of the system architecture, backed up by examples. I'd say the best by far. This should be placed on the home page under "ZFS in a Nutshell". I don't think I have any other questions; the rest is just technical getting-used-to when working with the interface. Many thanks for your help!
 

jctepl

Cadet
Joined
Mar 16, 2019
Messages
2
Looking at the RAM suggested, I can't figure out if it's on the verified memory list. Is it?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Looking at the RAM suggested, I can't figure out if it's on the verified memory list. Is it?
It is compatible, but not tested by Supermicro. They only have two vendors on the tested list:
https://www.supermicro.com/support/...D650E3EC19339&prid=0&type=0&ecc=0&reg=0&fbd=0

If you want to go that way, here are links to where you can buy it:

Supermicro MEM-DR380L-HL01-EU16 Hynix
https://www.amazon.com/Supermicro-MEM-DR380L-HL01-EU16-Memory-DDR3-1600MHz-Buffered/dp/B00A6GIFZA
or
Supermicro MEM-DR380L-SL01-EU16 Samsung
https://www.amazon.com/Supermicro-Certified-MEM-DR380L-SL01-EU16-Samsung-Buffer/dp/B00A74PF9K

They want more money for that, just to get the name brand. It is up to you if you want to pay more for the same kind of memory.
 

jctepl

Cadet
Joined
Mar 16, 2019
Messages
2
It is compatible, but not tested by Supermicro. They only have two vendors on the tested list:
https://www.supermicro.com/support/...D650E3EC19339&prid=0&type=0&ecc=0&reg=0&fbd=0

If you want to go that way, here are links to where you can buy it:

Supermicro MEM-DR380L-HL01-EU16 Hynix
https://www.amazon.com/Supermicro-MEM-DR380L-HL01-EU16-Memory-DDR3-1600MHz-Buffered/dp/B00A6GIFZA
or
Supermicro MEM-DR380L-SL01-EU16 Samsung
https://www.amazon.com/Supermicro-Certified-MEM-DR380L-SL01-EU16-Samsung-Buffer/dp/B00A74PF9K

They want more money for that, just to get the name brand. It is up to you if you want to pay more for the same kind of memory.
Thanks for the response and the build guide.
 

BKG

Dabbler
Joined
Mar 11, 2019
Messages
18
Hi Chris, can you please look at the attached PDFs? I didn't find an answer to this question in the manual or the PowerPoint (or perhaps I didn't pay attention when I read it). The question is: how many ZPOOLs can I have? I attached two PDFs, one of them being a 64-drive system with one ZPOOL and the other being a multi-ZPOOL setup where each ZPOOL is its own enclosure - is that possible? Thank you.
 

Attachments

  • 64 DRIVE SYSTEM.pdf
    36.3 KB
  • MULTI ZPOOL.pdf
    48 KB

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Hi Chris, can you please look at the attached PDFs? I didn't find an answer to this question in the manual or the PowerPoint (or perhaps I didn't pay attention when I read it). The question is: how many ZPOOLs can I have? I attached two PDFs, one of them being a 64-drive system with one ZPOOL and the other being a multi-ZPOOL setup where each ZPOOL is its own enclosure - is that possible? Thank you.
You don't need to put the drives in an external enclosure into a separate pool. You can, if you want to be able to detach that pool without interrupting other pools, but you don't need to do that. A single ZFS pool can span multiple enclosures. I have one at work that spans five enclosures, but for the example below I picked a different server with only two enclosures... Sorry for any confusion.
Here is what the output of zpool status would be for that:
Code:
zpool status
  pool: Pogo-60x10TB
state: ONLINE
  scan: scrub repaired 0 in 0 days 17:00:17 with 0 errors on Sat Jan 19 00:26:25 2019
config:

        NAME                                            STATE     READ WRITE CKSUM
        Pogo-60x10TB                                    ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/4b380e51-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/4c2a3fa0-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/4d316c90-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/4e279c43-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/4f3b17f7-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/5048ec53-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/51a5cb98-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/52b0c4f5-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/53c02fc8-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/54b2adad-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/55c0f47a-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/56c14c23-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
          raidz2-2                                      ONLINE       0     0     0
            gptid/584f5745-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/5956d7f8-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/5a5a6070-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/5b564fc1-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/5c630294-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/5d6a2431-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
          raidz2-3                                      ONLINE       0     0     0
            gptid/5ef1faf6-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/5ffb8961-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/6105cc1d-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/62120300-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/631e28a1-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/642dd1ea-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
          raidz2-4                                      ONLINE       0     0     0
            gptid/65d9d859-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/66de6408-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/67ee6f2c-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/6905e892-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/6a1e7078-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/6b2a5922-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
          raidz2-5                                      ONLINE       0     0     0
            gptid/6cf142e9-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/6e039cc3-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/6f1bdf1d-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/7026b5ed-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/712a87a1-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/723bc10e-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
          raidz2-6                                      ONLINE       0     0     0
            gptid/7416f42f-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/7531a5f8-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/76428144-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/7760f346-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/7866427c-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/798d143e-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
          raidz2-7                                      ONLINE       0     0     0
            gptid/7b875ee7-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/7c9c17f7-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/7db105ca-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/7ed28e84-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/7ff52ad1-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/80faf9ba-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
          raidz2-8                                      ONLINE       0     0     0
            gptid/831d73e2-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/843431a1-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/85534614-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/866f8059-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/8791862e-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/88abe418-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
          raidz2-9                                      ONLINE       0     0     0
            gptid/8ad8a1e1-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/8bf8c94e-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/8d14e578-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/8e2c0e1e-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/8f4586ee-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/905d5231-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
        logs
          gptid/df8c688c-1b50-11e9-bd4a-ac1f6b418926    ONLINE       0     0     0
        cache
          gptid/e1f09019-1b50-11e9-bd4a-ac1f6b418926    ONLINE       0     0     0

errors: No known data errors

This is the real output from a server with 60 drives across two 30-drive enclosures. Each vdev has RAIDz2 redundancy, and the vdevs are numbered from zero upward.
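
The scrub line near the top of that output comes from periodically scrubbing the pool, which reads back every block and verifies its checksum. Checking on a pool like that from the shell is just (using the pool name from the output above):
Code:
zpool scrub Pogo-60x10TB     # start a scrub; errors are repaired from redundancy where possible
zpool status Pogo-60x10TB    # show scrub progress and the per-device read/write/checksum error counters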

I do not manage it, and I didn't have any involvement in setting it up, but I know about a server that had 480 disks divided among 8 enclosures and configured as four separate pools, each pool having 15 vdevs with 15 disks per vdev at RAIDz2 redundancy. That gave them four pools attached to one server, with each pool able to hold a maximum of about 321 TB of data, for just under 1 PB of total storage capacity, and they could still add more disks if they needed to. At the time it was using 2 TB drives, so I would expect they have migrated to larger drives by now, as this was about three years ago.

I do not advocate for 15 drives in a vdev. I would say that 10 drives is a maximum for a RAIDz2 vdev and 11 drives for a RAIDz3 vdev, but I don't see RAIDz3 being needed unless the system is not being monitored on a regular basis. With RAIDz2 and cold spare drives on hand, you should be able to recover from a single disk failure. If you are not able to monitor the system closely, for example if it is in a remote location, then RAIDz3 is certainly better.

Although ZFS has few limits, before you go for a system that large you might want to consider other factors: for example, how long will it take to back up that data? What is the backup strategy? It might be a better plan to have several servers instead of one massive server.

I don't know if this answered your question or not, or if it created more questions. Let me know. It is worth talking about.
 

BKG

Dabbler
Joined
Mar 11, 2019
Messages
18
Chris, I think you did answer my question with the provided examples. I am not going to build a system as large as the one in the PDF I attached (multi ZPOOL); it was more for understanding the system's limitations and design. I am planning to build a system with FreeNAS that should hold video data for about a year and will be using RAIDz2. The hardware for the start will be two 24-bay cases with 10 TB drives, where one of them is the FreeNAS server. So, to dot the i's: if I create two ZPOOLs, I end up paying for the storage but, on the other hand, can now afford to lose an entire ZPOOL? Am I correct? And by building RAIDz2 I can afford to lose two drives max per vdev. I will now put together a hardware solution and upload it here in case someone is interested.
 

Attachments

  • 24 DRIVE SYSTEM.pdf
    34.3 KB

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
if I create two ZPOOLs, I end up paying for the storage but, on the other hand, can now afford to lose an entire ZPOOL?
Correct, each pool would be independent such that you could take one pool offline without influencing the other pool. I have taken pools offline for administrative purposes, but I have not had a pool fail, thankfully.
Am I correct? And by building RAIDz2 I can afford to lose two drives max per vdev.
That is correct. Each vdev can survive the failure of up to two drives. This plan will also give you a pretty easy expansion model if you use 24-bay expansion shelves to add capacity to the pool.

Have you already selected hardware for this?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080

BKG

Dabbler
Joined
Mar 11, 2019
Messages
18
Looks good! Just need to find all the parts - new. And two 64GB SSDs. Thanks for the link!

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Just need to find all the parts - new.
The price will be much higher for new. Please be prepared for a shock.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Looks good! Just need to find all the parts - new. And two 64GB SSDs. Thanks for the link!
If you are going for new hardware, you should contact ixsystems.com for a quote.
You can email them at: info@ixsystems.com or call them at: 1 (855) 473-7449 or 1 (408) 943-4100
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Here is an example from their site:
(screenshot of an example server configuration from their site)


You just need to talk to them for a quote.
https://www.ixsystems.com/ix-server-family/rackmount-servers/?ix-server=4224-2
 

BKG

Dabbler
Joined
Mar 11, 2019
Messages
18

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
This is a significantly less valuable case. It has no hot-swap power supply included with it, AND it does not include a SAS expander in the drive backplane, so a separate SAS expander would need to be purchased and installed in the system. It is also (in terms of materials) poorly made by comparison with the Supermicro chassis. It would be worth paying $400 more for shipping to get the Supermicro chassis alone.
If you are able to get the system board included with the chassis, that would make the Supermicro chassis worth another $600, just by not needing to pay that exorbitant price for a system board.
So, between the price of that system board and the value difference for the chassis, the Supermicro chassis with the included system board is worth at least $600 over their asking price and as much as $1000 more, especially if the vendor can put a proper HBA in it for you.

As for the HBA, you would want one like this for the internal drives:
https://www.newegg.com/Product/Product.aspx?Item=9SIADP08R39851
and one like this for the externally attached drives:
https://www.newegg.com/Product/Product.aspx?Item=9SIAEWP6S29066

The one you were pointing out is a 12 Gb/s SAS controller, and there is just no reason to spend that much money for spinning disks.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
It might even be worth paying a third-party shipper like this: https://www.shipito.com/en/
We had a couple of people in Australia who paid to have US gear shipped "down under" because they just couldn't get it any other way.
I really hate to see you buy sub-standard gear just because local vendors don't carry what you need.
 

BKG

Dabbler
Joined
Mar 11, 2019
Messages
18
This is a significantly less valuable case. It has no hot-swap power supply included with it, AND it does not include a SAS expander in the drive backplane, so a separate SAS expander would need to be purchased and installed in the system. It is also (in terms of materials) poorly made by comparison with the Supermicro chassis. It would be worth paying $400 more for shipping to get the Supermicro chassis alone.

If you are able to get the system board included with the chassis, that would make the Supermicro chassis worth another $600, just by not needing to pay that exorbitant price for a system board.
So, between the price of that system board and the value difference for the chassis, the Supermicro chassis with the included system board is worth at least $600 over their asking price and as much as $1000 more, especially if the vendor can put a proper HBA in it for you.

As for the HBA, you would want one like this for the internal drives:
https://www.newegg.com/Product/Product.aspx?Item=9SIADP08R39851
and one like this for the externally attached drives:
https://www.newegg.com/Product/Product.aspx?Item=9SIAEWP6S29066

The one you were pointing out is a 12 Gb/s SAS controller, and there is just no reason to spend that much money for spinning disks.
By all means, I will reach out to them with the quote request. I'm not giving up on the Supermicro setup; I just listed some cheaper alternatives (the case) in case it gets complicated. As an option, I might even look into getting a used Supermicro chassis here and stuffing it with new parts. I will see what those guys come back to me with. Thanks for the links.
 

BKG

Dabbler
Joined
Mar 11, 2019
Messages
18
Hi Chris, I reached out to iXsystems but haven't heard anything from them yet. In the meantime, I think I have a very solid chance of finding the Supermicro chassis I'm looking for here in Canada. If I find a used one, then depending on the backplane it comes with, it may change my setup with the cards that you suggested. I attached a PDF with drawings for two different backplane setups. Can you please correct me if I'm wrong? Thank you.
 

Attachments

  • SAS HBA.pdf
    33.2 KB

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I attached a PDF with drawings for two different backplane setups. Can you please correct me if I'm wrong? Thank you.
While having two servers connected to a single expansion shelf is possible with HA configurations, I don't think FreeNAS is equipped to support that at a software level, so you would likely not be able to do that.

Three controllers are not needed for the internal drives because you can obtain a SAS expander as an independent device. Please watch this video, as the person who made it explains it very well, with illustrations:

Explaining the IBM SAS-2 expander and how to do 24xHDD setup with only 2-port SAS controller
https://www.youtube.com/watch?v=qccpopxc_Uo
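
Once the expander is cabled in, a quick way to confirm that FreeNAS sees every drive behind it is to list the attached devices from the shell, for example:
Code:
# Lists every disk the HBA (and anything behind the expander) presents to the OS.
camcontrol devlist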
 