24 bay FreeNAS


Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Intel X550-T2 2x10GbE network card
This is the kind of 10Gb card that iXsystems sells for use in their systems:
https://www.amazon.com/FreeNAS-Dual-Port-Upgrade-Ports-Twinax/dp/B011APKCHE
It is an OEM version of the Chelsio T520-SO-CR.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
As I understand it, the OP is looking to get two servers for mass storage of backups and two servers that will be used, by way of iSCSI, to host virtualization. It is for a business, and the OP is aware that the backup servers do not need SLOG or L2ARC and that the servers hosting virtualization will need SLOG, but we had not discussed L2ARC or the possibility of using the same NVMe SSD to host both functions.
I was getting at the IOPS requirements for his VMs and other quantitative goals.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
This box, or rather these boxes (I'm going to buy two of them for this project), are going to be used in a business setting.
They will mainly be used for NFS by about 20 Linux/Unix workstations and servers, all using NIS. At the moment none of the people involved can say whether they will be using sync NFS or not.
<snip>
If these boxes work fine I'm going to buy another pair. Those will then be used for CIFS, NFS and also as VMware storage pools.

I was getting at the IOPS requirements for his VMs and other quantitative goals.
Sorry @kdragon75, the details were not provided. At this point, from what the OP said, I don't think the details are even available. Perhaps @tazinblack will check back in and give more detail.
 

tazinblack

Explorer
Joined
Apr 12, 2013
Messages
77
As I understand it, the OP is looking to get two servers for mass storage of backups and two servers that will be used, by way of iSCSI, to host virtualization. It is for a business, and the OP is aware that the backup servers do not need SLOG or L2ARC and that the servers hosting virtualization will need SLOG, but we had not discussed L2ARC or the possibility of using the same NVMe SSD to host both functions.
Yes, that's right. The two boxes for mass storage will not need SLOG or L2ARC, and I think 128 GB of RAM should be a good layout.
Also, I'm aware of SLOG and L2ARC. The problem is that at this point I cannot say how much performance I'm going to need out of the second pair. These boxes will be used for backups, with a lot of versions of those backups in the form of snapshots, plus some space for VM storage pools. Since this will not be the primary storage it will be used for testing, and I think it's always good to have some extra headroom. I know my company, and I usually appreciate having some extra flexibility.
Compared to the cost of the primary storage it does not matter whether I add these NVMe SSDs or not. The point is that they are state of the art now, so why should I buy SAS or SATA SSDs?
So everything is good here. I already have some FreeNAS boxes, and I'm aware that it's not easy to have the perfect layout when you don't know which direction you are going in.
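
Just to make the snapshot versioning and replication concrete, it would look roughly like this on the command line (only a sketch; the pool/dataset names, dates, and host name are placeholders, and in FreeNAS this would normally be set up through periodic snapshot and replication tasks in the web UI):
Code:
# hypothetical dataset and snapshot names, shown only as a sketch
zfs snapshot tank/backups@2018-08-01
# incremental send of everything since the previous snapshot to the second box
zfs send -i tank/backups@2018-07-01 tank/backups@2018-08-01 | \
    ssh backup-box2 zfs receive -F tank/backups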
 

tazinblack

Explorer
Joined
Apr 12, 2013
Messages
77
Given that no small NVMe SSD with power loss protection is available on the market here, and given your advice that it would be very over-designed anyway, I finally asked another distributor for a quote. The system will look like this:

  • SuperMicro chassis, 24 x 3.5" + 2 x 2.5" bays
  • SuperMicro board X10SRi-F, LGA 2011-3, single socket
  • Intel Xeon E5-1620 v4
  • CPU heatsink
  • 128 GB RAM
  • Supermicro SATA DOM 32 GB
  • LSI HBA 9300-8i SAS 12 Gb/s (as you said, support for it should be stable by now)
  • Intel X550-T2 2 x 10 GbE
  • 25 x HGST HUH721008AL5200 8 TB (one as a cold spare)
  • 2 x Intel SSD S4600 240 GB SATA for SLOG or L2ARC, since NVMe is not available at the moment and would be over-designed
I'm flexible with how I use the two SSDs; it will depend on what workload I end up seeing.

So for me everything looks good now. Feel free to grumble!
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
So for me everything looks good now. Feel free to grumble!
I don't see anything to grumble about.
Please do some testing and let us know what kind of results you get.

Have a look at these threads. They might give you some ideas:

Initial Setup/Design:
https://forums.freenas.org/index.php?threads/multiple-volumes-or-not.45545/#post-308867

A bunch of test results for reference:
https://forums.freenas.org/index.php?threads/slow-writes-on-ixsystems-hardware.46032/page-3
 

tazinblack

Explorer
Joined
Apr 12, 2013
Messages
77
So, I got the new hardware at last ;)
I think it would have been easier to get an audience with the Pope :D
It's the configuration shown above.
I installed 11.1-U5 and am playing with different zpool designs.
I'm leaning toward 2 or 3 vdevs in one big pool with RAIDZ2 (sketched below).
Since these two boxes are going to be used as a big archive with replication, I prefer capacity over IOPS.
Main connectivity will be NFS.
However, I would be interested in your advice.

If you want performance tests, let me know what and how.
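
For reference, the layouts I'm comparing would be created roughly like this (only a sketch; da0 .. da23 are placeholder device names, and in practice the pool would be built through the FreeNAS web UI):
Code:
# 2 x 12-disk RAIDZ2 vdevs in one pool (pool name "archive" is a placeholder)
zpool create archive \
    raidz2 da0  da1  da2  da3  da4  da5  da6  da7  da8  da9  da10 da11 \
    raidz2 da12 da13 da14 da15 da16 da17 da18 da19 da20 da21 da22 da23

# or 3 x 8-disk RAIDZ2 vdevs
zpool create archive \
    raidz2 da0  da1  da2  da3  da4  da5  da6  da7  \
    raidz2 da8  da9  da10 da11 da12 da13 da14 da15 \
    raidz2 da16 da17 da18 da19 da20 da21 da22 da23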
 

tazinblack

Explorer
Joined
Apr 12, 2013
Messages
77
I did some benchmarking in the meantime.
Here are my results in short:

Scenario:
2 identical FreeNAS boxes, configured as shown above.
1 connected via 2 x 1 GBit/s LACP link aggregation.
1 connected via 2 x 10 GBit/s LACP link aggregation (lagg setup sketched below).
1 test VM running Gentoo Linux under an ESXi 6.5 server, connected via 2 x 10 GBit/s.
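
The LACP bonds correspond to something like this at the FreeBSD level (only a sketch; in FreeNAS the lagg is created through the web UI, and the ix0/ix1 interface names and the address are placeholders):
Code:
# create an LACP lagg over the two 10 GbE ports (placeholder names and address)
ifconfig ix0 up
ifconfig ix1 up
ifconfig lagg0 create
ifconfig lagg0 up laggproto lacp laggport ix0 laggport ix1 192.168.10.20/24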

I used sysbench with these commands:
Code:
sysbench --test=fileio --file-total-size=4G prepare
sysbench --test=fileio --file-total-size=4G --file-test-mode=rndrw --max-time=240 --max-requests=0 --file-block-size=4K --num-threads=4 --file-fsync-all run
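
(Note: --file-fsync-all forces an fsync after every write, so this is effectively a pure sync-write test. The test files created by prepare can be removed afterwards with the matching cleanup step, not shown above:)
Code:
sysbench --test=fileio --file-total-size=4G cleanup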



(Values in MiB/s)

----------------------------------------------------------------------------
without ZIL

1 vdev 24 x 8 TB Z3 @1 GBit/s Network:
read 0.84 write 0.56
1 vdev 24 x 8 TB Z3 @10 GBit/s Network:
read 0.83 write 0.55

2 vdevs 12 x 8 TB Z2 @1 GBit/s Network:
read 0.91 write 0.61
2 vdevs 12 x 8 TB Z2 @10 GBit/s Network:
read 0.90 write 0.60

3 vdevs 8 x 8 TB Z2 @1 GBit/s Network:
read 0.95 write 0.63
3 vdevs 8 x 8 TB Z2 @10 GBit/s Network:
read 0.94 write 0.63

----------------------------------------------------------------------------
with ZIL

1 vdev 24 x 8 TB Z3 ZIL 10 GB @1 GBit/s Network:
read 35.82 write 23.88
1 vdev 24 x 8 TB Z3 ZIL 10 GB @10 GBit/s Network:
read 52.63 write 35.08

2 vdevs 12 x 8 TB Z2 10 GB ZIL @1 GBit/s Network:
read 33.22 write 22.15
2 vdevs 12 x 8 TB Z2 10 GB ZIL @10 GBit/s Network:
read 50.20 write 33.47

3 vdevs 8 x 8 TB Z2 ZIL 10 GB @1 GBit/s Network:
read 21.94 write 14.62
3 vdevs 8 x 8 TB Z2 ZIL 10 GB @10 GBit/s Network:
read 50.99 write 33.99

----------------------------------------------------------------------------

Since I haven't had the time and resources to build a completely separate testing environment, I used our production switches and ESXi servers.
But I think the results show the same trends as they would with completely separate hardware.

The separate ZIL (SLOG) brings a lot more write throughput, as expected, since I only tested sync writes.
I'm a bit surprised about the different read throughput, since everything should be in memory.
Also, the "extra power" that the lower latency of the 10G connection brings is great.

So now I have to think about it a bit, but I'm leaning toward a single vdev of 24 x 8 TB in RAIDZ3 with a 10 GB SLOG. This setup looks like it brings reasonable performance, enough redundancy, and also the most usable space and flexibility.
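
For what it's worth, the raw capacity difference between these layouts works out roughly like this (parity disks only, ignoring ZFS metadata and padding overhead):
Code:
# rough data-disk capacity per layout (8 TB drives, parity subtracted, overhead ignored)
echo "1 x 24-wide RAIDZ3: $(( (24 - 3) * 8 )) TB"       # 168 TB
echo "2 x 12-wide RAIDZ2: $(( 2 * (12 - 2) * 8 )) TB"   # 160 TB
echo "3 x  8-wide RAIDZ2: $(( 3 * (8 - 2) * 8 )) TB"    # 144 TB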
 

tazinblack

Explorer
Joined
Apr 12, 2013
Messages
77
Good news: the second pair of these boxes was ordered yesterday. Since they will mostly be used for archiving backups, I ordered them with one Intel Optane 900P 280 GB instead of two S4600s. I know the 900P has no power loss protection, but I will only use NFS from VMware for testing scenarios with these boxes, so that's no problem here.
They quoted about 20 days delivery time.
I'm very curious about the benchmarks and will share them with you.
 

tazinblack

Explorer
Joined
Apr 12, 2013
Messages
77
So here they are, my new boxes. I only have a little time left before my holiday; let's see how far I get...
 

tazinblack

Explorer
Joined
Apr 12, 2013
Messages
77
OK, back from holiday. Admittedly not a lot of time, but I did some tests just to get a feeling for the performance, and I have to say that I am impressed.
I configured the 900P with one 20 GB partition for SLOG and one 200 GB partition for L2ARC. The remaining space is left empty to give the device some spare area for replacing worn-out cells.
I know that it's not recommended to use the same device for both SLOG and L2ARC, but I think that nowadays, with NVMe on PCIe, this should not be a problem, and since the endurance of the Optane should be a lot better than that of usual SSDs, I see no reason not to.
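
For reference, that partitioning and pool attachment corresponds to something like the following at the CLI (only a sketch; the pool name "tank", the GPT labels, and the nvd0 device name are placeholders):
Code:
# carve the Optane 900P into a small SLOG and a larger L2ARC partition
gpart create -s gpt nvd0
gpart add -t freebsd-zfs -s 20G  -l slog0  nvd0
gpart add -t freebsd-zfs -s 200G -l l2arc0 nvd0
# attach them to the pool as log and cache devices
zpool add tank log   gpt/slog0
zpool add tank cache gpt/l2arc0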

With this configuration, everything connected via 10G Ethernet, I achieve up to 370 MB/s while moving a single VM from my NetApp to this FreeNAS box.
This is roughly a factor of 100 compared with not using a SLOG, and somewhere between 5 and 10 times faster than with an SSD attached via SATA or an M.2 SSD using the SATA protocol.

It would have been nice to compare against a regular NVMe-over-PCIe SSD, but I have none of those, and I think most of the gain comes from the very low latency of the Optane.
So, as a conclusion, I think this is my solution for this kind of setup. The only thing I'm missing is power loss protection on this card. The performance satisfies my needs.

Thanks for all your advice!
 