First implementation of FreeNAS in production environment

Status
Not open for further replies.

GREBULL

Cadet
Joined
Dec 10, 2018
Messages
4
Hi all.
A year ago I set up a FreeNAS box at the company for backups, and it is working great.
Our NetApp array is now reaching its limits, so we have decided to update the storage system to FreeNAS.

This implementation must meet the following requirements:
* - Storage for an ESXi Essentials Plus Kit with 3 hosts and about 25 virtual machines (iSCSI).
* - Most are Linux, so they consume little.
* - SQL database with 50 users.
* - Storage for office applications (ofimática), SolidWorks, AutoCAD, EPLAN.
* - It must provide 24 TB of effective capacity.

The hardware we have chosen is the following:
Supermicro SuperStorage Server 5029P-E1CTR12L - 12x SATA/SAS - LSI 3008 12G SAS - 8x DDR4 - 800W redundant
Supermicro X11SPH-nCTF motherboard
Intel Xeon Gold 6130 (SKL-6130), 16 cores, 2.1 GHz, 22 MB cache
128 GB RAM (8x 16 GB DDR4-2666 ECC REG, 1.20 V)
8x 4 TB SAS enterprise HDD (hus5204, ISE, 512e) (RAIDZ1)
4x Intel D3-S4510 960 GB SSD, 2.5" SATA 6 Gb/s, TLC, 7 mm (RAIDZ2)
2x Intel DC P4510 1 TB NVMe, PCIe 3.1 x4, 3D TLC, 2.5" 15 mm, 1 DWPD (caches, mirrored)
2x 128 GB SATA DOM, SATA 6.0 Gb/s Disk on Module, MLC, vertical (boot, mirrored)
Supermicro AOM-SAS3-8I8E-LP SAS 3.0 12 Gb/s 8-port host bus adapter
Dual 10GBase-T LAN (Intel X722 + X557)

Configuration for storage:
2x SATA DOM, mirrored, for boot
2x Intel DC P4510 1 TB NVMe, mirrored, for caches
8x 4 TB SAS disks, 4x2, RAIDZ1
  R/W IOPS: 320
  R/W speed: 1,530 MB/s
  Effective storage: 16 TB
  Fault tolerance: 1x vdev
4x SSD, 2x2, mirrored
  Read IOPS: 380,000
  Write IOPS: 72,000
  Read speed: 2,240 MB/s
  Write speed: 1,020 MB/s
  Effective space: 2 TB
  Fault tolerance: 1x vdev
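
As a rough sanity check on those figures, here is a minimal capacity calculation. The assumptions are mine, not part of the build above: the "4x2" HDD grouping is read either as two 4-disk RAIDZ1 vdevs or as four 2-way mirrors (the notation is ambiguous), the SSDs end up as two 2-way mirrors, and ZFS metadata/slop overhead is ignored.

# Rough usable-capacity check for the layout above (assumptions as stated)
HDD_TB = 4      # per-disk size of the 4 TB SAS drives
SSD_TB = 0.96   # per-disk size of the 960 GB Intel D3-S4510

hdd_as_raidz1 = 2 * (4 - 1) * HDD_TB   # two RAIDZ1(4) vdevs -> 24 TB usable
hdd_as_mirrors = 4 * HDD_TB            # four 2-way mirrors  -> 16 TB usable
ssd_as_mirrors = 2 * SSD_TB            # two 2-way mirrors   -> ~1.9 TB usable

print(f"HDD pool, 2x RAIDZ1(4): {hdd_as_raidz1} TB")
print(f"HDD pool, 4x mirror(2): {hdd_as_mirrors} TB")
print(f"SSD pool, 2x mirror(2): {ssd_as_mirrors:.2f} TB")

Either reading falls short of the 24 TB target, which is the doubt raised below.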

This is where I have doubts.
As you can see, I do not reach the 24 TB effective.
What do you recommend: going up to 6 TB hard drives?
Adding more 4 TB disks in another Supermicro chassis?
A different RAID configuration?
The idea is to use the SSDs for the virtual machines and SQL, and the SAS disks for general storage.
For storage we use NTFS, iSCSI, and CIFS.

I would appreciate recommendations before buying the hardware.

Thank you all.
Greetings.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Storage for an ESXi Essentials Plus Kit with 3 hosts and about 25 virtual machines (iSCSI).
That is not insignificant.
* - Most are Linux, so they consume little.
* - SQL database with 50 users.
* - Storage for office applications (ofimática), SolidWorks, AutoCAD, EPLAN.
* - It must provide 24 TB of effective capacity.
This is a lot to ask from a 2U server with only 12 drives.
Supermicro SuperStorage Server 5029P-E1CTR12L - 12x SATA/SAS - LSI 3008 12G SAS - 8x DDR4 - 800W redundant
Just my opinion, but I think this is too small of a chassis. It does not give you enough disk bays to allow for the required storage capacity and IOPS.
The IOPS are more an issue than the storage capacity because you can go to larger disks for the additional capacity but that does not increase the IOPS. The IOPS only increase with additional vdevs and vdevs take physical disks. If this is going to be a business critical system, I would suggest you contact the sales team at iXsystems and they will have an engineer work with you to develop a system that you can rely upon to do the things you need it to do.
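
To put numbers on "IOPS only increase with additional vdevs", here is a back-of-the-envelope sketch. The assumptions are mine: each RAIDZ vdev delivers roughly the random IOPS of a single member disk, a 2-way mirror gives roughly one disk for writes and two for reads, and a 7200 rpm SAS drive does about 200 random IOPS (a hypothetical round figure).

# Rough random-IOPS scaling with vdev count (assumptions as stated)
DISK_IOPS = 200  # hypothetical per-disk figure for a 7200 rpm SAS drive

def pool_iops(vdev_count, mirrored=False):
    reads = vdev_count * DISK_IOPS * (2 if mirrored else 1)
    writes = vdev_count * DISK_IOPS
    return reads, writes

print(pool_iops(2))                    # e.g. 8 disks as 2x RAIDZ1(4)
print(pool_iops(6, mirrored=True))     # e.g. 12 disks as 6x mirror(2)

More bays means more vdevs, which is why the chassis size matters as much as the disk size.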
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
That is not insignificant.

This is a lot to ask from a 2U server with only 12 drives.

Just my opinion, but I think this is too small of a chassis. It does not give you enough disk bays to allow for the required storage capacity and IOPS.
The IOPS are more an issue than the storage capacity because you can go to larger disks for the additional capacity but that does not increase the IOPS. The IOPS only increase with additional vdevs and vdevs take physical disks. If this is going to be a business critical system, I would suggest you contact the sales team at iXsystems and they will have an engineer work with you to develop a system that you can rely upon to do the things you need it to do.

No, the real issue here is the use of RAIDZ2. You can absolutely do what is wanted here on a smaller machine, but RAIDZ2 is going to be a killer.

https://forums.freenas.org/index.ph...d-why-we-use-mirrors-for-block-storage.44068/

If you desire 24TB, the ideal configuration here means that you need to have a 48TB pool to remain under 50% utilization, which is where steady state performance begins to seriously degrade.

https://forums.freenas.org/index.php?threads/zfs-fragmentation-issues.11818/

So to get 48TB of pool space, you need 96TB of raw disks in mirrors. 96 TB / 12 bays = 8TB HDD's needed. This should be reasonably zippy.
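
Working that rule backwards as a quick sketch (assumptions: 2-way mirrors, the ~50%-utilization guideline for iSCSI/block storage, and 12 available bays):

# Sizing arithmetic for the paragraph above
usable_target_tb = 24
pool_size_tb = usable_target_tb / 0.5   # stay under 50% used   -> 48 TB pool
raw_tb = pool_size_tb * 2               # mirrors halve capacity -> 96 TB raw
drive_size_tb = raw_tb / 12             # spread over 12 bays    -> 8 TB drives
print(pool_size_tb, raw_tb, drive_size_tb)   # 48.0 96.0 8.0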

I don't have the time to do an in-depth analysis of the proposed system, but this is the single biggest likely issue.
 

GREBULL

Cadet
Joined
Dec 10, 2018
Messages
4
Thank you for your answers.

That is not insignificant.

Yes, I know. I tried to size the NVMe disks, which have a very high cost, to address cache performance. I also sized the system with 128 GB of RAM to give resources to iSCSI sync writes.
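
For what it's worth, here is a rough rule-of-thumb sketch (my own assumption, not something stated in the thread) of how much SLOG capacity sync writes over iSCSI actually need: only a few seconds of maximum ingest, since a SLOG just has to absorb what arrives between transaction-group flushes.

# Rough SLOG sizing sketch (assumptions as stated)
link_gbit_s = 2 * 10        # hypothetical: both 10GBase-T ports saturated
seconds_buffered = 10       # roughly two transaction groups at the default 5 s
slog_gb = link_gbit_s / 8 * seconds_buffered
print(f"~{slog_gb:.0f} GB of SLOG capacity")   # ~25 GB

So a 1 TB P4510 is far larger than a SLOG needs; the endurance and power-loss protection are what matter there.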

Just my opinion, but I think this is too small of a chassis. It does not give you enough disk bays to allow for the required storage capacity and IOPS.
The IOPS are more an issue than the storage capacity because you can go to larger disks for the additional capacity but that does not increase the IOPS. The IOPS only increase with additional vdevs and vdevs take physical disks. If this is going to be a business critical system, I would suggest you contact the sales team at iXsystems and they will have an engineer work with you to develop a system that you can rely upon to do the things you need it to do.

When I sized the chassis I was thinking about that; I've always had the feeling it was on the small side.
We currently have 7 TB occupied on the existing NetApp array; the 24 TB effective figure is planned for 5-10 years from now.
I have thought about contacting iXsystems, but I have not found any partner in Spain. It's a shame.

No, the real issue here is the use of RAIDZ2. You can absolutely do what is wanted here on a smaller machine, but RAIDZ2 is going to be a killer.

https://forums.freenas.org/index.ph...d-why-we-use-mirrors-for-block-storage.44068/

If you desire 24TB, the ideal configuration here means that you need to have a 48TB pool to remain under 50% utilization, which is where steady state performance begins to seriously degrade.

https://forums.freenas.org/index.php?threads/zfs-fragmentation-issues.11818/

So to get 48TB of pool space, you need 96TB of raw disks in mirrors. 96 TB / 12 bays = 8TB HDD's needed. This should be reasonably zippy.

I don't have the time to do an in-depth analysis of the proposed system, but this is the single biggest likely issue.

Thank you, jgreco, for the links and the information.
I'm still surprised by the performance drop beyond 50% utilization. Does this also happen with current enterprise storage solutions?

As I mentioned to Chris, we currently have 7 TB occupied, which is about 30%. Based on your recommendation I have to rethink the design of the space. I estimate that we would reach 50% in 3-4 years.

In your recommendation you do not mention the SSD vdevs; you mean 12 disks of 8 TB.
My intention was to provide the IOPS for the VMs and SQL with an SSD pool.
What is the reason?

I understand that the hardware is properly sized, but I have to rework the storage configuration.

Is that right?
 