10 x 1TB SSD - Best practice?

Mr P

Dabbler
Joined
May 25, 2016
Messages
15
All,

I am building a FreeNAS 11.1-U5 box with 10x 1TB SSDs. These are to be used as storage for 2 ESXi 6.7 hosts.

I also have two 6TB drives available.

I intend to use the 6TB drives for storage of media/files etc. and the SSDs for VMs, including Exchange.

Any recommendations on setting up the disks? I don't mind if I have one large pool for the SSDs or some smaller pools. I am after resiliency, but backups from the SSDs will also be placed on the 6TB drives.

This is a home setup which I tinker with for fun, but I use it for live email and some web services, so I want it to be pretty reliable!

FreeNAS is booting off its own 120GB SSD.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
We have had a couple of other users do that recently, and the reports indicated that RAIDz2 can provide adequate performance depending on the exact hardware.
What are the details of the hardware?

 

Mr P

Dabbler
Joined
May 25, 2016
Messages
15
We have had a couple of other users do that recently, and the reports indicated that RAIDz2 can provide adequate performance depending on the exact hardware.
What are the details of the hardware?
Chris,

Hardware is as follows (home-brew white box):

Asus CS-B motherboard
i5-4590 CPU
16GB RAM
8-port SATA card (not RAID) with 8 of the SSDs attached
The other 2 SSDs and the boot SSD are connected to SATA ports on the motherboard
2x 6TB drives connected to the SATA ports on the motherboard

Think that's all...
 

Mr P

Dabbler
Joined
May 25, 2016
Messages
15
Anyone with any advice?!
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
SSDs for VMs, including Exchange.

I can't speak for Exchange specifically, but... Internet Email, specifically ESMTP, is O_SYNC on write per RFC 2821. If you're doing any kind of volume, I would try to avoid layering the Exchange VM filesystem, and go straight to the NAS via iSCSI if possible.
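
To illustrate the sync-write angle: if the mail store ends up on a zvol presented over iSCSI, you can force ZFS to treat every write on that zvol as synchronous. A minimal sketch, assuming a hypothetical zvol named tank/esxi backing the iSCSI extent:

Code:
# Force synchronous semantics for everything on the zvol.
# "tank/esxi" is a placeholder name - substitute your own.
zfs set sync=always tank/esxi

# Confirm the property took effect.
zfs get sync tank/esxi

With sync=always, every write has to land in the ZIL before it is acknowledged, which is exactly where a fast SLOG device earns its keep.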
 

Mr P

Dabbler
Joined
May 25, 2016
Messages
15
I can't speak for Exchange specifically, but... Internet Email, specifically ESMTP, is O_SYNC on write per RFC 2821. If you're doing any kind of volume, I would try to avoid layering the Exchange VM filesystem, and go straight to the NAS via iSCSI if possible.

rvassar,

Thanks, I will be using iSCSI to present my LUNs to my ESX hosts. I was actually after best practices on how to set up the 10x 1TB SSDs - i.e. Z2/mirror/have a spare drive, etc.
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
I was actually after best practices on how to set up the 10x 1TB SSDs - i.e. Z2/mirror/have a spare drive, etc.


Your motherboard doesn't appear to support ECC memory, and the i5-4590 certainly doesn't. Since this appears to be a build with serious intent behind it, I have to point that out. I'm not a big proponent of "scrub of death" fearmongering, but if you're the lucky winner, it will proceed with amazing speed on SSD hardware.

Beyond that... There's probably some specific advice I should be giving you, but this is one of those areas where I'd need to do some thinking. You haven't mentioned much about the workloads that will be presented, so I can't make any recommendations on ZILs, etc. The kinds of things I'm pondering are:

1. Wear levelling.

Writing to flash memory wears it out. Drives are over-provisioned with extra flash so the controller can sideline worn-out blocks and substitute good ones. When the controller runs out of blocks to substitute, errors bubble up to FreeNAS, triggering a hot spare, etc.

If the drives are all the same brand and all the same age, they will all have the same amount of over-provisioning, and in a RAIDz2 config they will therefore all be likely to wear out at the same time. A hot spare will not save you from this. RAID by its nature requires every device in the vdev to receive a write event when the vdev is written to.

Where I get stuck is determining whether a mirrored configuration would be any better. Mirrored vdevs built from pairs of drives still have the problem that both devices have to be written to. That doesn't necessarily mean the other vdevs in the pool get touched, but ZFS will try to spread out the activity. I suspect a mirrored config will give you a chance at having hot and cold vdevs, which will result in drives wearing at different rates and reduce the risk of a multiple-failure loss-of-data event. But I'm not convinced enough to recommend one over the other. I'll just suggest constructing a pool with multiple vdevs.

You're also going to want to watch the SMART data, and be prepared to actively address any failures - a sketch of the kind of check I mean follows. Once one device hits its wear limit, the remainder will likely be close behind it.
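
A minimal sketch of that check, run from the FreeNAS shell - device names are placeholders, and the attribute to watch varies by vendor (Samsung reports Wear_Leveling_Count, Intel reports Media_Wearout_Indicator):

Code:
# Pull the wear-related SMART attributes from each pool SSD.
# Device names (ada0, ada1, ada2, ...) are placeholders - adjust to suit.
for disk in ada0 ada1 ada2; do
    echo "=== ${disk} ==="
    smartctl -A /dev/${disk} | egrep -i 'wear|wearout|percent'
done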


2. I/O path capacity.

10x 1TB SSDs have quite a lot of IOPS capacity - so much that you're probably going to want to give some consideration to what they get connected to. By this I mean that a single controller might become a bottleneck, and your motherboard appears to be rather limited in PCIe lanes & SATA ports. The manufacturer's web site lists:


1 x PCIe 3.0/2.0 x16 (beige)
1 x PCIe 2.0 x16 (x4 mode, black)
1 x PCIe 2.0 x1 (brown)
1 x PCI (beige)
1 x mini-PCIe 2.0 x1 (full-length, black) *

So you have one slot configured PCIe x16, and then another x16 slot that's really an x4. Each of your SSDs is likely capable of 500MB/s, and each PCIe 2.0 lane is 500MB/s. I don't have a lot of experience with the current crop of 10GbE NICs, but I'm under the impression they need more than an x4 slot. So... you appear to have to pick which gets the limited slot, the NIC or the HBA.
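
To put rough numbers on that bottleneck, assuming ~500MB/s per SATA SSD and ~500MB/s per PCIe 2.0 lane:

Code:
# Back-of-envelope: aggregate SSD bandwidth vs. the x4 slot's ceiling.
echo "8 SSDs behind one HBA: $((8 * 500)) MB/s"   # ~4000 MB/s
echo "PCIe 2.0 x4 slot:      $((4 * 500)) MB/s"   # ~2000 MB/s

So an HBA in the x4 slot could only ever pass about half of the drives' aggregate sequential throughput, though a small-block VM workload may never actually hit that wall.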
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Your motherboard doesn't appear to support ECC memory, and the i5-4590 certainly doesn't. Since this appears to be a build with serious intent behind it, I have to point that out. I'm not a big proponent of "scrub of death" fearmongering, but if you're the lucky winner, it will proceed with amazing speed on SSD hardware.
ECC is more about keeping a memory error from causing the system to crash (which is always bad), or the possibility that good data is corrupted in memory before being recorded to disk. If the data is corrupted in flight, before ZFS can checksum it, then you could have bad data that ZFS is not aware of and not able to protect you from.
Asus cs-b motherboard
i5-4590 cpu
16Gb RAM
8 port sata card (not RAID) with 8 of the SSDs attached
The other 2 SSDs and boot SSD are connected to sata ports on the motherboard
2x6Tb drives connected to the sata ports on the motherboard
If you are going to invest all that money in SSDs, why would you cheap out on the system board and CPU so that you are not even able to use ECC memory? That doesn't add up in my mind.
There is a British saying I always loved, "penny wise, pound foolish," which I understand to mean making decisions over small amounts of money (pennies) that end up making bad sense with larger amounts of money (pounds, as in the money they used in Great Britain before the Euro).
That brings me to the comment that @rvassar made about your I/O.
You need a system board that has more PCIe lanes and has them allocated to slots in a manner that is more logical for the use of a server.
The following is meant to be an example, especially as some of these auctions are over and the items are not available. I just want to point you at the kind of hardware you should be looking at.

SuperMicro X9SRL-F Motherboard - listed as new - photos show a CPU cooler, but they don't say in the description
https://www.ebay.com/itm/202340882106
Price: US $279.99

SAMSUNG 16GB PC3L-12800R DDR3-1600 ECC Registered 1.35V RDIMM...
https://www.ebay.com/itm/302110582298
Price: US $54.95 x 8 = $439.60 for 128GB of RAM !!

Dynatron R27 Side Fan CPU Cooler 3U for Intel Socket LGA2011 (Narrow ILM)
https://www.ebay.com/itm/401284811045
Price: US $39.59

Intel Xeon E5-2650 V2 2.6GHz 8 Core 20MB 8GT/s SR1A8 LGA2011 (CM8063501375101) Processor
PassMark score of 13073... If you are wondering...
https://www.ebay.com/itm/283019094038
Price: US $96.95

For the drive controller, I would suggest a SAS controller. There are 4 SCA ports on the system board in addition to the SATA ports, but only two of the SATA ports are SATA III, so you will still be a little short.

SAS PCI-E 3.0 HBA LSI 9207-8i P20 IT Mode for ZFS FreeNAS unRAID
https://www.ebay.com/itm/162862201664
Price: US $69.55

Also, you will need forward breakout cables from the controller to the drives. I like these because they are a bit more durable than some of the slim ones that I have accidentally broken before:

Lot of 2 Mini SAS to 4-SATA SFF-8087 Multi-Lane Forward Breakout Internal Cable
https://www.ebay.com/itm/371681252206
Price: US $12.99

You might need to read this:

Don't be afraid to be SAS-sy
https://forums.freenas.org/index.php?resources/don't-be-afraid-to-be-sas-sy.48/

I would suggest one of these for the boot drive. It will last as long as the server, if not longer:

Intel-SSD-DC-S3500-Series-2-5-SATA-6Gb-s-20nm-MLC-80GB-SSDSC2BB080G4
https://www.ebay.com/itm/283017365544
Price: US $29.99

If you are using this for iSCSI, you are going to need a SLOG device and there are a number of discussions about that on the forum.
Here is one: https://forums.freenas.org/index.php?threads/slog-benchmarking-and-finding-the-best-slog.63521
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Thanks, I will be using iSCSI to present my LUNs to my ESX hosts. I was actually after best practices on how to set up the 10x 1TB SSDs - i.e. Z2/mirror/have a spare drive, etc.
To get the most capacity with the SSDs (to preserve your investment) you probably want to put them in RAIDz. I would say that the reliability of SSDs is such that needing RAIDz2 or even a hot spare is pretty unlikely. You might want to have a cold spare on hand.
The speed of the SLOG is what will dictate the responsiveness of the pool with regard to writes, and the SSDs should read fast enough that you will not need an L2ARC.
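
To make that concrete, here is a sketch of the layout from the shell - in practice you would build the pool through the FreeNAS GUI, and da0 through da9 are placeholder device names:

Code:
# One 9-wide RAIDz data vdev plus a separate log (SLOG) device.
# Check your real device names first with: camcontrol devlist
zpool create tank raidz da0 da1 da2 da3 da4 da5 da6 da7 da8 log da9
zpool status tank

The mirrored alternative would replace the raidz run with five 2-way mirrors (mirror da0 da1 mirror da2 da3 ...), trading capacity for IOPS.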
 

Mr P

Dabbler
Joined
May 25, 2016
Messages
15
If you are going to invest all that money in SSDs, why would you cheap out on the system board and CPU so that you are not even able to use ECC memory? That doesn't add up in my mind.
There is a British saying I always loved, "penny wise, pound foolish," ... (pounds, as in the money they used in Great Britain before the Euro).
[...]
I would suggest one of these for the boot drive. It will last as long as the server, if not longer.
[...]
If you are using this for iSCSI, you are going to need a SLOG device and there are a number of discussions about that on the forum.


Chris,

I'm from Great Britain and we still use the Pound - we have never used the Euro!

I have had the SSDs for a while and got them very cheap, so I haven't spent an awful lot on them. The motherboard and CPU I have are the same as those running my ESXi hosts, and they do not struggle there. I have no need to go out and purchase a proper dedicated server - this is for home use only and hosts my personal email and some websites.

I already have a separate SSD for the boot drive. I have some SAS SSDs at home but will be selling them, as they need a RAID card to run and I don't want to go down that route.

I'm after a low-power, reasonably well-performing storage box. The system will have a maximum of 4 users hitting it at any one time, so it's not going to be breaking a sweat.

I was just after how to set up the drives - RAIDz vs. mirror, etc.

I'll have a read about the SLOG drive, thanks.
 

Mr P

Dabbler
Joined
May 25, 2016
Messages
15
To get the most capacity with the SSDs (to preserve your investment) you probably want to put them in RAIDz. I would say that the reliability of SSDs is such that needing RAIDz2 or even a hot spare is pretty unlikely. You might want to have a cold spare on hand.
The speed of the SLOG is what will dictate the responsiveness of the pool with regard to writes, and the SSDs should read fast enough that you will not need an L2ARC.

Chris,

Thanks, that's more what I was after! I may just go with 8 or 9 of the SSDs installed, keep one as a cold spare, and use the 10th as a SLOG drive after I have read more about what it does.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
and use the 10th as a SLOG drive after I have read more about what it does.
A regular SSD is no good for a SLOG, mainly because it isn't fast enough. To be useful, it needs to be faster than the storage pool, so an SSD of the same type as what you are using in the pool might be better than not having one, but only a little.
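
If you want to put a number on "fast enough", the SLOG benchmarking thread linked earlier uses the diskinfo synchronous-write test. A sketch, assuming your build's diskinfo supports the -S flag - note this is a destructive write test, so only point it at a device with nothing on it:

Code:
# Destructive synchronous-write latency test on a candidate SLOG device.
# "da9" is a placeholder - never run this against a disk holding data.
diskinfo -wS /dev/da9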
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I'm from Great Britain and we still use the Pound - we have never used the Euro!
I have not kept up with it. The last time I was over there was 1994, and I don't think anyone was using the Euro. It may not have even been a glimmer on the horizon at that time.
I already have a separate SSD for the boot drive. I have some SAS SSDs at home but will be selling them, as they need a RAID card to run and I don't want to go down that route.
If you get a SAS HBA like I suggested, you can use those. I use SAS SSDs for the SLOG on my iSCSI pool. The HGST model I have is significantly faster than the SATA SSDs I tested.
 

Mr P

Dabbler
Joined
May 25, 2016
Messages
15
I have not kept up with it. The last time I was over there was 1994, and I don't think anyone was using the Euro. It may not have even been a glimmer on the horizon at that time.

If you get a SAS HBA like I suggested, you can use those. I use SAS SSDs for the SLOG on my iSCSI pool. The HGST model I have is significantly faster than the SATA SSDs I tested.

Chris,

Pretty sure I have a SAS card and some cables at home, now I've thought about it. I have 4x 400GB SAS SSDs - would you use all of those for the SLOG?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Chris,

Pretty sure I have a SAS card and some cables at home, now I've thought about it. I have 4x 400GB SAS SSDs - would you use all of those for the SLOG?
FreeNAS will only utilize about 16GB of each device, if I recall correctly, so if you do some manual partitioning you could probably carve off part of each device for SLOG and part for L2ARC.
This is a thread where they discuss this type of thing, and I have seen it mentioned a number of times. I want to do it myself, but have not taken the time to make the changes.
https://forums.freenas.org/index.ph...-partitioned-for-two-pools.62787/#post-449356
To do that, it would need to be done from the command line and this post lists out the commands:
https://forums.freenas.org/index.php?threads/speeding-up-slog-on-the-cheap.62949/#post-450203
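
As a rough sketch of that carve-up - assuming one of the 400GB SAS SSDs shows up as da10 (a placeholder) and the pool is named tank; the linked posts have the authoritative step-by-step:

Code:
# Carve a 16GB SLOG partition and give the remainder to L2ARC.
gpart create -s gpt da10
gpart add -t freebsd-zfs -s 16G -l slog0 da10
gpart add -t freebsd-zfs -l l2arc0 da10

# Attach the partitions to the pool as log and cache devices.
zpool add tank log gpt/slog0
zpool add tank cache gpt/l2arc0

With four of those drives you could also mirror the SLOG partitions (zpool add tank log mirror gpt/slog0 gpt/slog1) for extra safety.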
 