SSD recommendation for NFS/ESXi

scott2500uk

Dabbler | Joined: Nov 17, 2014 | Messages: 37
I currently have 8 x Corsair CSSD-F60GBLS 7mm 60GB Force Series LS drives in ZFS RAID 10 giving me about 230GB of usable storage.

I have been running them through an IBM M1015 flashed to IT mode, in a box with 16GB of ECC RAM and an Intel Xeon E3-1230v3.

Over a period of six months I've now had four of the drives just drop from the array and become unavailable.

The first time, two of them dropped and it trashed the array. Because I take snapshots and do ZFS replication I was able to restore, and this time I set the 6 remaining drives up in RAID6 so I had the same amount of storage. Now another two drives have become unavailable and the pool is at a very critical point.

No matter whether I power cycle the box or switch the SSDs to different ports, FreeNAS refuses to see the unavailable drives. If I take the failed drives out and plug them into a PC they can be read fine and seem to be in working condition.

I can only presume that ZFS doesn't like these drives and is marking them as failed or something.

Anyway, I want to move away from cheap SSDs to something more reliable. Can anyone recommend an SSD? I need about 230GB of storage and am thinking of getting two 240GB SSDs and mirroring them.

I'm thinking the SanDisk Extreme PRO 240GB looks like a good replacement, but I'm really not sure what would be better.
 

zambanini

Patron | Joined: Sep 11, 2013 | Messages: 479
Intel Pro SSDs.

Anyway, read the forum stickies; you didn't fully describe your setup here.
And you need to learn how a copy-on-write (CoW) filesystem works, and how an SSD writes data to its cells.


Oh, by the way: there is no RAID6 on FreeNAS or ZFS; the double-parity equivalent is RAIDZ2.

You really need to read the manual, especially the part about free space... you are a long way from where you should be.
 

Ericloewe

Server Wrangler | Moderator | Joined: Feb 15, 2014 | Messages: 20,194

If you need 230GB of storage you will not get away with a 240GB pool, much less a pool made up of two "240GB" drives. Look at 500GB SSDs, either as two mirrored pairs or in RAIDZ2.
Ideally, you're looking at an Intel S3500 or better. You might get away with high-end consumer stuff, like the Samsung 850 Pro.
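To put a rough number on it: 230GB of data on a nominally 240GB mirror would run the pool at roughly 96% of capacity before any overhead, well past the usual ~80% ceiling for ZFS, let alone what is sensible for VM storage.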
 

jgreco

Resident Grinch | Joined: May 29, 2011 | Messages: 18,680
1) Please provide a more detailed description of your hardware. Occasionally things like the specifics of the motherboard, power supply, etc., are important.

2) Please present the output of "camcontrol devlist", preferably in CODE tags

3) Please present the contents of /var/run/dmesg.boot, preferably in CODE tags

4) Please post SMART test status for all your SSDs, preferably in CODE tags, and preferably including the wear-leveling count (example commands for 2) through 4) are sketched after this list)

5) Please indicate what the LSI BIOS shows while booting
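
For reference, 2) through 4) can be gathered from the FreeNAS shell with something along these lines (device names are examples only; drives behind an M1015 in IT mode typically show up as da0, da1, and so on):

[CODE]
# item 2: list the devices CAM sees
camcontrol devlist

# item 3: boot-time kernel messages
cat /var/run/dmesg.boot

# item 4: SMART data for one SSD; repeat for da1, da2, ...
# the wear-leveling attribute is vendor-specific (often ID 177, 231 or 233)
smartctl -a /dev/da0
[/CODE]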

As Ericloewe indicated, you are greatly exceeding the amount of storage you should be using. ZFS is a copy-on-write filesystem, and if presented with severe fragmentation, you might be doing nasty things to your SSDs.
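
If you want to see how full and fragmented the pool actually is, something like the following should show it (the pool name is a placeholder, and the fragmentation property only exists on reasonably recent ZFS versions):

[CODE]
# overall pool usage; CAP and, on newer ZFS, FRAG are the interesting columns
zpool list tank
zpool get capacity,fragmentation tank
[/CODE]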

Normally I suggest no more than 60% capacity on a pool that is storing virtual machines, and RAIDZ2 is an especially bad choice because it hits the drives harder for writes, which will just cause premature failure.
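
As a rough worked example: keeping about 230GB of VM data under that 60% mark means roughly 230 / 0.6 ≈ 384GB of usable pool space, so a single 240GB mirror is out of the question, and two mirrored pairs of 240GB drives (about 480GB usable, around 48% full) is about the smallest layout that stays comfortable.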

Back in the 2010-2011 era, quality SSDs cost more than we were willing to pay (much more than $2/GB), so we had fairly good luck pairing a SandForce-based SSD with another cheap SSD in our ESXi nodes using hardware RAID1. I can tell you that even with relatively light usage, they have a tendency to fail after a few years. The good part is that the replacement cost is now around 50c/GB.

For a more reliable setup providing 230GB of usable space, get yourself five Intel 530 240GB drives, each around $120. Set up two vdevs of two drives each, mirrored; the fifth is a spare (a hot spare is fine). If performance starts to suffer due to fragmentation, you then have the option of adding another two-drive vdev in the future.
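
Roughly, that layout would look like this from the command line (in practice you would build it through the FreeNAS volume manager; the pool and device names below are just placeholders):

[CODE]
# four drives as two mirrored vdevs, with the fifth as a hot spare
zpool create tank mirror da0 da1 mirror da2 da3 spare da4

# later, if fragmentation starts to hurt, grow the pool with a third mirror
zpool add tank mirror da5 da6
[/CODE]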

The 530 is a compromise drive. I come from the era that defined RAID as a redundant array of _inexpensive_ disks, using quantity to make up for quality. But you can also expect the inexpensive disks to need replacing a bit more often. The S3500 is a nicer drive but costs around twice as much. I'd be more likely to do three 530s in a three-way mirror than two S3500s in a two-way mirror, because it's still cheaper.
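
At the prices above that works out to roughly 3 x $120 ≈ $360 for three 530s versus about 2 x $240 ≈ $480 for a pair of S3500s, and the three-way mirror can lose two drives instead of one.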
 