Intel S3700 as SLOG, viable?

Status
Not open for further replies.

viniciusferrao

Contributor
Joined
Mar 30, 2013
Messages
192
Hi guys,

I'm building a new FreeNAS box and we will get three Intel S3700 200GB SSDs. My idea was to put two for L2ARC and one for SLOG.

I know that SLOGs should be SLC, but we can't find any SLC drive in our market to do the job. Since there are some "gold" MLC SSDs from Intel, I would like to know if anyone has had good experiences with these SSDs.

Thanks in advance,

PS: I accept recommendations too.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Hi Vinicius,

Can you tell us more about the rest of the system, such as CPU, memory, connectivity, and use case? Many users don't need or won't be able to fully utilize L2ARC/SLOG devices. Considering that you're building a system with 3x DC S3700s, I imagine you're speccing correctly; but just to confirm for anyone else who stumbles upon this.

That said - the S3700 is perfectly good for use as an SLOG due to its consistent performance levels and high endurance.
 

viniciusferrao

Contributor
Joined
Mar 30, 2013
Messages
192
Hi HoneyBadger, for sure.

1x SuperServer SSG-6047R-E1R36L: http://www.supermicro.com/products/system/4U/6047/SSG-6047R-E1R36L.cfm
2x Xeon Ivy Bridge E5-2620V2
8x 16GB DDR3-1600 2Rx4 ECC REG (128GB in total)
1x 4-port Gigabit Standard LP NIC Card, OEM and Bundle only (8 Gigabit interfaces in total, including the onboard ones)
24x Seagate NL-SAS 2TB 128MB Cache: HDD-A2000-ST2000NM0023
1x 8GB SATADOM for FreeNAS Installation

The idea is to LAGG four network interfaces exclusively for iSCSI, two interfaces for common network traffic (including NFS for random files), and two interfaces for management. The disks will be configured as a stripe of mirrors: 12 vdevs of 2 disks each, for 24TB of usable space. The remaining disk slots will be kept for future expansion, and there are 8 free RAM slots for future expansion too.
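
For reference, the pool layout I have in mind would look roughly like this from the shell (just a sketch - the da0..da23 device names are placeholders, and I'd actually build it through the FreeNAS volume manager):

  # Sketch: 12 two-way mirror vdevs striped into one pool (placeholder device names)
  zpool create tank \
    mirror da0 da1    mirror da2 da3    mirror da4 da5 \
    mirror da6 da7    mirror da8 da9    mirror da10 da11 \
    mirror da12 da13  mirror da14 da15  mirror da16 da17 \
    mirror da18 da19  mirror da20 da21  mirror da22 da23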

On the iSCSI connection I'll put LVM to host some XenServer VMs, with sync=standard or sync=always. I'll study this first...
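
The sync behavior is just a property on the zvol backing the iSCSI extent, so switching between the two for testing is a one-liner (sketch only - tank/xen-lun is a made-up zvol name):

  # Hypothetical zvol backing the iSCSI extent
  zfs set sync=always tank/xen-lun     # force every write through the ZIL/SLOG
  zfs set sync=standard tank/xen-lun   # honor only the syncs the initiator asks for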

EDIT: I almost forgot... the onboard disk controller will be used.
 

aufalien

Patron
Joined
Jul 25, 2013
Messages
374
I agree with HB; the DC S3700 is very good for this. It has power-loss protection capacitors, etc.

In fact, the new trend seems to be eMLC for metadata-heavy workloads; Intel looks to be phasing out SLC.

Also look at OWC's Enterprise PRO line: eMLC, capacitors, etc. They use SandForce controllers, which had some issues a few years back, but those bugs seem to have been ironed out.

OWC has the SSD Pro Enterprise, which I use for ZIL, and their PCIe cards, which I would like to use for L2ARC. They present as AHCI, making them ubiquitous. Just some options.

Oh, and there is this monster:

http://www.techspot.com/news/55430-sandisk-puts-flash-memory-on-a-ram-stick-with-ulltradimm-ssd.html

Holy mother of the bobble head Jesus.
 

viniciusferrao

Contributor
Joined
Mar 30, 2013
Messages
192
It's a shame, but I've never used MPIO and I don't know exactly how to implement it. I know what MPIO is, but it only makes sense to me if I have more than one switch. Anyway, I can change the project to put two 24-port switches in the rack instead of one 48-port switch.

Well, it's an option; if you can elaborate more on this aspect I'd really appreciate it.

Just let me ask one more thing: I'm considering two SSDs for SLOG, but I know that mirroring isn't strictly necessary anymore with zpool v28. Should I put in a mirrored SLOG or just skip it?

Thanks in advance,
 

aufalien

Patron
Joined
Jul 25, 2013
Messages
374
In my opinion, keep it simple; one is fine.

Just ensure that the one SSD is enterprise caliber. For testing purposes, you can always do a RAM-disk-based ZIL to gauge the benefits, and compare it to a cheap SSD you have lying around. It's a good exercise to see how much SLOG quality matters.
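
Something like this from the FreeNAS shell would do for the test (sketch only - "tank" is a placeholder pool name, and a RAM-backed log device is strictly for benchmarking, since it has zero power-loss protection):

  # Create an 8 GB swap-backed RAM disk and attach it as a temporary log device
  mdconfig -a -t swap -s 8g      # prints the new device name, e.g. md0
  zpool add tank log md0         # run your sync-write benchmark now
  zpool remove tank md0          # detach the test log device when done
  mdconfig -d -u 0               # destroy the RAM disk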
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
In a perfect world you would be able to dedicate a pair of switches entirely to storage traffic (iSCSI/NFS) in order to avoid link contention with regular data traffic. Storage traffic does not handle loss or out-of-order delivery well at all, and a big burst of regular network traffic on the same switches (even if VLANned) can cause exactly that. "Dedicated network" is the name of the game here - so have switches that are for storage only, subnets and VLANs that are for storage only, and network ports on your hosts and storage dedicated to it (which you already have planned for).

I would switch to two 24-port switches, not bridged or trunked to each other, and put two of the FreeNAS 1Gbps ports on each. Give the FreeNAS ports IP addresses in two different subnets based on the switch they're connected to (eg: 192.168.1.1 and 192.168.1.2 for switch A, 192.168.2.1 and 192.168.2.2 for switch B) and create an iSCSI portal that contains them all. Then put one (or more) port from each XenServer host machine on each switch, with a matching IP address (eg: 192.168.1.100 for switch A and 192.168.2.100 for switch B). Now even if someone trips over a plug, commits a bad change, or reloads a switch accidentally, you don't lose storage connectivity, because each server has four individual paths to the storage (see the interface sketch after the path list below):

Switch A
192.168.1.100 -> 192.168.1.1
192.168.1.100 -> 192.168.1.2

Switch B
192.168.2.100 -> 192.168.2.1
192.168.2.100 -> 192.168.2.2
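
On the FreeNAS side that's just one address per port - a rough sketch below, where igb0-igb3 are placeholder interface names; in practice you'd assign these through the GUI and then add all four addresses to a single iSCSI portal:

  # Two ports per switch, one address per port (placeholder interface names)
  ifconfig igb0 inet 192.168.1.1 netmask 255.255.255.0   # switch A
  ifconfig igb1 inet 192.168.1.2 netmask 255.255.255.0   # switch A
  ifconfig igb2 inet 192.168.2.1 netmask 255.255.255.0   # switch B
  ifconfig igb3 inet 192.168.2.2 netmask 255.255.255.0   # switch B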

Some additional reading that might help:
iSCSI on doc.freenas.org (http://doc.freenas.org/index.php/ISCSI#Portals) - check the section under "Portals" for MPIO-specific discussion.
I'm not a big XenServer guy so I'm not sure if there's a better source, but Citrix's own guidelines around multipathing to storage are the place to look.

Regarding the SLOG mirror, having a non-mirrored SLOG opens up a very tiny window of risk (we're talking about 99.99% safe vs 99.999% here): if your SLOG device failed, and you then immediately had a hard crash before the transaction group in RAM could be committed to disk, you might lose the contents of that group and have to roll back. But that's it. Very small risk.
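
For what it's worth, the difference is one word when you attach the device - a rough sketch, with "tank" and the da24/da25 device names purely as placeholders (FreeNAS would normally do this through the volume manager):

  # Single SLOG device; can be removed again later with 'zpool remove' on v28
  zpool add tank log da24
  # Mirrored SLOG, if you decide the extra nine of reliability is worth the second S3700
  zpool add tank log mirror da24 da25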

Sorry about the wall of text. Hope this helps!
 

viniciusferrao

Contributor
Joined
Mar 30, 2013
Messages
192
Thanks for the wall of text :)

Well, I think I will get cheaper switches just for the iSCSI traffic. I've been considering the HP 2530-48G as the default switch; it's a fully manageable layer 2 switch. I was thinking of something unmanaged for iSCSI, but I don't know if that's a good choice.

There will be three servers with 6 network interfaces each. I was thinking of two cables for iSCSI, two for networking, and two for management.
 

DJABE

Contributor
Joined
Jan 28, 2014
Messages
154
HoneyBadger said:
(full post quoted above)

Perhaps it is kind of off (the original) topic, but I like your idea better than the SLOG thingy.
I'm building a very similar network segment between a NAS and ESXi/Xen hypervisors.

I'm running a MikroTik as the main router, so I put each of the two switches on a different subnet.

[NAS] ETH1 192.168.100.10 -via-> Switch A connected directly to a router on interface/subnet 1
[NAS] ETH2 192.168.200.10 -via-> Switch B connected directly to a router on interface/subnet 2

[Hypervisor] ETH1 192.168.100.20 -via-> Switch A connected directly to a router on interface/subnet 1
[Hypervisor] ETH2 192.168.200.20 -via-> Switch B connected directly to a router on interface/subnet 2

So there are 2 physically isolated subnets - 2 dedicated networks, per se - each configured as shown above.
Now if the hypervisor requests a 30 GB file, the transfer can run via the two different routes, pushing the total throughput past a single 1 Gbps link (ideally 2 Gbps in this scenario).
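
In case it helps, a quick way to confirm both routes are actually in use from the hypervisor side (a sketch, assuming a Linux-based dom0 such as XenServer's with open-iscsi and multipath-tools installed):

  # List active iSCSI sessions; with two storage IPs there should be two sessions
  iscsiadm -m session
  # Show how the paths are grouped per LUN and which ones are active
  multipath -ll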

Any suggestions, other than adding more NICs, for increasing the bandwidth?
 