NAS config verification


sheld0r

Dabbler
Joined
Nov 12, 2013
Messages
33
I've been wanting to create a homemade NAS for quite some time. I finally got the chance to make one.

SPECS:
CPU: Intel Xeon E5-2603 @ 1.8GHz
RAM: 48GB
DISK: 18 × 3TB 5400rpm WD Red NAS drives
DISK: 2 × 256GB Plextor M6S SSDs
HBA: LSI SAS 9201-16i
ENCLOSURE: Norco RPC-4220
OS: FreeNAS, booted off a PNY USB stick

So I configured a RAIDZ2 array with all 18 drives, and added a ZIL using the SSDs. I just did a quick configuration to see this guy run. From reading the forums, I can tell I need to learn my terminology and confirm whether what I set up is truly a ZIL.
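For reference, the layout can be checked from the shell with a minimal command like this ("tank" is just what I named the pool, so adjust to taste):

zpool status tank

The dedicated log device should show up under its own "logs" section, separate from the raidz2 vdev.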

I got about 68.2MB/sec transferring from a Server 2008 R2 box to the NAS. I'm curious how that compares to others on the forums.

I'm still pretty new to the scene, and I can see I have some reading to do regarding ZIL, L2ARC, etc. I'm not sure the NAS is configured correctly. For now this is just a test box; I wanted to start somewhere and see what others thought.

Cheers,
 

enemy85

Guru
Joined
Jun 10, 2011
Messages
757
It's suggested not to go over 11 disks in a single vdev, and you're running an 18-disk one?
If I were you, I'd consider two 9-disk RAIDZ2 vdevs.
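Roughly, that layout would look like this from the CLI (just a sketch; da0-da17 and "tank" are placeholder names, and the GUI volume manager can build the same thing):

zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 raidz2 da9 da10 da11 da12 da13 da14 da15 da16 da17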
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
I would be wondering why you aren't maxing out 1GbE; you're pretty much at half speed. Is there a chance this is the result of moving small files?

Read cyberjock's ZFS primer, and take a look at the performance and benchmark threads. 18 spindles on an E5 should scream. I'd go even further and split into 3x 6-drive vdevs, trading space for IOPS (rough math below). Keep testing; you have a ways to go. You also didn't list your NIC, network tests, etc., so maybe there's a bottleneck elsewhere.
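The rough space-vs-IOPS math, before filesystem overhead: a single 18-disk RAIDZ2 vdev has 16 data disks ≈ 48TB raw, while 3x 6-disk RAIDZ2 vdevs have 3 × 4 = 12 data disks ≈ 36TB raw. But each RAIDZ vdev delivers roughly the random IOPS of a single disk, so three vdevs buy you about 3x the IOPS of one wide vdev.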

Have fun.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Definitely should be saturating GbE several times over. I don't see any obvious bottlenecks, so something is wrong - probably something to do with the extra-wide vdev (not sure how it would slow down so much in normal, early use, however).
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You need to break that single vdev into multiple. There's a very strong possibility that your excessively-wide vdev is responsible for your performance problems. I'd break it into 2 or 3 vdevs, depending on how you intend to use the pool.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
As others have said: Good hardware, config needs some work.

cyberjock said:
You need to break that single vdev into multiple. There's a very strong possibility that your excessively-wide vdev is responsible for your performance problems. I'd break it into 2 or 3 vdevs, depending on how you intend to use the pool.

Definitely 3 vdevs in my mind; my question to OP is "do you plan on hosting VMs on this storage"? If so, I'd have 12 of those drives (in two 6-drive RAIDZ2 vdevs) dedicated to your "bulk storage" and the remaining 6 in a separate zpool as mirror (maybe even 3-way mirror?) vdevs for your "VM storage." Those aren't ideal SSDs for SLOG (no power fail protection) but you could still use them for that.
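A rough sketch of that layout from the CLI (device and pool names are placeholders; the GUI can build the same thing):

zpool create bulk raidz2 da0 da1 da2 da3 da4 da5 raidz2 da6 da7 da8 da9 da10 da11
zpool create vmstore mirror da12 da13 mirror da14 da15 mirror da16 da17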
 

sheld0r

Dabbler
Joined
Nov 12, 2013
Messages
33
First of all, I just wanted to say thank you to you guys. This forum is pretty damn awesome! You get so much help and direction, without all that other bs you see these days on the forums. So I just wanted to say thanks, it's a pleasure to be here.

I had a feeling creating one large vdev was a bad idea. I have a Debian 7 box with the same hardware, on which I created 3 vdevs, 6 disks per vdev. On that box I have two pairs of SSDs for ZIL and L2ARC. Surprisingly, I'm getting slightly slower speeds on the Debian box than on the FreeNAS box, even with the 3 vdevs configured.

I forgot to mention the NIC. I'm currently using the onboard NIC on the ASRock X79 Extreme6. I also have a quad-port HP NC364T NIC installed; I was planning on configuring link aggregation on it.

What network tests do you recommend? I typically use iperf when testing LAN performance.
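For what it's worth, the basic iperf run I use (iperf 2 syntax; the address is just an example):

on the NAS: iperf -s
on the client: iperf -c 192.168.1.50 -t 30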
 

sheld0r

Dabbler
Joined
Nov 12, 2013
Messages
33
HoneyBadger said:
As others have said: Good hardware, config needs some work.

Definitely 3 vdevs in my mind; my question to OP is "do you plan on hosting VMs on this storage"? If so, I'd have 12 of those drives (in two 6-drive RAIDZ2 vdevs) dedicated to your "bulk storage" and the remaining 6 in a separate zpool as mirror (maybe even 3-way mirror?) vdevs for your "VM storage." Those aren't ideal SSDs for SLOG (no power fail protection) but you could still use them for that.
I'm planning on using this array for Veeam backups and data storage, and I was thinking about using it for VM storage in the future.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
sheld0r said:
I forgot to mention the NIC. I'm currently using the onboard NIC on the ASRock X79 Extreme6. I also have a quad-port HP NC364T NIC installed; I was planning on configuring link aggregation on it.

What network tests do you recommend? I typically use iperf when testing LAN performance.

Looked up the specs on that board and, yuck, Broadcom. That's probably contributing to the poor performance right there. That quad-port NIC is based on the Intel 82571 and will do much, much better.

Regarding link aggregation: I assume you're going to present NFS mount points?

iperf should be fine to test raw throughput, but bear in mind that LACP doesn't inherently increase bandwidth per-session, so you'll still be limited to 1Gbps per TCP stream. If you have multiple clients connecting, you'll see higher aggregate utilization; see the example below.
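To see that behavior in iperf, compare a single stream against parallel streams (the -P flag; the address is just an example):

iperf -c 192.168.1.50 -t 30
iperf -c 192.168.1.50 -t 30 -P 4

With LACP, each TCP stream hashes to one physical link, so the single-stream run should still top out around 940Mbits/sec; depending on the hash policy, even multiple streams between the same client/server pair may share one link.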

If you plan on putting some VMs directly on this storage, I'd go with my suggestion of two 6-drive RAIDZ2 vdevs in a zpool for bulk (Veeam backup/file service) and a 6-drive mirror zpool for responsive VM storage.
 

sheld0r

Dabbler
Joined
Nov 12, 2013
Messages
33
I reconfigured the NAS with 3 vdevs, 6 disks per vdev, along with a pair of SSDs for the ZIL. I also configured the HP NIC; I'm no longer using the Broadcom NIC.

Server to NAS: 87.3MB/s (bounces a little above and below, but primarily holds 87MB/s)
NAS to Server: 134MB/s for about 10 seconds, then it starts dropping and holds at 67MB/s
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
So you picked up a little bit of speed, but you're still nowhere near where you should be. If you were to dd your pool (see the sketch below), 18 Reds should be well over 1GB/s on sequential reads; that's over 10x your network bandwidth. Your server-to-NAS writes went up a little, likely due to increased IOPS. The NAS-to-server numbers look like you're fine until you run out of cache, and then your 2008 storage chokes on the writes. If you briefly see 134MB/s there's a good chance the network itself is fine; iperf will tell you. Have you verified the 2008 server will max out 1GbE at will? Can it handle writing your workload at 125MB/s?
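A minimal dd sketch, assuming the pool is mounted at /mnt/tank and you test on a dataset with compression off (writes from /dev/zero compress away otherwise); use a file bigger than RAM so ARC caching doesn't inflate the read-back number:

dd if=/dev/zero of=/mnt/tank/testfile bs=1M count=65536
dd if=/mnt/tank/testfile of=/dev/null bs=1M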

Is this one big file, or a backup load? It's possible that you're on the money and choking on 'overhead', but a big file should move at 112+MB/s both ways all day. The workload details matter, as does the target.
 

sheld0r

Dabbler
Joined
Nov 12, 2013
Messages
33
Per tests with iperf, it can hold 845Mbits/sec. I'm currently testing with a 6GB ISO file.

So when you say I'm choking on overhead, what exactly do you mean? Thanks for providing the details for me, I'm already learning a ton!! This is great!! I'm determined now to find the issue.

As I'm typing this update, I see my alert icon flashing.
CRITICAL: The volume (ZFS) status is DEGRADED.
For brand new hardware, that was fast :(
 

sheld0r

Dabbler
Joined
Nov 12, 2013
Messages
33
Now that I've looked into it, it seems the serial number isn't being read by the NAS, which is why it's claiming a failure. I'd like to reseat the drive, but I didn't write down which serial number goes with which slot; I found that recommendation on the forums after the fact. So basically I won't know which disk failed now? I think I was too used to my SANs and the nice bright red light on the array. :/
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
sheld0r said:
Now that I've looked into it, it seems the serial number isn't being read by the NAS, which is why it's claiming a failure. I'd like to reseat the drive, but I didn't write down which serial number goes with which slot; I found that recommendation on the forums after the fact. So basically I won't know which disk failed now? I think I was too used to my SANs and the nice bright red light on the array. :/

If the serial numbers aren't being read by the NAS, then neither is SMART. That's not a "SMART" (pun intended) configuration to be in.
 

sheld0r

Dabbler
Joined
Nov 12, 2013
Messages
33
cyberjock said:
If the serial numbers aren't being read by the NAS, then neither is SMART. That's not a "SMART" (pun intended) configuration to be in.
lmao. So the NAS saw the drive, as it was accounted for in the vdev, but SMART doesn't see the "smartness" in it? What in the world would cause that? I would like to just reseat the drive.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
I was hoping you were using a bunch of small backup files or something else that was hurting your speed; writing many tiny files 'hurts'. Unfortunately, writing a 6GB ISO is a best-case load, so you have a problem somewhere. I'd be pissed. I'd grab one SSD and build a single-disk pool, throw another SSD in a fast workstation (ideally your 2008 server), and move that big file (sketch below). You should peg ~115+MB/s and just stay there; 1GbE is trivial to saturate. You should also get some actual pool numbers from dd or iozone. You should be able to make a serious dent in 10GbE, and yet you're slower than my 2007 desktop gear collecting dust. :)
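A sketch of that isolation test, assuming a spare SSD shows up as da18 (placeholder device name):

zpool create ssdtest da18

Then share /mnt/ssdtest over CIFS and copy the same 6GB ISO to it from the workstation. If a single-SSD pool also stalls below ~115MB/s, the problem isn't the wide pool; it's the network or the client.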

Keep hunting.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
If your controller doesn't support SMART passthrough, then all hard drive SMART functions won't work. In particular, the SMART functions are used to get the serial numbers you see in the WebGUI. This is why we can tell when someone isn't using appropriate hardware for FreeNAS based on a screenshot of their disks. If no serials are listed then we know they've got bigger problems to deal with.

I don't have any experience with the 9201-16i, so I can't really tell you if it's supposed to work, if it's not supposed to work, if it's a setting problem, etc.

You're kind of on your own unless someone else has the answer.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
You can verify the SMART info on the drive through the CLI. There have been a couple of other posts about the s/n not being passed to the GUI. The drive should be listed as daX (X being 0-17). The command is "smartctl -a /dev/daX"; the s/n will be in the SMART info. I have an LSI 9201-16i and it passes SMART info to FreeNAS.
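To map every serial to its device in one pass, something like this from an sh shell works (a sketch; adjust the range to your drive count):

for i in $(seq 0 17); do echo da$i; smartctl -i /dev/da$i | grep -i serial; done

smartctl -i prints just the identity block, including the serial number.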
 

sheld0r

Dabbler
Joined
Nov 12, 2013
Messages
33
The LSI 9201-16i definitely does pass SMART functions, because I see them listed for all the other drives. It's just odd that only one drive doesn't list the serial number, which I believe is what's preventing SMART from verifying that drive. Correct me if I'm wrong on that statement.

I'm still hunting on the performance issue. I'll post updates soon. Thanks guys!
 

sheld0r

Dabbler
Joined
Nov 12, 2013
Messages
33
Mlovelace said:
You can verify the SMART info on the drive through the CLI. There have been a couple of other posts about the s/n not being passed to the GUI. The drive should be listed as daX (X being 0-17). The command is "smartctl -a /dev/daX"; the s/n will be in the SMART info. I have an LSI 9201-16i and it passes SMART info to FreeNAS.
Heya Mlovelace,

I wasn't able to get the 'smartctl -a /dev/daX' command to work. Just curious, what numbers are you pulling with your LSI board?
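From what I've read, 'camcontrol devlist' (a standard FreeBSD command) lists each attached drive with its daX name, so that should confirm the right device number to pass to smartctl.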

Do you guys think my SSDs could be a performance factor? I wouldn't think so, as the specs online look solid, but I could be wrong. I saw a video online called 'Fun with ARC and ZFS on FreeNAS 9.2.1.3' where he posts some serious numbers, 350MB/s+! Granted, the specs are different, but those are some sick speeds!
 