FreeNAS newb config check


nev_neo

Dabbler
Joined
May 18, 2016
Messages
12
Hey all,
I've been lurking on here for a while checking out FreeNAS, hoping to get some ideas.
I've finally decided to implement ZFS in our dev environment. Hopefully it's good enough to eventually implement in production too.
We have an HP DL180 G6 with 48 GB of RAM.
What would be the best way to carve out a 14 x 1TB drive configuration?
Looking for performance and speed.

Any help would be much appreciated !


Edit: forgot to add, it's for a VMware environment with a mix of DB servers and web apps.
The storage network would be iSCSI.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
For a VMware datastore, a pool of 7 mirrored-pair vdevs would give you the most IOPS and the best performance versus the most likely alternative, i.e., a pool of two 7 x 1TB RAIDZ2 vdevs.
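For reference, the two layouts look roughly like this from the shell (pool name and device names are placeholders; in practice you would build the pool through the FreeNAS GUI so it handles partitioning and labels):
Code:
# seven 2-way mirror vdevs striped together
zpool create tank \
  mirror da0 da1   mirror da2 da3   mirror da4 da5   mirror da6 da7 \
  mirror da8 da9   mirror da10 da11 mirror da12 da13

# alternative: two 7-disk RAIDZ2 vdevs (more usable space, fewer IOPS)
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 da6 \
  raidz2 da7 da8 da9 da10 da11 da12 da13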
 
nightshare00013

Joined
Apr 9, 2015
Messages
1,258
You may also want to look for an HBA, since that server likely comes with a RAID controller like the HP Smart Array P410/256MB (RAID 0/1/1+0/5/5+0), which will not play nice with FreeNAS.
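If you do end up with an LSI-based HBA flashed to IT mode, you can confirm the controller and firmware version from the FreeNAS shell with the bundled sas2flash utility (output varies by card):
Code:
sas2flash -listall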
 

nev_neo

Dabbler
Joined
May 18, 2016
Messages
12
Thanks for the replies, Spearfoot and nightshare00013.
I have an IBM M1015 with P20 IT firmware in there. It's working pretty well connected to the HP SAS backplane.
I have 16 x 1 TB drives in total, but only 14 are available right now. The other 2 are in another server holding data that will be moved to this new server.
So far I've just finished up the burn-in process. I still have to check each of the drives; I noticed some have 1-3 reallocated sectors. Is that bad?
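A quick way to sweep every drive for reallocated or pending sectors is a loop like the one below, run under sh (device names are examples; extend the list to match your da numbering):
Code:
# summarize the sector-health attributes for each disk
for d in /dev/da0 /dev/da1 /dev/da2 /dev/da3; do
  echo "== ${d} =="
  smartctl -A ${d} | egrep 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'
done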

Re: vdev configuration, which would be the most ideal to allow me to expand later on (if I need to)?
Would a ZIL or SLOG on a DC S3700 help any?
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
A ZIL/SLOG device should generally not be added pre-emptively, as its presence can marginally increase risk and/or hurt performance if it is not strictly needed.

That being said, based on what you've described, you may have a configuration/workload here that would benefit from one.

Try it without one first and see how the performance compares to your expectations.
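If testing does show that sync writes are the bottleneck, a log vdev can be added to (and removed from) an existing pool later. A rough sketch, with placeholder pool/device names:
Code:
# check whether the zvol/dataset backing the datastore is doing sync writes
zfs get sync tank/vmware

# add a SLOG (ideally a power-loss-protected SSD) to an existing pool
zpool add tank log gpt/slog0

# log vdevs can also be removed again if they don't help
zpool remove tank gpt/slog0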
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Thanks for the replies, Spearfoot and nightshare00013.
I have an IBM M1015 with P20 IT firmware in there. It's working pretty well connected to the HP SAS backplane.
I have 16 x 1 TB drives in total, but only 14 are available right now. The other 2 are in another server holding data that will be moved to this new server.
So far I've just finished up the burn-in process. I still have to check each of the drives; I noticed some have 1-3 reallocated sectors. Is that bad?
I replace disks when they begin showing reallocated sectors. Typically they have >10,000 hours on them before this happens. But the choice is up to you, and a drive with a few reallocated sectors may very well give hundreds or thousands of additional hours of service before failure.

Re: vdev configuration, which would be the most ideal to allow me to expand later on (if I need to)?
Would a ZIL or SLOG on a DC S3700 help any?
Mirrors are easy to expand: replace both drives in one mirrored pair with larger drives and the capacity of your pool will increase accordingly. With RAIDZ2, you have to replace every drive in a vdev before you gain any capacity.
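A sketch of what that expansion looks like in practice (placeholder pool/device names; with autoexpand enabled, the extra capacity appears once both sides of a mirror have been replaced and resilvered):
Code:
# let the pool grow automatically once larger disks are in place
zpool set autoexpand=on tank

# swap each disk of one mirrored pair for a larger drive, one at a time,
# waiting for the resilver to finish in between
zpool replace tank da0 da14
zpool status tank
zpool replace tank da1 da15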

In a production environment, you will probably benefit from a SLOG device. I know that on my system, NFS VMware datastores are unusable without either disabling synchronous writes (not recommended!) or adding a SLOG device. But I don't use iSCSI, so you may want to search the forum for discussion of this subject.

The Intel DC S3700 SSD is a good choice for an entry-level SLOG device; I use these in my home lab systems. For a production environment, though, you will be better served by the PCIe-based Intel DC P3700.
 

nev_neo

Dabbler
Joined
May 18, 2016
Messages
12
These are a couple of drives that have reallocated sectors:
Code:
[root@freenas] ~# smartctl -A /dev/da10
smartctl 6.4 2015-06-04 r4109 [FreeBSD 10.3-RELEASE amd64] (local build)
Copyright (C) 2002-15, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   069   063   044    Pre-fail  Always       -       9261377
  3 Spin_Up_Time            0x0003   094   094   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       10
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       1
  7 Seek_Error_Rate         0x000f   079   060   030    Pre-fail  Always       -       4395619329
  9 Power_On_Hours          0x0032   073   073   000    Old_age   Always       -       24018
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       10
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0
189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   075   070   045    Old_age   Always       -       25 (Min/Max 17/28)
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       8
193 Load_Cycle_Count        0x0032   096   096   000    Old_age   Always       -       8686
194 Temperature_Celsius     0x0022   025   040   000    Old_age   Always       -       25 (0 17 0 0 0)
195 Hardware_ECC_Recovered  0x001a   105   099   000    Old_age   Always       -       9261377
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0


[root@freenas] ~# smartctl -A /dev/da11
smartctl 6.4 2015-06-04 r4109 [FreeBSD 10.3-RELEASE amd64] (local build)
Copyright (C) 2002-15, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   082   063   044    Pre-fail  Always       -       183628886
  3 Spin_Up_Time            0x0003   094   094   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       11
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       2
  7 Seek_Error_Rate         0x000f   075   060   030    Pre-fail  Always       -       39116225
  9 Power_On_Hours          0x0032   076   076   000    Old_age   Always       -       21635
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       11
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0
189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   074   069   045    Old_age   Always       -       26 (Min/Max 20/29)
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       9
193 Load_Cycle_Count        0x0032   091   091   000    Old_age   Always       -       19109
194 Temperature_Celsius     0x0022   026   040   000    Old_age   Always       -       26 (0 20 0 0 0)
195 Hardware_ECC_Recovered  0x001a   118   099   000    Old_age   Always       -       183628886
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0



What do you guys think?
I know they have a lot of hours on them, but this is supposed to be a proof-of-concept implementation.
Just to see how well FreeNAS works in our environment, before we make recommendations to replace the Dells and EMCs in prod.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
What do you guys think?
I know they have a lot of hours on them, but this is supposed to be a proof-of-concept implementation.
Just to see how well FreeNAS works in our environment, before we make recommendations to replace the Dells and EMCs in prod.
I think I get your drift... Yeah, go ahead and use the 2 drives with reallocated sectors in your proof-of-concept system if they pass a good burn-in test, preferably with a badblocks run and an extended SMART test.

Just don't put them both in the same mirrored pair! :smile:
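For anyone following the same process, a per-drive burn-in pass along the lines Spearfoot describes might look like this (destructive to all data on the disk; da10 is just an example, and each step can take many hours on a 1 TB drive):
Code:
# destructive four-pattern write/read test across the whole disk
badblocks -ws /dev/da10

# then kick off a long (extended) SMART self-test...
smartctl -t long /dev/da10

# ...and review the attributes and self-test log once it completes
smartctl -a /dev/da10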
 

nev_neo

Dabbler
Joined
May 18, 2016
Messages
12
I've run the burn-in test with badblocks testing over the long weekend. Took forever! Now I've got to wait for the long SMART tests (smartctl -t long) to complete.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
I've run the burn-in test with badblocks testing over the long weekend. Took forever! Now I've got to wait for the long SMART tests (smartctl -t long) to complete.
Ha, I know! I burned in a pair of WDC 2TB Red drives last week - took nearly two days!
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
What would be the best way to carve out a 14 x 1TB drive configuration?
You've specified performance and easy expandability, so I agree that striped mirrors make sense. Depending on your uptime requirements, you might even consider 3-way mirrors.
 

nev_neo

Dabbler
Joined
May 18, 2016
Messages
12
You've specified performance and easy expandability, so I agree that striped mirrors make sense. Depending on your uptime requirements, you might even consider 3-way mirrors.

3-way mirrors would be awesome, but unfortunately I need the disk space.
My current minimum is ~4.1 TiB... with mirrored vdevs, that brings me close to 70% full (~6.5 TiB raw).
If I lose 2 disks, the worst-case scenario would mean I'd lose all my data. I'm not too keen on playing those odds, especially with older drives like these.
I could keep 2 hot spares for the array... but would that be enough?
That RAIDZ2 stripe is looking like a safer option.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
I could keep 2 hot spares for the array... but would that be enough?
Define "enough". You haven't specified uptime or MTTDL requirements. Whatever you do, you must have backups of everything you care about.
That RAIDZ2 stripe is looking like a safer option.
It is safer, but it will not perform as well.

It's hard to offer useful suggestions when your requirements keep shifting. So far, you've specified:
  1. Cheap (repurposed gear).
  2. Performance.
  3. Easy expansion.
  4. Capacity.
  5. Reliability.
Unfortunately, you can't have all the above in one box.
 

nev_neo

Dabbler
Joined
May 18, 2016
Messages
12
True.
I'm just trying to make the most of what I have, without shooting myself in the foot and having to listen to the "I told you so" BS.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I wouldn't waste time and resources on hot spares. FreeNAS will not automatically roll over to a hot spare. Since this is for testing (based on what you said), not production, I would trust the drives not to fail in the short term. Once your proof of concept is complete, hopefully you will have the budget for new hardware, and then you can go with three-way mirrors, which will outperform any configuration of RAID-Z.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
FreeNAS will automatically roll over to a hot spare actually.

Note that with VMs you really want mirrors and not RAID-Zx.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
FreeNAS will automatically roll over to a hot spare actually.

I recall reading that it did not work in an earlier version. It may work now. Things change. I am always learning.
 