Major Performance Issues after 70 days of working great

Status
Not open for further replies.

Stephen Jones

Dabbler
Joined
Mar 16, 2015
Messages
14
OK, so:

Board is an Asus H87M with 8GB of RAM.
Intel(R) Core(TM) i5-4460 CPU @ 3.20GHz
Flash drive for the OS
6x 3TB Seagate ST3000DM001
Realtek RTL8111/8168B NIC

After hours of reading, I've learned that:

1. Apparently 8GB of RAM isn't enough
2. Realtek NICs are evil
3. It might be smart to add an SSD to use as a cache (rough command sketched below)
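
On point 3, if it comes to that, attaching an SSD as a read cache (L2ARC) is a one-liner. This is just a sketch: ASIL is our pool, ada6 is a placeholder for whatever the SSD actually shows up as, and from what I've read L2ARC only pays off once there's enough RAM to index it.

zpool add ASIL cache ada6     # attach the SSD to the existing pool as an L2ARC read cache
zpool status ASIL             # it should now appear under a "cache" section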

Currently all 6 drives are set up in one giant vdev, shared out via iSCSI file extents.
Today was torture; the entire environment was barely moving.
TWO VMs, but there are about 60 or more users.
Exchange, file server and Terminal Server.

It appears that we're being killed by random reads and writes.
Right now no one is here and I'm able to hit pretty high speeds, but today during production nothing would move. The only thing is, we've been using this for over 90 days without even a slight hiccup.

Now VMware is freaking out about every minute, complaining that I/O latency is high.
Based on that, can anyone point out any "yeah dude, that was a dumb idea" mistakes in our setup?

Do you think RAM is the issue?
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
TWO VMs, but there are about 60 or more users.
With that hardware?
You're pulling our leg, right?
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Besides the fact that it's consumer-grade hardware and that you have nowhere near enough RAM, I can think of two things if it worked before and doesn't anymore: 1) you filled the pool too much, 2) one or more drives have problems.

1) What is the used space on the pool?

2) Can you post the output of smartctl -q noserial -a /dev/adaX for each drive (replacing adaX with the right device label), please?
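
Something like this should gather both in one go (just a sketch: I'm assuming the devices are ada0 through ada5, adjust to match your system, and run it under sh if your root shell is csh):

zpool list    # pool size, allocated and free space
zfs list      # per-dataset used/available space
for d in ada0 ada1 ada2 ada3 ada4 ada5; do
    echo "===== /dev/$d ====="
    smartctl -q noserial -a /dev/$d
done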
 

Stephen Jones

Dabbler
Joined
Mar 16, 2015
Messages
14
And if the hardware is bad, what is the problem? Where is the bottleneck?
Disks look like they are barely doing anything. Memory on the SAN isn't that high. CPU usage is almost nothing.
 

Stephen Jones

Dabbler
Joined
Mar 16, 2015
Messages
14
Here is the output for ada0.

All the other disks are exactly the same.

[root@freenas] ~# smartctl -a /dev/ada0 -q noserial
smartctl 6.2 2013-07-26 r3841 [FreeBSD 9.2-RELEASE-p12 amd64] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family: Seagate Barracuda 7200.14 (AF)
Device Model: ST3000DM001-1ER166
Firmware Version: CC43
User Capacity: 3,000,592,982,016 bytes [3.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 7200 rpm
Device is: In smartctl database [for details use: -P show]
ATA Version is: ACS-2, ACS-3 T13/2161-D revision 3b
SATA Version is: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Mon Mar 16 19:34:38 2015 CDT

==> WARNING: A firmware update for this drive may be available,
see the following Seagate web pages:
http://knowledge.seagate.com/articles/en_US/FAQ/207931en
http://knowledge.seagate.com/articles/en_US/FAQ/223651en

SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status: (0x82) Offline data collection activity
was completed without error.
Auto Offline Data Collection: Enabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: ( 80) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 1) minutes.
Extended self-test routine
recommended polling time: ( 316) minutes.
Conveyance self-test routine
recommended polling time: ( 2) minutes.
SCT capabilities: (0x1085) SCT Status supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000f 116 099 006 Pre-fail Always - 104914424
3 Spin_Up_Time 0x0003 097 097 000 Pre-fail Always - 0
4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 3
5 Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 0
7 Seek_Error_Rate 0x000f 080 060 030 Pre-fail Always - 103529671
9 Power_On_Hours 0x0032 098 098 000 Old_age Always - 2481
10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0
12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 3
183 Runtime_Bad_Block 0x0032 100 100 000 Old_age Always - 0
184 End-to-End_Error 0x0032 100 100 099 Old_age Always - 0
187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0
188 Command_Timeout 0x0032 100 100 000 Old_age Always - 0 0 0
189 High_Fly_Writes 0x003a 097 097 000 Old_age Always - 3
190 Airflow_Temperature_Cel 0x0022 068 063 045 Old_age Always - 32 (Min/Max 21/37)
191 G-Sense_Error_Rate 0x0032 100 100 000 Old_age Always - 0
192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 1
193 Load_Cycle_Count 0x0032 100 100 000 Old_age Always - 17
194 Temperature_Celsius 0x0022 032 040 000 Old_age Always - 32 (0 21 0 0 0)
197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0
240 Head_Flying_Hours 0x0000 100 253 000 Old_age Offline - 2264h+00m+19.820s
241 Total_LBAs_Written 0x0000 100 253 000 Old_age Offline - 12771017211
242 Total_LBAs_Read 0x0000 100 253 000 Old_age Offline - 249535912539

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed without error 00% 2479 -

SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

[root@freenas] ~#
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
Please post the result of
zpool status
 

Stephen Jones

Dabbler
Joined
Mar 16, 2015
Messages
14
[root@freenas] ~# zpool status
pool: ASIL
state: ONLINE
scan: scrub repaired 0 in 3h14m with 0 errors on Sun Feb 8 05:14:43 2015
config:

NAME STATE READ WRITE CKSUM
ASIL ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
gptid/4f12eb12-937b-11e4-93d2-7824af33189f ONLINE 0 0 0
gptid/4f893bb7-937b-11e4-93d2-7824af33189f ONLINE 0 0 0
gptid/50099260-937b-11e4-93d2-7824af33189f ONLINE 0 0 0
gptid/508877ce-937b-11e4-93d2-7824af33189f ONLINE 0 0 0
gptid/5105930a-937b-11e4-93d2-7824af33189f ONLINE 0 0 0
gptid/517c915e-937b-11e4-93d2-7824af33189f ONLINE 0 0 0

errors: No known data errors
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
You got an Intel NIC you can pop into an expansion slot?
Maybe your Realtek NIC went south...
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
I just now noticed that it's using RAID6. Ugh :(
That's not good, FreeNAS no likey RAID.
 

Stephen Jones

Dabbler
Joined
Mar 16, 2015
Messages
14
You got an Intel NIC you can pop into an expansion slot?
Maybe your Realtek NIC went south...

I already thought of that, but this has TWO NICs: one on board, one separate.
I'm moving mailboxes from one DB to another right now.
Currently the DB is on a volume that is part of the VMDK.
They are moving to a volume where the VM talks directly to the SAN.
Right now I'm getting between 20 and 40 MB/s.
I'm trying to see how I can generate some random reads and writes.
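
One idea is fio. Something like this should hammer the pool with a random read/write mix (just a sketch: it assumes fio can be installed from packages, that the pool is mounted at /mnt/ASIL, and the sizes and job counts are placeholders):

mkdir -p /mnt/ASIL/fio-test
fio --name=randrw-test --directory=/mnt/ASIL/fio-test \
    --rw=randrw --rwmixread=70 --bs=4k --size=2g \
    --numjobs=4 --runtime=60 --time_based --group_reporting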
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Well, used space is very low, so let's exclude that.

But in the whole life of this drive only one short test has been executed, which is not good. You really must schedule automated tests (long and short ones) to monitor the drives. For now we can't know the status of the drive for sure with only a short test. Can you execute a long test with smartctl -t long /dev/adaX and then repost the output of the smartctl -a command once it's finished, please? Be careful, because a long test is very long (many hours, about 5.5 hours estimated for this drive) and it's not recommended to run it with a high load on the drive.
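
For reference, starting it and checking on it later looks roughly like this (a sketch, substitute your real device names and repeat for each drive):

smartctl -t long /dev/ada0      # start the extended (long) self-test
smartctl -l selftest /dev/ada0  # run this later to see progress and results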

"RAIDZ2 which is basicly Raid6" RAID-Z2 is NOT RAID6, please read the terminology thread (link is in my sig).
 

Stephen Jones

Dabbler
Joined
Mar 16, 2015
Messages
14
Well, used space is very low, so let's exclude that. But in the whole life of this drive only one short test has been executed, which is not good. You really must schedule automated tests (long and short ones) to monitor the drives. For now we can't know the status of the drive for sure with only a short test. Can you execute a long test with smartctl -t long /dev/adaX and then repost the output of the smartctl -a command once it's finished, please? Be careful, because a long test is very long (many hours, about 5.5 hours estimated for this drive) and it's not recommended to run it with a high load on the drive.

Yes, I noticed that today, and they have now been scheduled.
I manually ran the short ones today just to rule that idea out.
Drive failure is highly unlikely, although possible:

1. They're all brand new.
2. Now that people are gone, I'm getting fast speeds again.

I'm still thinking random reads and writes are what killed us, and will kill us again if we have another day like today.
Can someone help me test that theory? (I've put a rough monitoring sketch at the bottom of this post.) Would more RAM help with that? I can add plenty.

I'm all for changing ANYTHING to fix this problem properly.
But right now I'm looking for ANYTHING that helps get us through the day tomorrow.

Also, thank you all for your help.
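
Here's roughly what I plan to capture during the slowdown tomorrow (a sketch: ASIL is our pool, and the ARC counters are there to see whether more RAM would actually help):

gstat                           # per-disk busy% and latency, refreshed live
zpool iostat -v ASIL 1          # per-vdev read/write ops and bandwidth, every second
# ARC size and hit/miss counters; a low hit ratio suggests more RAM would help
sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses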
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
OK, so it's just an under-dimensioned system; you need more RAM, far more ;)

And for the future you should think about using proper server-grade hardware and ECC RAM; look at the hardware recommendations and ECC RAM threads (links are in my sig) :)
 

Stephen Jones

Dabbler
Joined
Mar 16, 2015
Messages
14
Well, used space is very low, so let's exclude that.

But in the whole life of this drive only one short test has been executed, which is not good. You really must schedule automated tests (long and short ones) to monitor the drives. For now we can't know the status of the drive for sure with only a short test. Can you execute a long test with smartctl -t long /dev/adaX and then repost the output of the smartctl -a command once it's finished, please? Be careful, because a long test is very long (many hours, about 5.5 hours estimated for this drive) and it's not recommended to run it with a high load on the drive.

"RAIDZ2 which is basicly Raid6" RAID-Z2 is NOT RAID6, please read the terminology thread (link is in my sig).

According to this it is.

http://en.wikipedia.org/wiki/ZFS

Allows the failure of TWO disks


BUT I think I may have found the problem.

This SAN has 6 3TB drives.
There is ONE volume (which is a vdev, right?)

There is about 10TB of space available.

Isn't it the case that with ONE vdev you get the write IOPS of ONE disk, but the read IOPS of all six disks, with our current setup?
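
If that's the case, I'm guessing the real fix is rebuilding the pool as striped mirrors instead of one wide RAID-Z2 vdev. Purely a sketch of what I mean (this would destroy the existing pool and everything on it, so it's a migration project, not a quick fix):

# three 2-way mirror vdevs: roughly three disks' worth of random IOPS instead of one, at the cost of capacity
zpool create ASIL mirror ada0 ada1 mirror ada2 ada3 mirror ada4 ada5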
 

Stephen Jones

Dabbler
Joined
Mar 16, 2015
Messages
14
BTW, here is the section that says RAID-Z2 is similar to RAID6:

Unlike traditional file systems which reside on single devices and thus require a volume manager to use more than one device, ZFS filesystems are built on top of virtual storage pools called zpools. A zpool is constructed of virtual devices (vdevs), which are themselves constructed of block devices: files, hard drive partitions, or entire drives, with the latter being the recommended usage.[28] Block devices within a vdev may be configured in different ways, depending on needs and space available: non-redundantly (similar to RAID 0), as a mirror (RAID 1) of two or more devices, as a RAID-Z group of three or more devices, or as a RAID-Z2 (similar to RAID-6) group of four or more devices.[29] In July 2009, triple-parity RAID-Z3 was added to OpenSolaris.[30][31] RAID-Z is a data-protection technology featured by ZFS in order to reduce the block overhead in mirroring.[32]
 