How to silence the constant 4K drive alignment alert in release 9.2?


diablodale

Guest
Was there implemented a way to silence the constant zpool alert that 1 or more of the drives in my pool are not 4k aligned?

I fully intend to have a few drives in my pool not in alignment at this time. Therefore, this constant alert creates a scenario where a useful status (or problem) is masked due to an alert that does not apply to my configuration.
 

dlavigne

Guest
Do you mean in the Alert GUI? If so, uncheck the box next to the alert.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I could be mistaken, but isn't there a checkbox next to the alert in the WebGUI?
 

diablodale

Guest
Unfortunately, that doesn't clear the condition the system is alerting on. Last night I got the daily run output email from the system. As I understood it, these emails are only sent in release 9.2 if there is a problem. This email reported...

Code:
Backup passwd and group files:
no /var/backups/master.passwd.bak
no /var/backups/group.bak
 
Verifying group file syntax:
/etc/group is fine
 
Checking status of zfs pools:
NAME           SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
dale-mirror1  2.72T  1.42T  1.30T    52%  1.00x  ONLINE  /mnt
 
  pool: dale-mirror1
 state: ONLINE
status: One or more devices are configured to use a non-native block size.
        Expect reduced performance.
action: Replace affected devices with devices that support the
        configured block size, or migrate data to a properly configured
        pool.
  scan: scrub repaired 0 in 4h26m with 0 errors on Sat Dec 28 04:36:25 2013
config:
 
        NAME                                            STATE     READ WRITE CKSUM
        dale-mirror1                                    ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/9aa63aef-489d-11e2-8112-002522b3b35f  ONLINE       0     0     0
            gptid/9b68ec0b-489d-11e2-8112-002522b3b35f  ONLINE       0     0     0  block size: 512B configured, 4096B native
          mirror-1                                      ONLINE       0     0     0
            gptid/bfbd5037-eb9e-11e2-81b5-002522b3b35f  ONLINE       0     0     0  block size: 512B configured, 4096B native
            gptid/a4b12b55-4694-11e3-9eec-002522b3b35f  ONLINE       0     0     0  block size: 512B configured, 4096B native
 
errors: No known data errors
 
Checking status of 3ware RAID controllers:
Alarms (most recent first):
+++ /var/log/3ware_raid_alarms.today    2014-01-06 03:01:02.000000000 +0100
@@ -0,0 +1 @@
+
 
-- End of daily output --
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
Add the following sysctl and reboot:
Code:
variable: vfs.zfs.vdev.larger_ashift_minimal
value: 0
Please remember to remove the sysctl when you upgrade to the next version of FreeNAS. Also remove the sysctl and reboot before creating new pools.
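If it helps while testing, the value can also be checked or set from a shell. This is only a sketch; it assumes the OID is writable at runtime, and a setting made this way is lost at reboot (the persistent entry still goes in the GUI's Sysctls section as described above):
Code:
# Check the current value
sysctl vfs.zfs.vdev.larger_ashift_minimal
 
# Set it for the running system only (does not survive a reboot)
sysctl vfs.zfs.vdev.larger_ashift_minimal=0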
 

diablodale

Guest
Naturally, I already had that set.
The vfs.zfs.vdev.larger_ashift_minimal=0 sysctl does not stop the alert.
That sysctl allows usage of the drives/pools....fantastic.
Unfortunately, it still alerts every night even with that set. :-(
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
So you are getting the 3am email that something is wrong? Can you post the email and remove any personal info from it?
 

diablodale

Guest
The first night's email is above (it includes the initial 3ware warning). Below is what I now get every night:
Code:
Checking status of zfs pools:
NAME          SIZE  ALLOC  FREE    CAP  DEDUP  HEALTH  ALTROOT
dale-mirror1  2.72T  1.44T  1.27T    53%  1.00x  ONLINE  /mnt
 
  pool: dale-mirror1
state: ONLINE
status: One or more devices are configured to use a non-native block size.
        Expect reduced performance.
action: Replace affected devices with devices that support the
        configured block size, or migrate data to a properly configured
        pool.
  scan: scrub repaired 0 in 4h26m with 0 errors on Sat Dec 28 04:36:25 2013
config:
 
        NAME                                            STATE    READ WRITE CKSUM
        dale-mirror1                                    ONLINE      0    0    0
          mirror-0                                      ONLINE      0    0    0
            gptid/9aa63aef-489d-11e2-8112-002522b3b35f  ONLINE      0    0    0
            gptid/9b68ec0b-489d-11e2-8112-002522b3b35f  ONLINE      0    0    0  block size: 512B configured, 4096B native
          mirror-1                                      ONLINE      0    0    0
            gptid/bfbd5037-eb9e-11e2-81b5-002522b3b35f  ONLINE      0    0    0  block size: 512B configured, 4096B native
            gptid/a4b12b55-4694-11e3-9eec-002522b3b35f  ONLINE      0    0    0  block size: 512B configured, 4096B native
 
errors: No known data errors
 
-- End of daily output --
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
diablodale said:
Naturally, I already had that set.
The vfs.zfs.vdev.larger_ashift_minimal=0 sysctl does not stop the alert.
That sysctl allows usage of the drives/pools....fantastic.
Unfortunately, it still alerts every night even with that set. :-(
The sysctl helps in 9.2.0 in the case where you have 512B devices in an ashift=9 pool. However, I now understand you have real 4096B-sector devices in an ashift=9 pool. The sysctl won't help in this case :(.
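If you want to confirm that from the shell, something like the following should show the mismatch. The pool name is taken from your output above; /data/zfs/zpool.cache is where FreeNAS normally keeps its cache file, and ada0 is just a placeholder for one of your disks -- adjust both to your system:
Code:
# ashift of the pool's vdevs: 9 = 512B allocation, 12 = 4K
zdb -U /data/zfs/zpool.cache -C dale-mirror1 | grep ashift
 
# what the disk itself reports (sectorsize/stripesize)
diskinfo -v ada0 | egrep 'sectorsize|stripesize'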
cyberjock said:
So you are getting the 3am email that something is wrong? Can you post the email and remove any personal info from it?
It's the output of the periodic 404.status-zfs script. It runs zpool status -x and includes the output in the daily email if it is anything other than "all pools are healthy" or "no pools available".
@diablodale you may want to open a bug report here: https://bugs.freenas.org/projects/freenas/issues
If you want to silence the email in this version, probably the easiest solution is to edit /conf/base/etc/periodic.conf and change daily_status_zfs_enable="YES" to "NO". You first need to make the root filesystem writable with "mountrw /", and reboot after you make the edit. However, this is not optimal, as it completely disables the periodic ZFS checks. You would then have to rely on the FreeNAS alerter (the code that drives the stop light in the GUI) to email you if there is a problem with a pool -- the alerter will email you when a pool becomes DEGRADED. On the other hand, the alerter checks the pool status every 5 minutes, while periodic only runs once a day.
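For example, a rough sketch of that edit from a shell (the sed one-liner is just one way to flip the setting; double-check the file before rebooting):
Code:
# make the root filesystem writable first
mountrw /
 
# flip daily_status_zfs_enable from YES to NO
sed -i '' 's/daily_status_zfs_enable="YES"/daily_status_zfs_enable="NO"/' /conf/base/etc/periodic.conf
 
# confirm the change, then reboot
grep daily_status_zfs_enable /conf/base/etc/periodic.conf
reboot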
(The more complex solution is to modify the 404.status-zfs script to ignore this one condition, but still generate output for any other problem.)
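Purely as an illustration of that idea, the check in a modified copy of the script could look roughly like this. This is a hypothetical sketch, not the actual 404.status-zfs code; it assumes the stock script's approach of testing the output of "zpool status -x", and the pattern list for "real" problems is deliberately rough:
Code:
status=$(zpool status -x)
case "$status" in
"all pools are healthy"|"no pools available")
        rc=0
        ;;
*)
        # Stay quiet if the only complaint is the non-native block size
        # warning and no vdev is in a bad state; report everything else.
        if echo "$status" | grep -q "non-native block size" &&
           ! echo "$status" | egrep -q "DEGRADED|FAULTED|UNAVAIL|OFFLINE|REMOVED"; then
                rc=0
        else
                echo "$status"
                rc=1
        fi
        ;;
esac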
 