Excessive time mounting local filesystems

Status
Not open for further replies.

gaston917

Cadet
Joined
Jul 30, 2013
Messages
5
It's currently taking about 3 hours to reboot my FreeNAS box; the majority of the time is spent at the step "Mounting local file systems".

The largest pool is a 6 TB RAIDZ1 with a 32 GB cache vol.

- FreeNAS 9.1 RC2
- System memory: 16 GB
- Boots from USB

Can anyone tell me if this is normal behavior? Is it running a file system check or scrub every time I reboot?
 

warri

Guru
Joined
Jun 6, 2011
Messages
1,193
This is certainly not normal. Scrubs are usually only configured to run once a month or so.
However, if a scrub was in progress when you rebooted the machine, it will resume immediately after the reboot and may delay startup. But I personally don't believe that accounts for a 3-hour delay.

I'd check the SMART attributes of your disks and the pool status to see if there are any problems with the disks.

Did you use an 8.x version before?
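For anyone following along, both checks can be scripted; a minimal sketch, assuming device names ada0-ada2 (list your actual disks first with `camcontrol devlist`):

```shell
#!/bin/sh
# Sketch only: pull the SMART attributes most often tied to failing media.
# ada0..ada2 are placeholder device names, not taken from the thread.
PATTERN='Reallocated_Sector|Current_Pending|Offline_Uncorrectable|UDMA_CRC'
for disk in ada0 ada1 ada2; do
  echo "=== ${disk} ==="
  smartctl -A "/dev/${disk}" 2>/dev/null | grep -E "${PATTERN}" \
    || echo "(no output - run this on the NAS itself)"
done

# Then check the pool for errors, and for a scrub that resumed after reboot:
zpool status -v 2>/dev/null || true
```

Non-zero raw values on any of those attributes, or a `scan: scrub in progress` line in the `zpool status` output, would explain a slow start.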
 

gaston917

Cadet
Joined
Jul 30, 2013
Messages
5
I appreciate the response; thanks for the insight.

I did not; I hopped straight into the 9.x betas because it was something I was easing into. I've walked it through a few upgrades to the latest revision, but I can't remember offhand at which point it started taking this long to mount the filesystems. It's consistently exhibiting this behavior, even through clean shutdowns. It's configured for weekly scrubs, and they usually finish in about 90-120 minutes, so if one ran it was definitely not intended.
 

warri

Guru
Joined
Jun 6, 2011
Messages
1,193
If the disks don't show any problems you could try a clean installation on a new USB stick and see if that resolves it - sometimes USB sticks get corrupted.
 

gaston917

Cadet
Joined
Jul 30, 2013
Messages
5
Following some more investigation: after detaching the primary zpool (vol01) and rebooting, the issue ceased. However, during the re-import of "vol01" I am seeing behavior similar to what I experienced during the reboot. Is it possible that the zpool is corrupted?
 

prenger745

Cadet
Joined
Jan 5, 2012
Messages
6
I, too, am experiencing this. It was not an issue until last night, when I went to 9.1. I thought it was just stuck, so I rebooted and tried other things. It would load fine on a fresh install, but once I imported my settings and rebooted, it would hang at the same place. It has been 2 hours now and it is still stuck there. I am going to let it go (this isn't anything critical, just my home stuff) and see if it finally mounts.
 

gaston917

Cadet
Joined
Jul 30, 2013
Messages
5
My issue seems to be resolved. I suspect it was related to a problem with the L2ARC drive attached to the volume that was hanging. I confirmed the suspect volume by detaching it, rebooting with no issue, and importing it back. Once I identified that it was that zpool, I removed the cache device, ran another scrub, and then was able to reboot without the delay.
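For anyone hitting the same thing, the removal step looks roughly like this; the pool name `tank` and the gptid are placeholders, so read the real names from the `cache` section of your own `zpool status` output first:

```shell
#!/bin/sh
# Sketch only: drop an L2ARC (cache) device from a pool. Cache and log
# devices can be removed from a live pool without touching the data vdevs.
# "tank" and the gptid below are placeholders, not names from this thread.
zpool status tank 2>/dev/null            # cache devices appear under a "cache" heading
zpool remove tank gptid/00ee-example 2>/dev/null || true
zpool scrub tank 2>/dev/null || true     # optional: verify the pool afterwards
```

Removing the cache device only costs you the read cache; ZFS will simply repopulate a replacement L2ARC over time.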
 

prenger745

Cadet
Joined
Jan 5, 2012
Messages
6
Can you help me figure out how to do that? I have it back up, and it auto-imported the ZFS pool. I haven't made any changes that would require a reboot. How do I "remove the cache volume"...?

Thanks
Dan
 

gaston917

Cadet
Joined
Jul 30, 2013
Messages
5
If you don't know whether you have one, you probably don't, so that particular path may be a non-issue; you would've had to create it manually.

Type "zpool list -v", press Enter, and look for a section labeled "cache".

Did you experience the delay in booting with the pool detached?
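A sketch of the same check scripted: zpool prints a `cache` section header when an L2ARC device is attached, visible in both `zpool list -v` and `zpool status`:

```shell
#!/bin/sh
# Sketch only: look for the "cache" section header in the vdev listing.
# Any attached L2ARC devices are listed indented beneath it.
zpool status 2>/dev/null | grep -A2 -E '^[[:space:]]*cache' \
  && echo "cache device present" \
  || echo "no cache section found (or not running on the NAS)"
```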
 

prenger745

Cadet
Joined
Jan 5, 2012
Messages
6
With the pool detached it boots fine. Auto-importing takes about 20 minutes. So I thought maybe I needed to run a scrub and went to start one, and it said one was already running. Then I ssh'd in, ran zpool status, and it said something like 20%, 5 more hours to go. I checked it 5 hours later and it hadn't moved much. I let it go overnight and while I was at work today, and just now it is saying 80%, 6 hours left. So it IS moving, but SLOWLY. Then I went down to where the actual server is and looked at the screen, and I see:

Solaris: WARNING: Disk /dev/gptid/00ee... has a block alignment that is larger than the pool's alignment.

It shows that four times.

I also see

Unable to scrub Storage: cannot scrub Storage: currently scrubbing; use 'zpool scrub -s' to cancel current scrub
(ada0:ata2:0:0:0:0)READ_DMA48. ACB: 25 00 19 0f f1 40 46 00 00 00 ab 00
CAM status: ATA Status Error
ATA status: 51 (DRDY SERV ERR), error: 84 (ICRC ABRT)
RES: 51 84 19 0f f1 46 46 00 00 6a 00
Retrying command

It seems to repeat this. I don't want to cancel the scrub because it IS moving... but any idea what all of that means? I am pretty sure it is not good news...

Dan
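As an aside on the error above: the ICRC ABRT status generally points to CRC failures on the SATA link (cable or connector) rather than on the platters, and SMART attribute 199 counts those failures. A quick check, with `ada0` taken from the CAM error message:

```shell
#!/bin/sh
# Sketch only: ICRC ABRT errors normally increment SMART attribute 199
# (UDMA_CRC_Error_Count). A count that keeps rising points at the cable
# or backplane rather than the disk surface. ada0 comes from the CAM
# error message in the post above.
smartctl -A /dev/ada0 2>/dev/null | grep -E 'UDMA_CRC|CRC_Error' \
  || echo "(run on the NAS itself)"
```

If that counter climbs between checks, reseating or replacing the SATA cable is the usual first step.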
 

wtbdeath

Cadet
Joined
Dec 21, 2014
Messages
8
I have the same problem, except I didn't wait 3 hours for it; it seems I'll have to now.

It started on a reboot after an upgrade, but I think that is not related. I tried it on a fresh install: auto-import takes more than 20 minutes, so I killed it. Mounting file systems takes more than 20 minutes, so I killed it several times.

It does not seem related to the controller, as I swapped several M1015s and cables; it made no difference.

It could be my drives. The pool is RAIDZ2, and I don't want to pull random drives to degrade it. I am sure not all the drives are broken, but it's very frustrating that it takes this long to mount or fault a drive. I am waiting for it to mount while I see if anybody comes across a solution.

The HDDs are thrashing while the system is stuck, so it is doing something, but what is it doing?
 

wtbdeath

Cadet
Joined
Dec 21, 2014
Messages
8
After maybe 1 hour, it mounted fine and reported no errors. I went ahead with a scrub; 75 hours left.
 

enemy85

Guru
Joined
Jun 10, 2011
Messages
757
After maybe 1 hour, it mounted fine and reported no errors. I went ahead with a scrub; 75 hours left.
75 hours??? [emoji33][emoji33][emoji33]
I guess you really have some problems... I've never heard of such a long time for a scrub!
What is your hardware configuration? What is your M1015 firmware version?
 

wtbdeath

Cadet
Joined
Dec 21, 2014
Messages
8
75 hours??? [emoji33][emoji33][emoji33]
I guess you really have some problems... I've never heard of such a long time for a scrub!
What is your hardware configuration? What is your M1015 firmware version?
I know; it must have some problem, as the pool should mount really quickly in the first place. It is an average i7-3770 with 32 GB of non-ECC RAM on firmware P9, with a quad-port Intel VT card.

It was not much different before, with 96 GB of ECC RAM on a quad-core Xeon and an M1015 on firmware P19 passed through, running as a VM.

So RAM was not the problem, even though it was the problem for most people who had this issue.


Here is the scrub output:

scan: scrub in progress since Sun Jan 4 20:44:09 2015
120G scanned out of 4.14T at 15.6M/s, 75h12m to go
0 repaired, 2.82% done
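That ETA is at least internally consistent with the numbers zpool printed; a rough integer-arithmetic check (binary units assumed):

```shell
#!/bin/sh
# Sanity-check zpool's estimate: (4.14 TiB total - 120 GiB already scanned)
# at 15.6 MiB/s. Shell arithmetic is integer-only, so 15.6 is scaled to 156/10.
remaining_mib=$(( (4240 - 120) * 1024 ))   # 4.14 TiB is roughly 4240 GiB
secs=$(( remaining_mib * 10 / 156 ))       # divide by 15.6 MiB/s
echo "about $(( secs / 3600 )) hours"      # prints "about 75 hours"
```

So the real question is why the scan rate is stuck at ~15 MB/s; a healthy pool typically scrubs an order of magnitude faster.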
 

enemy85

Guru
Joined
Jun 10, 2011
Messages
757
I read somewhere that with FreeNAS 9.3, the M1015 card gives some warnings about the P19 firmware, while the best performance/reliability is with the P16 version.
 

wtbdeath

Cadet
Joined
Dec 21, 2014
Messages
8
I read somewhere that with FreeNAS 9.3, the M1015 card gives some warnings about the P19 firmware, while the best performance/reliability is with the P16 version.
You are correct, but it has always been the case; the warning just didn't get displayed in earlier versions. So it does not make any sense. I was on 9.3 and updated via the automatic updater, and it was not caused by the update, as I rolled the VM back to the original one as well as tried it with the old USB stick. It must be the drives.
 

enemy85

Guru
Joined
Jun 10, 2011
Messages
757
If it were me, I would do 2 things:
1) Check the SMART info of each disk, looking for signs of a possible failure;
2) If the drives are OK, flash the M1015 to the P16 firmware version. But that's only my opinion.
 

wtbdeath

Cadet
Joined
Dec 21, 2014
Messages
8
I just ran a SMART check on each disk; all 7 disks have 0 bad sectors and 0 pending sectors, and temperatures are in the 34-38 °C range (the threshold is 43 °C).

Not sure about the firmware; I will flash it sooner or later, but it did not have any problems before, even on 9.3 with the current firmware.

I didn't notice the above 2 lines didn't get posted until this morning; the scrub finished in 11 hours and repaired 0 errors.
 

wtbdeath

Cadet
Joined
Dec 21, 2014
Messages
8
Did a reboot and re-import; it imported in no time. FreeNAS/ZFS must have done something to the drives, but hidden it away from the user.
 