SOLVED Boot hangs on mounting local filesystems

Status: Not open for further replies.
Joined: Aug 13, 2015 · Messages: 6
First off, the issue:
I was deleting data from a Samba share on the array when I lost all connectivity (no pings, no GUI, no shares). When I got to the console, I found the machine not responding to keyboard input. I did a hard reboot, and on the way back up it hung on "Mounting Local Filesystems". I let it run 8+ hours with no change, and have tried this several times. When booting into single-user mode and running
Code:
# zpool import -o readonly=on Data

all datasets come back with the error "failed to create mountpoint".

Code:
# zpool status -D Data
  pool: Data
 state: ONLINE
  scan: scrub repaired 0 in 28h55m with 0 errors on Mon Aug 10 04:55:37 2015
config:

        NAME                                            STATE     READ WRITE CKSUM
        Data                                            ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/8086f63e-bddb-11e4-be2f-00188b31c874  ONLINE       0     0     0
            gptid/80ea0c8f-bddb-11e4-be2f-00188b31c874  ONLINE       0     0     0
            gptid/81515939-bddb-11e4-be2f-00188b31c874  ONLINE       0     0     0
            gptid/822f97ca-bddb-11e4-be2f-00188b31c874  ONLINE       0     0     0

errors: No known data errors

dedup: DDT entries 26848633, size 793 on disk, 176 in core

bucket              allocated                       referenced
______   ______________________________   ______________________________
refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
------   ------   -----   -----   -----   ------   -----   -----   -----
     1    21.6M   2.67T   2.65T   2.65T    21.6M   2.67T   2.65T   2.65T
     2    4.02M    508G    504G    503G    8.14M   1.00T   1017G   1017G
     4    16.6K   1.26G   1.01G   1.03G    81.5K   5.62G   4.49G   4.61G
     8    3.32K    281M    251M    255M    33.1K   2.75G   2.45G   2.49G
    16      741   47.5M   37.2M   38.6M    17.2K   1.09G    872M    909M
    32      583   29.9M   15.4M   16.9M    26.1K   1.53G    809M    868M
    64       39   2.47M   1.15M   1.24M    2.91K    188M   82.3M   89.1M
   128       19     28K     22K    110K    3.32K   5.12M   3.84M   19.3M
   256        3   3.50K   3.50K   17.4K    1.11K   1.42M   1.42M   6.45M
   512        1     512     512   5.81K      515    258K    258K   2.92M
    1K        6    640K   20.5K   34.9K    10.4K   1.15G   37.5M   60.5M
    2K        3      3K      3K   17.4K    7.91K   8.63M   8.63M   46.0M
   16K        1   1.50K   1.50K   5.81K    17.7K   26.5M   26.5M    103M
 Total    25.6M   3.17T   3.14T   3.14T    29.9M   3.68T   3.65T   3.65T

Code:
# zpool list
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
Data  7.25T  4.36T  2.89T    60%  1.16x  ONLINE  -


Next, the specs:
Server: Dell PowerEdge 2950
Build: FreeNAS-9.2.1.6-RELEASE-x64 (ddd1e39)
Platform: Intel(R) Xeon(R) CPU 5130 @ 2.00GHz
Memory: 24554MB
Disks: 4x 2TB
RAID: RAIDZ1 with lz4 compression and dedupe enabled

I will be happy to post any further messages or outputs upon request
 
Last edited:
Joined: Oct 2, 2014 · Messages: 925
What are you using as the boot device? Have you tried reinstalling FreeNAS to a separate USB and importing your config?
 
Joined: Aug 13, 2015 · Messages: 6
The boot device is currently a 1GB USB 2.0 stick (the actual manufacturer eludes me ATM).
I checked the free space on it to rule that out as the cause:
Code:
# df -m
Filesystem           1M-blocks  Used  Avail  Capacity  Mounted on
/dev/ufs/FreeNASs1a        926   702    149       82%  /
devfs                        0     0      0      100%  /dev
/dev/md0                     4     3      0       79%  /etc
/dev/md1                     0     0      0        0%  /mnt
/dev/md2                   149    45     91       33%  /var
/dev/ufs/FreeNASs4          19     5     12       32%  /data
/dev/md3                  1917     1   1762        0%  /var/tmp/.cache


I have not tried a reinstall yet, but I can certainly give it a shot. I will post back with results.
 
Joined: Aug 13, 2015 · Messages: 6
Doing further research, I now realize the stick is not 1GB (my mistake was using df for sizing instead of checking the partition table). The partition table shows it is actually an 8GB USB stick.

As for the dedupe, I realize it was not the greatest of ideas. It was something I wanted to play with and, looking back, not a good choice.
 

titan_rw (Guru)
Joined: Sep 1, 2012 · Messages: 586
> As for the dedupe, I realize that that was not the greatest of ideas. It was something I wanted to play with and looking back, not a good choice.

Wow. ~27,000,000 entries in the DDTs. That's using around 8 gigs of RAM for a dedupe ratio of only 1.16:1.

And by default, metadata (which dedupe tables are) can only take up 1/4 of ARC. So with 24 gigs of RAM, you're normally limited to 6 gigs of metadata. To dedupe this amount of data, I'd say you'd want 64 gigs of RAM to be comfortable. The ~500 gigs of disk savings really isn't worth it.
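For anyone wondering where the ~8 gig figure comes from, here's the back-of-the-envelope math. The ~320 bytes per DDT entry in ARC is the commonly quoted rule of thumb, not something taken from this system's output:

```python
# Rough DDT RAM estimate from the zpool status -D output above.
ddt_entries = 26_848_633      # "DDT entries" reported by zpool status -D
bytes_per_entry = 320         # assumed in-ARC footprint per entry (rule of thumb)

ddt_ram_gib = ddt_entries * bytes_per_entry / 2**30
print(f"DDT needs roughly {ddt_ram_gib:.1f} GiB of ARC")

# With 24 GiB of RAM and metadata capped at 1/4 of ARC by default:
metadata_limit_gib = 24 / 4
print(f"Default metadata limit: {metadata_limit_gib:.0f} GiB")
```

So the table wants more ARC metadata space than the default cap even allows on this box.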

I would get rid of the deduped data ASAP. If the dedupe tables get too big for your current amount of RAM, there can be issues mounting the pool if there's ever an unclean shutdown. Create a dataset without dedupe, and move all the data to the new dataset.
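Something along these lines, assuming the pool is 'Data' and the deduped dataset is called 'old' (both names are placeholders, adjust to your layout):

```shell
# Create a replacement dataset with dedup explicitly off
zfs create -o dedup=off Data/clean

# Copy everything across, preserving permissions and timestamps
rsync -a /mnt/Data/old/ /mnt/Data/clean/

# Only after verifying the copy: destroy the deduped dataset,
# which is what actually shrinks the DDT
zfs destroy -r Data/old
```

Note the DDT only shrinks as the deduped blocks are freed, so the destroy is the step that matters.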
 
Joined: Aug 13, 2015 · Messages: 6
The question is this, then: will I be able to get it to boot if I add 8GB more RAM? There's a hardware limitation with the PowerEdge 2950, max memory being 32GB, and finding another machine with 64GB that I can boot from will be a bit of a challenge. Honestly, I just need to get it up long enough to get the data off. Based on the calculations (~8GB dedupe table), I might be able to squeak by, get the data off, and rebuild minus dedupe?

I know this is not an ideal scenario by any account, but I would at least like to try rather than give up and consider the data lost... even though there is an excellent chance of that being the case.
 

titan_rw (Guru)
Joined: Sep 1, 2012 · Messages: 586
Ahh, I didn't see that the machine did have an unclean shutdown. Yes, all of the dedupe table must fit into ram to import the pool.

I'd start with a fresh install so that it doesn't try to import the pool when it boots. Then try importing from the command line. Your zpool status shows the pool name as 'Data', but your import command shows 'raid'.

Whatever the pool name is, I'd try 'zpool import -o readonly=on -R /mnt poolname'
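Rough sequence, assuming the pool really is named 'Data' (running 'zpool import' with no arguments first shows what the system thinks is importable):

```shell
# List pools available for import without actually importing anything
zpool import

# Import read-only with an altroot so the mountpoints land under /mnt
# instead of being created on the (possibly read-only) boot filesystem
zpool import -o readonly=on -R /mnt Data

# Sanity-check that the datasets actually mounted
zfs list -r Data
```

The -R /mnt also sidesteps the "failed to create mountpoint" errors from the single-user attempt.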
 
Joined: Aug 13, 2015 · Messages: 6
zpool status line fixed, transcription error...

So, a summary: I have ordered 8GB more memory. When it comes in, I will do a fresh install of FreeNAS and perform a CLI import of the data. Estimated delivery of the memory is next week; I will check back in then with results. Thank you all for the help so far!
 

rogerh (Guru)
Joined: Apr 18, 2014 · Messages: 1,111
Just a question out of curiosity: when importing a pool with a large deduplication table in order to rescue data, rather than make much practical use of the server, can ZFS use swap space for the deduplication data if there isn't enough RAM?
 

titan_rw (Guru)
Joined: Sep 1, 2012 · Messages: 586
> Just a question out of curiosity: when importing a pool with a large deduplication table in order to rescue data, rather than make much practical use of the server, can ZFS use swap space for the deduplication data if there isn't enough RAM?

As I understand it, no. L2ARC or swap space can be utilized for DDTs during normal use of the pool, or when importing a previously cleanly exported pool. Pool performance will suffer in those two cases, but it'll still work.

The minute you have a 'dirty' pool to import, the DDTs MUST fit in RAM. That's why I would never, ever rely on L2ARC or swap space for dedupe tables. You're basically accepting that if the server ever crashes or panics, the pool is no longer needed. Not just the deduplicated parts, but the entire pool.

I remember reading about someone who had enabled dedupe pool-wide on a >10TB pool. After an unclean shutdown, the pool wouldn't mount, of course. I think he had to expand to 256GB of RAM or something in order to gain access to the pool again.
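For the record, anyone tempted by dedupe can preview the cost before turning it on: zdb can simulate the dedup table against existing data.

```shell
# Simulate dedup on an existing pool: walks the data and prints the
# DDT histogram plus the dedup ratio you would get, without changing
# anything on disk. (Can take hours on a large pool.)
zdb -S Data
```

If the simulated ratio comes back near 1.0x, as it effectively did here, the RAM cost buys you almost nothing.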
 

cyberjock (Inactive Account)
Joined: Mar 25, 2012 · Messages: 19,526
Can you post a debug file? System -> Advanced
 
Joined: Aug 13, 2015 · Messages: 6
OK, I'm back with news. I went ahead with the memory upgrade and followed titan_rw's steps: a fresh installation of FreeNAS and a manual CLI import. I'm happy to report I now have access to the data and am moving it off the dedupe datasets.

Thank you, everyone, for helping me through this! I was completely stumped on this one.
 