Resizing a UFS volume

Status
Not open for further replies.

Philippe

Cadet
Joined
Jun 11, 2013
Messages
2
I'm using FreeNAS 8.3.

I have hardware RAID, and I chose it over ZFS mostly for the ability to grow and rebuild my RAID 6 setup, which ZFS cannot do within the same array according to the documentation.
I started with 5 drives of 1.5 TB in RAID 6, giving one 4 TB disk as far as FreeNAS is concerned. I created a UFS volume and filled it with data.
Then I added 6 more drives and rebuilt the array. This gave me a 12 TB disk, but my volume under FreeNAS was still 4 TB, which is to be expected, as I still had to extend the partition and the file system, which apparently is not something people do.

Some of you might be critical of the choices above, but what I'm really looking for here is not what I should have done; it's what I should do now.

Here's what I did after reading posts like this one:
- Booted into single-user mode
- Ran some checks with gpart, which told me the partition table was corrupted
- Recovered with gpart: #gpart recover mfid2
- Resized the partition: #gpart resize -i2 mfid2
- Ran a check and now have a full 12 TB partition
- Tried to extend the UFS filesystem: #growfs mfid2
- Which returned the error: superblock not recognized
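
For reference, here is the exact sequence typed, all in one place (mfid2 being the RAID 6 array as FreeBSD presents it):

Code:
#gpart recover mfid2
#gpart resize -i2 mfid2
#growfs mfid2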

That's where I am now: after rebooting, the volume no longer mounts.
The data on the disk cannot all be conveniently backed up due to its sheer size. I backed up what could not be replaced, so it is not ultra critical for me to recover it, but it would be very nice to, and would save me a lot of time.
Does anyone have an idea of how I can recover from this sorry state, and how to grow properly? I have room for 8 more disks, and even if I say goodbye to my data and start from scratch, I will want to grow in the future.

Thanks for any light you guys can shed.
P
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
My opinion on this is now several years out of date, but in the FreeBSD 5/6/7 era UFS/FFS had extreme trouble with sizes above about 1TB: large filesystems had mysterious problems being ported back and forth between major revs and/or architectures, growfs didn't work beyond 1TB (IIRC), and generally speaking UFS was showing some signs of age. That, plus the ever-increasing practicality of just throwing CPU and memory at ZFS, was sufficiently compelling to convince me to move on to ZFS for large filestores.

You haven't provided much data about the state this has actually left your system in. Is it able to see your filesystem at all? I'm assuming the syntax you used for growfs is bad, since you would have needed the partition in there (growfs /dev/mfid2p2?). What does "df /dev/mfid2p2" return? And "fsck -n /dev/mfid2p2" (should be safe to run with -n)?
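
In other words, from a shell on the box, something like this (assuming mfid2p2 really is where the UFS partition lives):

Code:
#df /dev/mfid2p2
#fsck -n /dev/mfid2p2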
 

Philippe

Cadet
Joined
Jun 11, 2013
Messages
2
Thanks for that.
Indeed my use of growfs was obviously wrong.
I did:
Code:
#growfs /dev/mfid2p2
growfs: we are not growing (1097375467 -> 71949547)


Now that's strange. A couple of things: my system is a 64-bit AMD machine, and the disk was full.
I read there were some bugs a while back with 64-bit systems, and that you need free space in the first cylinder, but the numbers above don't make sense to me either way.
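
I haven't poked at the superblock directly yet; if I read the man page right, dumpfs should print whatever superblock it can find, which might show whether anything recognizable is left (my assumption, not something I've tried yet):

Code:
#dumpfs /dev/mfid2p2 | head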

Code:
#df /dev/mfid2p2
df: mkdtemp("/tmp/df.68gUro") failed: Read-only file system


I set up FreeNAS on a USB drive, which might explain the read-only error, but when I do:
Code:
#diskinfo /dev/mfid2p2
/dev/mfid2p2    512    13488844881408  26345400159  0  2147549184  1639925  255  63


And:
Code:
#fsck /dev/mfid2p2
fsck: Could not determine file system type
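
(My guess is fsck can't determine the type because this volume isn't listed in /etc/fstab; naming the type explicitly should at least invoke the UFS checker:)

Code:
#fsck -t ufs -n /dev/mfid2p2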


But:
Code:
#gpart show mfid2
=>         34  26349594557  mfid2  gpt  (12T)
           34           94         - free -  (47k)
          128      4194304      1  freebsd-swap  (2.0G)
      4194432  26345400159      2  freebsd-ufs  (12T)
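
If I'm doing the math right, partition 2 now ends at sector 4194432 + 26345400159 = 26349594591, which matches the end of the table (34 + 26349594557 = 26349594591), so the resize itself seems to have taken the whole disk; it's just the filesystem on top that is unhappy.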
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It'd be very handy to get df's estimation of the state of things. It is somewhat less strict than some of the other utilities and can be a quick measure of whether or not there's anything vaguely sane-looking still out there.
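
That mkdtemp failure, by the way, looks like the FreeNAS boot media being mounted read-only rather than anything wrong with your array: df apparently wants to create a scratch mountpoint under /tmp. Untested guess, but remounting root read-write (or putting a tmpfs on /tmp) should let df run:

Code:
#mount -uw /
#df /dev/mfid2p2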

I just created a 44TB UFS partition, which appears to have worked, amazingly enough. Mostly intended as a sanity check that it's even possible to build a large UFS filesystem.
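
(For anyone who wants to repeat that experiment without 44TB of real disk: a sparse vnode-backed md device is one cheap way to fake it up, roughly along these lines, with /somepath/big.img standing in for wherever you have a little real space. mdconfig prints the name of the md unit it attaches; newfs the one it gives you, md0 assumed here.)

Code:
#truncate -s 44T /somepath/big.img
#mdconfig -a -t vnode -f /somepath/big.img
#newfs /dev/md0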

But basically there are only two routes forward that I see:

1) df on a properly-booted system indicates something bad (like "df: /dev/mfid2p2: Invalid argument"), in which case you're left with dredging around on the disk device to see if you can sleuth out what's happened.

2) df shows a filesystem out there, which means it might be repairable/retrievable, so you might try force mounting it, or when that fails, trolling about with "fsdb /dev/mfid2p2" to see how much more damage can be inflicted (a sketch follows below)... but you're not too likely to actually *fix* this; at best you might get it into a state where you can read data off the filesystem in order to save it on another disk.
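
To sketch route 2 a bit (no promises any of this gets anywhere on a filesystem in this state, and /path/to/spare is a stand-in for wherever you can park the data):

Code:
#mount -f -o ro /dev/mfid2p2 /mnt
#tar -C /mnt -cf - . | tar -C /path/to/spare -xf -
#fsdb -r /dev/mfid2p2

The -f forces mounting of an unclean filesystem, read-only so nothing gets made worse; the tar pipe is just one way to copy off whatever turns out to be readable; and fsdb -r opens the filesystem debugger read-only for poking around first.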
 