shinseiryu
Cadet
- Joined
- Nov 8, 2014
- Messages
- 2
(correction: I put 0.9.2 in the subject but actually meant 9.2.1.8)
Hello,
I had a system running an old 0.7.x FreeNAS build, and it ran into some problems with the OS drive and/or motherboard/CPU. I managed to get the data off my RAID1 and RAID5 volumes, and it is currently sitting on an external USB 3.0 HDD.
I have upgraded the hardware in this computer to the following:
Supermicro X10SLL+-F
Intel Xeon E3-1240v3
16GB of Crucial DDR3 ECC (CT2KIT102472BD160B)
8GB HP USB flash drive
4x 3.0TB Western Digital RED
The system boots up fine and ran through a few hours' worth of memtest86 without issue.
As the drives had some remnants of the old RAID5 array, I had issues getting FreeNAS to build a new pool using ZFS RAIDZ2. I googled around and ended up running this on ada0, ada1, ada2, and ada3:
sysctl kern.geom.debugflags=0x10
dd if=/dev/zero of=/dev/ada0 bs=1m
After what seemed like forever it finally finished, and after rebooting I was able to create the RAIDZ2 volume and install two plugins. I then restarted the system (I was having issues getting the plugins/jails working and wondered if a reboot would help), and after the restart FreeNAS is giving the alert "WARNING: The volume RaidZ2 (ZFS) status is UNKNOWN". I have pasted below some output from commands I have found, plus a small snippet from dmesg that looks odd (RAID5 is still on my drives?). Any help prepping these drives properly so that I can run RAIDZ2, or whatever is best for a 4x 3.0TB configuration, would be appreciated.
Code:
[root@freenas ~]# zpool status
no pools available
[root@freenas ~]# gpart show ada0
gpart: No such geom: ada0.
[root@freenas ~]# gpart show ada1
gpart: No such geom: ada1.
[root@freenas ~]# gpart show ada2
gpart: No such geom: ada2.
[root@freenas ~]# gpart show ada3
gpart: No such geom: ada3.
[root@freenas ~]# glabel status
                                      Name  Status  Components
                             ufs/FreeNASs3     N/A  da0s3
                             ufs/FreeNASs4     N/A  da0s4
                            ufs/FreeNASs1a     N/A  da0s1a
gptid/65a07e07-6346-11e4-8f9e-0025904657eb     N/A  raid5/Raid5p1
gptid/65b0c9de-6346-11e4-8f9e-0025904657eb     N/A  raid5/Raid5p2
[root@freenas ~]# graid status
[root@freenas ~]# zpool status
no pools available
Code:
Trying to mount root from ufs:/dev/ufs/FreeNASs1a [ro]...
GEOM_RAID5: Module loaded, version 1.1.20130907.44 (rev 5c6d2a159411)
GEOM_RAID5: Raid5: device created (stripesize=131072).
GEOM_RAID5: Raid5: ada3(2): disk attached.
GEOM_RAID5: Raid5: ada2(1): disk attached.
GEOM_RAID5: Raid5: ada1(0): disk attached.
GEOM_RAID5: Raid5: ada0(3): disk attached.
GEOM_RAID5: Raid5: activated (need about 76MiB kmem (max)).
GEOM: raid5/Raid5: the secondary GPT table is corrupt or invalid.
GEOM: raid5/Raid5: using the primary only -- recovery suggested.
ZFS filesystem version: 5
ZFS storage pool version: features support (5000)
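One thing I've read (not verified) is that the old GEOM_RAID5 module keeps its metadata near the end of each disk, so the next thing I plan to try is zeroing just the first and last 1 MiB of each drive instead of the whole 3.0TB. Here's a rough sketch of what I mean, demonstrated on a scratch file rather than a real disk (for a real drive you'd set kern.geom.debugflags=0x10 first, point it at /dev/adaX, and get the size from diskinfo -- and it destroys everything on that disk, so double-check the device names):

```shell
#!/bin/sh
# SKETCH ONLY: zero the first and last 1 MiB of a "disk", which is where
# the MBR/GPT tables and (reportedly) graid5 metadata live, instead of
# dd-ing all 3.0TB. Demonstrated on a throwaway file for safety.
# bs is written as 1048576 because FreeBSD dd wants lowercase "1m"
# while GNU dd wants "1M".
img=/tmp/fake_disk.img

# Stand-in "disk": 8 MiB of random bytes. On a real drive this would be
# /dev/ada0 etc. and you would skip this step.
dd if=/dev/urandom of="$img" bs=1048576 count=8 2>/dev/null

# Size in bytes -> number of 1 MiB blocks. On a real disk, take the
# media size from `diskinfo adaX` instead of wc.
size=$(wc -c < "$img")
blocks=$((size / 1048576))

# Zero the first 1 MiB (MBR / primary GPT area).
dd if=/dev/zero of="$img" bs=1048576 count=1 conv=notrunc 2>/dev/null

# Zero the last 1 MiB (secondary GPT and, reportedly, graid5 metadata).
dd if=/dev/zero of="$img" bs=1048576 count=1 seek=$((blocks - 1)) conv=notrunc 2>/dev/null
```

If that's wrong-headed, or there's a cleaner way to get graid5 to forget these disks, I'm all ears.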
Last edited: