Having a problem with our FreeNAS box.
Our server is a Supermicro X7DBE+ with dual L5420 CPUs, 16 GB of RAM, and an LSI 9211-8i HBA. We are using 8x Seagate 2 TB Barracuda ST2000DM001 drives. The boot process reports FreeBSD-RELEASE-p4 #0 r262572+17a4d3d.
On boot it says:
Code:
Trying to mount root from ufs:/dev/ufs/FreeNASs1a [ro]...
WARNING: /data was not properly dismounted
Loading early kernel modules:
GEOM_RAID5: Module loaded, version 1.1.20130907.44 (rev 5c6d2a159411)
/dev/ufs/FreeNASs4: 12 files, 14459 used, 26068 free (36 frags, 3254 blocks, 0% fragmentation)
** /dev/ufs/FreeNASs4
** Last Mounted on /data
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
12 files, 14459 used, 26068 free (36 frags, 3254 blocks, 0.1% fragmentation)
***** FILE SYSTEM IS CLEAN *****
savecore: /dev/dumpdev: No such file or directory
Setting hostuuid: 53d19f64-d663-xxxxxx
Setting hostid: 0x7396de3a
ZFS filesystem version: 5
ZFS storage pool version: features support (5000)
No suitable dump device was found.
Entropy harvesting: interrupts ethernet point_to_point kickstart.
Starting file system checks:
/dev/ufs/FreeNASs1a: FILE SYSTEM CLEAN; SKIPPING CHECKS
/dev/ufs/FreeNASs1a: clean, 200300 free (
/dev/ufs/FreeNASs3: FILE SYSTEM CLEAN; SKIPPING CHECKS
/dev/ufs/FreeNASs3: clean, 2829 free
/dev/ufs/FreeNASs4: FILE SYSTEM CLEAN; SKIPPING CHECKS
/dev/ufs/FreeNASs4: clean, 26068 free
Mounting local file systems:.
... scrolls too fast ...
  File "/usr/local/lib/python2.7/site-packages/django/db/backends/util.py", line 53, in execute
    return self.cursor.execute(sql, params)
  File "/usr/local/lib/python2.7/site-packages/django/db/utils.py", line 99, in __exit__
    six.reraise(dj_exc_type, dj_exc_value, traceback)
  File "/usr/local/lib/python2.7/site-packages/django/db/backends/util.py", line 53, in execute
    return self.cursor.execute(sql, params)
  File "/usr/local/lib/python2.7/site-packages/django/db/backends/sqlite3/base.py", line 450, in execute
    return Database.Cursor.execute(self, query, params)
django.db.utils.OperationalError: no such column: system_advanced.adv_system_pl (scrolled off screen)
net.inet.tcp.sendbuf_max: 2097152 -> 2097152
kern.ipc.maxsockbuf: 2097152 -> 2097152
net.inet.tcp.recvbuf_max: 2097152 -> 2097152
vfs.zfs.l2arc_headroom: 2 -> 16
vfs.zfs.l2arc_noprefetch: 1 -> 0
vfs.zfs.l2arc_write_boost: 8388608 -> 400000000
net.inet.tcp.delayed_ack: 0 -> 0
vfs.zfs.l2arc_write_max: 8388608 -> 400000000
vfs.zfs.l2arc_norw: 1 -> 0
After sitting there for a few minutes, it reboots.
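From the traceback it looks like the config database on /data is missing a column the GUI expects. If it helps with diagnosis, I was planning to confirm that from single-user mode with something like this (I'm assuming the 9.x config database lives at /data/freenas-v1.db; please correct me if that's wrong):

Code:
# Mount the /data slice read-only and look at the table the traceback names
mount -r /dev/ufs/FreeNASs4 /data
sqlite3 /data/freenas-v1.db ".schema system_advanced"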
We had an issue a few weeks back where the pool ran out of free space. I was able to clear that up, and the server was working fine. The other day I happened to be on the server and noticed the alerts indicated one of the drives was bad. It showed no errors; from what I can remember it was just reporting some retries. I went into the GUI to see whether we could take the disk offline and rebuild onto the spare while the "bad" disk was replaced. I remember being able to pull the information up and seeing the option to edit/offline the disk. I wasn't quite sure, so I decided I would handle it when I was back at the office. When I came back, those options were no longer visible in the GUI. After a reboot, the system gets stuck at the point where it sets vfs.zfs.l2arc_norw. If I disconnect the drives, the system boots, but of course it complains about the missing volume1.
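For what it's worth, my understanding is that the shell equivalent of what I was trying to do in the GUI is roughly the following (volume1 is our pool, but the device names below are placeholders; I don't have the real gptids handy):

Code:
# Check SMART data on the suspect drive (da3 is a placeholder)
smartctl -a /dev/da3

# Take the failing disk offline, then rebuild onto the spare
zpool offline volume1 gptid/xxxx
zpool replace volume1 gptid/xxxx gptid/yyyy
zpool status volume1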
Is there a way to clear the volume from the FreeNAS configuration through the shell and then attempt to re-import the pool? I've come to grips with the fact that our pool may be done, but I'd like to see whether anything can be done to recover the data. It's mostly off-site backups, so worst case it's just a matter of re-seeding them. We'd prefer not to, but if we must, we must.
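In case someone can sanity-check it, this is roughly what I was planning to try from the shell once the stale volume is cleared from the config (pool name volume1; I understand -f and especially -F are last resorts):

Code:
# See whether the pool and all eight disks are even visible
zpool import

# Forced import under /mnt, which I believe is the FreeNAS convention
zpool import -f -R /mnt volume1

# Last resort: read-only rewind import just to get at the data
zpool import -f -F -o readonly=on -R /mnt volume1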
Thanks,
Carlos.