How to tell if import disk is still running?

Status
Not open for further replies.
Hi all,

So I've been running a single-disk FreeNAS 9.x server for years and decided to build out a new NAS and migrate the data. After a lot of reading, I decided - for better or worse - that I'd pull the drive from the 9.x system, plug it into the new 11.1 system, run Import Disk from the GUI to a specific dataset, and then move the folders I want to keep. The old drive is a 2TB drive using UFS.

Here's the new system build:
FreeNAS 11.1 RELEASE 64bit [11.1-U5]
MOBO: Supermicro X11SSH-LN4F
CPU: Intel Xeon E3-1240 V5
RAM: 2x 16GB Crucial Technology CT16G4WFD824A 288-Pin EUDIMM DDR4
SSD: 2x PNY CS900 120GB SSD (boot, mirrored)
HDD: 6x Toshiba N300 8TB NAS (RAIDZ2 setup)
CASE: Fractal Design Node 804 Black MINI-ITX
PSU: Seasonic G-650 80 Plus Gold
UPS: CyberPower CP 1500C

Burn-in tests went pretty well. An old SSD failed so I changed it out for the 2 new PNY drives. Badblocks & SMART tests are all clear.

Last night, I started the disk import via the GUI, like this:
[Screenshot: Import disk - start.jpg]

This morning I checked on it and the status was still updating; everything seemed ok. Later I checked again and noticed that the current filename hadn't changed in a while. About 1 TB has copied over, with another ~0.75 TB to go. I found the disks weren't busy anymore, system load was above 1.0, the GUI was sluggish, and a python3.6 process was using 100% WCPU.
[Screenshot: Drive usage graphs.jpg]

The console was flooded with alert messages, repeating every 80 seconds:
Code:
Aug 18 15:18:16 lillet /alert.py: [system.alert:393] Alert module '<samba4.Samba4Alert object at 0x814b4e470>' failed: timed out
Aug 18 15:18:27 lillet /alert.py: [system.alert:393] Alert module '<update_check.UpdateCheckAlert object at 0x81405c550>' failed: timed out
Aug 18 15:19:37 lillet /alert.py: [system.alert:393] Alert module '<samba4.Samba4Alert object at 0x814b4e470>' failed: timed out
Aug 18 15:19:47 lillet /alert.py: [system.alert:393] Alert module '<update_check.UpdateCheckAlert object at 0x81405c550>' failed: timed out


I made a poor decision and killed the python process that was using up all the CPU. That appears to have killed the disk import procedure: shortly after, the two rsync processes I'd seen via top were also gone. I decided to reboot. After rebooting I checked via top, and I think the same python process and two rsync processes are back. However, the disk import doesn't look like it's continuing, or at least it isn't making any progress. Used disk space hasn't changed in over an hour:

Code:
[root@lillet ~]# zfs list Tank
NAME   USED  AVAIL  REFER  MOUNTPOINT
Tank  1.07T  27.0T   176K  /mnt/Tank


I tried setting up the disk import through the GUI again, based on what I read in another thread, but I'm getting a disk busy error instead of the import status showing back up:

Code:
Import of Volume /dev/da0p2 Failed.

Reason: [Errno 16] Device busy: '/var/run/importcopy/tmpdir/dev/da0p2'


I could use some help here. Any thoughts?


Thanks,
Dave
 

DrKK

FreeNAS Generalissimo
Sir:

Why not simply import the pool itself into your FN 11 system? Then simply do a zfs send/recv of the relevant data set(s) to the new volume? Then, detach the old volume, and go on with your day?
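For the record, that suggestion would look roughly like this. This is a sketch only: "oldpool" and the dataset names are placeholders, and it assumes the old disk actually holds a ZFS pool (it turns out below that it's UFS, so this path doesn't apply here).

```shell
# Import the old pool read-only under an alternate root so nothing
# on it can be modified during the migration
zpool import -o readonly=on -R /mnt oldpool

# Snapshot the dataset you care about, then replicate it to the new pool
zfs snapshot oldpool/data@migrate
zfs send oldpool/data@migrate | zfs receive Tank/data

# Detach the old pool when done
zpool export oldpool
```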
 

DrKK

FreeNAS Generalissimo
Ah I see. I read through these posts pretty fast and didn't pick up on it being UFS.

Must have been a pretty old FreeNAS sir!!! :)
 
Must have been a pretty old FreeNAS sir!!! :)

About 8~10 years, yep. Served us quite well!

Kinda hoping to get that much out of the 'new' one, just gotta get the data transferred over & finish reinstalling services (e.g. Plex).

I've had no luck clearing the "disk busy" condition. I tried unmounting the drive via the shell, and when that didn't work: shutting down, unplugging the drive, rebooting, verifying in the GUI that it's "gone", shutting down again, replugging the drive, rebooting, and trying again, all to no avail. I'm about ready to just reinstall FreeNAS 11.1 clean, reconfigure everything, and then figure out how best to copy the data over the network (rsync? via my desktop? <sigh>). More reading to do, as every method appears to have its pitfalls and configuration nuances.
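One way to see what's actually pinning the device before resorting to a reinstall, sketched under the assumption that the import's temporary mountpoint (the path from the "Device busy" error) is still in place:

```shell
# List processes with files open on the filesystem mounted at the
# import's temporary mountpoint (FreeBSD's fstat, not Linux's)
fstat -f /var/run/importcopy/tmpdir/dev/da0p2

# Check for leftover rsync workers from the import
# (the [r] trick keeps grep from matching itself)
ps aux | grep '[r]sync'
```

Whatever shows up there is what has to exit (or be killed) before umount will succeed.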
 
I guess you mean old FreeNAS version in a VM... (?)

Oh, yeah, that wasn't clear of me at all, now was it? I still have the 'old' FreeNAS system hardware. I pulled the data drive from it and plugged it into my 'new' FreeNAS system with a USB<-->SATA adapter. I was thinking that using "Disk Import" from the old UFS system to a temporary ZFS dataset would be a relatively quick and easy way to move the ~2TB of data.

Since I haven't been able to figure out how to properly cancel the seemingly 'stuck' Disk Import task, I'm now thinking I'll just reinstall FreeNAS 11.1 (and hey, U6 just came out!) on the new system. That would definitely clear whatever process(es) are keeping the old UFS drive "busy".

As I see them, my options then seem to be:
1) With the UFS drive back in the old NAS hardware
either:
a) Set up an rsync connection from the old to the new system ("rsync?"). I still need to read more to know whether I need to configure SSH first, as well as to understand what rsync settings will properly preserve the file permissions on the data and whether it can keep file timestamps.
b) Use a Windows desktop connected via Samba/SMB to both the old and the new NAS systems and copy the 3 shares over using Explorer ("via my desktop")
c) Same as b) but use a Linux desktop [still Samba/SMB though]

2) Pull one of the mirrored boot drives out of the new system and connect the old UFS drive straight to the new system SATA to see if the USB<-->SATA adapter caused the Disk Import hang.

3) Try to figure out how to restore my Crashplan backup from the old NAS system to the new system. Not quite sure even how to go about this since Crashplan moved to the Small Business application so the FreeNAS plugin no longer works. I might be able to use my Windows desktop to do this since I migrated it to the Small Business application, but I'm not sure...hmm.

4) Some other method the wise gurus of this forum might be thinking of? :)
 

pro lamer

Guru
I don't know if it would be a good (safe) idea but are you able to connect and use any of the new drives (or both) in the old rig? Then you would be able to run rsync locally..

Sent from my mobile phone
 
I don't know if it would be a good (safe) idea but are you able to connect and use any of the new drives (or both) in the old rig? Then you would be able to run rsync locally..

I can run the old drive in the new rig. I can't think of any way to run 6x drives in the old rig though.

You've got me thinking I can look up how to manually mount the old drive in the new system, and try to do what I'm thinking "Import Disk" does.

Through a bit of trial and error, I've learned I can clear the "Device busy" issue by killing the lowest-PID rsync process via the shell. They do not restart if I reboot the system. Yet subsequent attempts at "Import Disk" via the GUI still result in the "Device busy" issue from the start, with 2-3 rsync processes that start out looking busy but drop to 0.00% WCPU pretty quickly. Once I kill the rsync process(es), I find I can unmount the drive without using -f:

Code:
top                                            # note the lowest rsync PID
kill <lowest rsync PID>                        # stop the stuck copy
umount /var/run/importcopy/tmpdir/dev/da0p2    # now succeeds without -f

So, perhaps mount the drive, then try a manual rsync command to pick back up?

...thinking:
Code:
# mount the old UFS partition read-only (the mountpoint directories must exist first)
mount -r /dev/da0p2 /var/run/myimport/dev/da0p2
rsync -rltgoDhv --delete-during --inplace --log-file="/mnt/Tank/homes/dave/copy-log.log" -T -x /var/run/myimport/dev/da0p2/ /mnt/Tank/amarula

[since most of the data is Windows origin, rsync options adapted from Spearfoot's script with half an understanding of the man page]

By mounting read-only, what can I hurt, right?
 
I suppose all's well that ends well. I went ahead and tried the manual rsync command above, which basically worked. I've edited the above post, as rsync rejected two of the original options (-c arcfour -o Compression=no) and I found I'd missed a '/' on my origin folder (I was getting a 'nested' copy of the folder til I fixed that). Oh, and I had to create the directory I mounted to (e.g. mkdir -p /var/run/myimport/dev/da0p2) and then carefully delete it after I was done.

So, end result: everything's copied. I don't know why Import Disk failed. I suspect there's still a fragment of a script around that will try to complete its task, but the drive is no longer connected anyway, and the system appears to be running alright (not counting a separate USB UPS issue that was there before and remains til I have time to deal with it).

On to better things. Like where to back it up. And Let's Encrypt. And OwnCloud. And 11.2.
 