Mounting NTFS to copy TO the disk

stevecyb

Dabbler
Joined
Oct 17, 2019
Messages
30
Howdy...I've got my mini-XL, and it's up and running, in a very limited way because two of the drives I bought elsewhere to go in it were DOA (they don't show up at all, and when I put them in a different system...they didn't show up at all).

I thought I'd try to format one of the disks as NTFS, put it into the NAS, and see if I could copy data TO the disk. (The idea is to copy data to the disk, then remove it and keep it offsite as a backup. I've got basically no other way to get large amounts of data out of here as my internet connection is limited to 12 GB/month.)

The NAS GUI asserts that the drive is "imported" even if I do nothing at all. If I go to import disk and provide it /mnt/b as a location, it *seems* to work. I can see the minuscule amount of data I put onto the disk just to be able to check it. But, if I try to copy data TO that location, it QUICKLY fills up. A (supposed) 4TB NTFS partition fills up after less than 2GB.

Apparently, the disk isn't mounting in the normal way; what's happening is "importing" is simply copying data off the disk to wherever I tell it to go. (I could have given it the path to my dataset, and that would have made some sense...except that I'm not interested in copying data FROM the disk.)

My supposed mount point apparently lives on the SATADOM. I discovered this when I tried copying a large (100GB) file to it; it filled up after less than 2GB. And any files I copied there weren't actually on the disk when I mounted it on a different system.

So I poked around a bit and found nothing useful.

Trying to actually mount the drive at the command line:

ntfs-3g /dev/ada7 /mnt/b

spits back "fuse: failed to open fuse device: No such file or directory".

(There is a /dev/ada7 object, but gpart won't show it.)
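For anyone who hits the same fuse error: on FreeBSD-based systems the FUSE kernel module usually has to be loaded before ntfs-3g can do anything. A rough sketch (the module name varies by release, and the partition name ada7s1 is an assumption — an NTFS volume normally lives on a slice/partition, not the bare disk device):

```shell
# Load the FUSE kernel module; it is "fuse" on older FreeBSD
# releases and "fusefs" on newer ones, so try both.
kldload fuse || kldload fusefs

# Confirm the module actually loaded before retrying ntfs-3g.
kldstat | grep -i fuse

# Mount the NTFS partition (assumed here to be ada7s1),
# not the raw disk device.
ntfs-3g /dev/ada7s1 /mnt/b
```

That said, if gpart really shows nothing on ada7, the disk may have no partition table at all, in which case there is no NTFS volume there to mount.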

Any ideas on how to do this?
 

stevecyb

Dabbler
Joined
Oct 17, 2019
Messages
30
...Oh, and I forgot to mention one possible solution. Create a pool only on that drive, and then remove the drive; it's now in ZFS format.

The problem is I had enough trouble finding a Linux version of ZFS that would actually READ the drive that I'm afraid my backups could become useless due to a version incompatibility. It worked this time--but might not work next month.
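For reference, the single-drive-pool approach described above would look roughly like this at the command line (the pool name backup1 and device ada7 are placeholders, not anything from my system):

```shell
# Create a pool on the spare drive. This destroys whatever
# is currently on it.
zpool create backup1 /dev/ada7

# ...copy data onto /mnt/backup1 (or wherever it mounts)...

# Cleanly export before pulling the drive; export flushes all
# pending writes and marks the pool safe to move to another box.
zpool export backup1
```

Exporting before removal matters: a pool that was yanked without export has to be force-imported later.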
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hey Steve,

I would not write to NTFS from anything but Windows. That filesystem is finicky, and the risk of corruption is higher than average when writing to it through third-party drivers on other operating systems. Some drivers still refuse to write to it at all.

What is happening here is that FreeNAS fills a RAM buffer instead of actually writing to the drive. NTFS is read-only in many environments. That is why nothing actually ends up on the disk.

Creating a pool with a single-drive vDev would give you the kind of autonomous drive you are looking for, with all the inconveniences that come with it. Should that drive fail, you lose everything. Should some bits flip here and there, ZFS will detect it but will not be able to fix it. For that, you would need to set copies=2, writing everything twice to the drive.

Be sure to define your needs and design your solution before starting to build it.

Have fun designing your system,
 

stevecyb

Dabbler
Joined
Oct 17, 2019
Messages
30
Hey Steve,
Creating a pool with a single-drive vDev would give you the kind of autonomous drive you are looking for, with all the inconveniences that come with it. Should that drive fail, you lose everything. Should some bits flip here and there, ZFS will detect it but will not be able to fix it. For that, you would need to set copies=2, writing everything twice to the drive.

Thanks for the response. Read only? Yeah that makes sense, it's consistent with what I saw.

I suppose I could try the same thing with the Unix filesystems (ext2, etc.). I have a Linux box that could read those. I don't know if that will introduce other issues, though.

In the absence of that, a one-drive ZFS stripe appears to be what I will have to do. Yes, it's zero-redundancy, but it's going to be part of a rotating backup, and this is a home system--losing the most recent backup won't be too huge a problem. It's to guard against the entire NAS going belly up (the main pool is a five-drive RAIDZ2, so it should be fairly solid).

What I was doing, by the way, was simply cp -R. I'm not sure how to access the copies=2 option you mentioned.

The problem is, if the NAS goes belly up, my chances of not being able to read my backup are significant. Using one ZFS-on-Linux utility, the pool wouldn't mount at all due to incompatible versions (and that was the latest version of the utility). Using the other, it mounted, but complained that some features weren't supported.

ZFS is wonderful, but support for it is apparently iffy or lacking outside of FreeNAS.

The good news is, copying from one pool to another (on the command line) is VERY fast, so backups will be easy.
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hi again,

Indeed, ZFS is a very specialized filesystem. In case you need to re-access your backup after an incident, you can also do it from a freshly installed FreeNAS. Just re-install FreeNAS on a computer, plug in that disk, import your pool, and voilà, your data will be there (if the drive is still working properly, of course...).
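The recovery path sketched above boils down to a couple of commands once the fresh install is up (pool name backup1 is a placeholder):

```shell
# List pools that are attached and available for import.
zpool import

# Import the backup pool by name.
zpool import backup1

# If the pool was never cleanly exported (say, the old NAS
# died with the drive in it), force the import with -f.
zpool import -f backup1
```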

About copies=2: this is an advanced property that can be configured at the pool level or the dataset level.
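Concretely, setting it would look something like this (backup1 and backup1/data are placeholder names):

```shell
# Set copies=2 on an existing dataset. Note: this only applies
# to data written AFTER the property is set; existing blocks
# are not rewritten.
zfs set copies=2 backup1/data

# Or set it at pool creation with -O, so the root dataset (and,
# by inheritance, its children) carry it from the start.
zpool create -O copies=2 backup1 /dev/ada7
```

Keep in mind it halves usable capacity for the data it covers, and it still cannot save you from a whole-drive failure.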
 

stevecyb

Dabbler
Joined
Oct 17, 2019
Messages
30
Thanks again!

I found a lot more info here: https://www.ixsystems.com/community/threads/backing-up-data-from-freenas.42280/post-273827 in several posts by Arwen; she even addressed the ZFS incompatibility issue.

As an aside, I might not have the luxury of installing ZFS on a machine I own, should the NAS go belly up (or the house burn down, which alas is more likely!). As another aside, I'll definitely have to rethink things if I exceed 4TB of data (which I can certainly do on a 12TB pool). I don't think I even have 1TB yet so I probably won't have to worry about it until late next month.:D
 