Creating new top-level dataset in pool without copying all data?

Status
Not open for further replies.

M1lk4h0l1c

Cadet
Joined
Nov 11, 2018
Messages
2
Hey there,

I have got a problem that I don't really know how to fix and I could not find anything on the Internet that exactly fits my case. My situation is as follows:

I have a FreeNAS server with a ZFS pool (4 x 8TB, RAID-Z1) that has the default top-level dataset with the same name as the volume. As this is my first FreeNAS server (and it has run flawlessly over the last couple of months), I don't have much experience with FreeNAS or ZFS specifically.
As a result I just dumped all of my data on the top-level dataset without thinking about the implications and consequences this might have.
Now I want to create a place to store my Time Machine backups, and because Apple does not allow setting a size limit, I have to rely on a separate dataset with a quota set. However, I don't want the new dataset as a child of my data-filled dataset; I want it to sit next to it. As far as I can tell, though, it isn't possible to have more than one top-level dataset, right?

Now, I have this simple structure:
Code:
storage (volume)
|---> storage (dataset; containing all my files)


But I want this structure instead:
Code:
storage (volume)
|---> storage (dataset)
		|---> general (dataset; containing all my files)
		|---> timemachine (dataset; with quota)
		|---> ...


I know it would be possible to just create new datasets within the existing top-level dataset and copy all of the data, but this takes a lot of time and puts quite some stress onto the HDDs.


So, is there a way to create a new top-level dataset and move the existing one to that new one as a child without having to copy all of the data?

Thanks in advance...
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
I can't say for sure (I'm too lazy to test), but using mv instead of cp, ZFS MAY be able to just update the metadata of the blocks in question. This assumes that all dataset properties are the same. If this doesn't work, you won't make the same mistake again ;)
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Bugger. I figured that might be the case. I also thought it would be worth testing if the data needs to get moved anyway.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
You can start with;
Code:
storage (volume)
|---> storage (dataset; containing all my files)
		|---> timemachine (dataset; with quota)
		|---> ...

Then as time permits, re-organize the data.
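In concrete terms, that starting point is one or two zfs commands; a minimal sketch, assuming the pool is named storage as in the thread and picking a 1T quota as a placeholder (use whatever limit suits your backup size):

```shell
# Sketch only -- the dataset path and the 1T quota are assumptions.
# Create a Time Machine dataset under the existing top-level dataset,
# with a quota so backups can't fill the whole pool:
zfs create -o quota=1T storage/storage/timemachine

# Confirm the quota was applied:
zfs get quota storage/storage/timemachine
```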

@garm is correct. When moving files to another dataset, Unix treats it as a different file system, so it does a copy and then deletes the source. The so-called quick move only works within the same file system (i.e., within one ZFS dataset).
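The same-file-system case is easy to see with plain Unix tools, nothing ZFS-specific: within one file system, mv is just a rename, so the file keeps its inode and no data blocks are copied. A small sketch:

```shell
#!/bin/sh
# Within a single file system, mv is a metadata-only rename:
# the file keeps its inode number, so no data is rewritten.
tmp=$(mktemp -d)
echo "payload" > "$tmp/old"
before=$(ls -i "$tmp/old" | awk '{print $1}')
mv "$tmp/old" "$tmp/new"
after=$(ls -i "$tmp/new" | awk '{print $1}')
if [ "$before" = "$after" ]; then
  echo "same inode: rename only, no data copied"
fi
rm -rf "$tmp"
```

Across file-system (or dataset) boundaries that rename fails, and mv silently falls back to copy-and-delete, which is exactly the behavior discussed here.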
 

M1lk4h0l1c

Cadet
Joined
Nov 11, 2018
Messages
2
Damn it. Then I'll have to bite the bullet and copy all my files over to a nested dataset and clean up the top-level one :(
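For anyone following along, the copy itself can be scripted; a hedged sketch, again assuming the thread's pool name and a hypothetical source directory, since mv across a dataset boundary degrades to copy-and-delete anyway:

```shell
# Sketch only -- dataset and directory names are assumptions.
# Create the nested dataset that will hold the existing files:
zfs create storage/storage/general

# rsync is resumable and preserves permissions/timestamps, which matters
# for a multi-TB move; --remove-source-files deletes each file once it has
# been copied (empty source directories must be cleaned up afterwards):
rsync -a --remove-source-files /mnt/storage/media/ /mnt/storage/general/media/
```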
Thanks anyways. Much appreciated!
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
puts quite some stress onto the HDDs.
Where does the idea come from that reading and writing data puts some sort of unacceptable or abnormal stress on hard disks?
 

garm

Wizard
Joined
Aug 19, 2017
Messages
1,556
Where does the idea come from that reading and writing data puts some sort of unacceptable or abnormal stress on hard disks?
What other kind of stress is there for a hard drive? Spinning the platters doesn't really subject the disks to any forces except when you stop or spin them up. Moving the head, on the other hand, is done via an electromagnetic coil, and the arm itself sits on a bearing (fluid or air) that keeps the head a few nanometers from the platters. There is a magic number of actuations that arm will survive before you get catastrophic failure in one of several ways, from touching the platters to losing the precision of the arm's position. SSDs have no moving parts but suffer from other fatigue problems; spinning disks, though, are mechanical and suffer mechanical stress when used.

Personally, I have SSDs for software logs, cache, temporary storage, and anything else that trickles I/Os. I use my IronWolf and Red drives for storing the actual data I built the NAS for. They write only new data and read it on demand. Anything else my system does (serving up web services, application logs, the configuration database) is done on SSDs that don't contain any critical data.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
There is a magic number of actuations that arm will survive before you get catastrophic failure in one of several ways, from touching the platters to losing the precision of the arm's position.
Granted, though that magic number will vary wildly among a population of disks, and it may or may not be the ultimate cause of failure for the disk (any number of other things could kill it instead). Yes, moving the arm does put wear on the mechanism (so does spinning the disks at all--there is some, however slight, wear on the bearings as they spin). My objection (and, to a degree, confusion) is to the mindset that reading and writing puts unacceptable or abnormal stress on the disk. What the hell do you have it there for, if not to write to it and read from it? It's not a very good paperweight if it's mounted inside your server chassis, and it's awfully expensive for a space heater.
 