Possible to have one pool off while another on?

Status
Not open for further replies.

patrick sullivan

Contributor
Joined
Jun 8, 2014
Messages
117
Hello

I currently have 10 disks running in one pool (for about 4 years now), and my JBOD enclosure holds 20 disks. I was thinking about adding a new pool (another 10 disks) as a backup to the first pool, but not having it running 24/7 like the first one. Basically: configure it, transfer files, shut those drives down, and maybe fire it up every few months to transfer new content. Is this possible? Is it a good idea? I realize off-site storage is preferred, but I have 30 TB of media......

Cheers....
 
Last edited:

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Yes, it's possible. You can even export the backup pool to allow disk removal.

If it's purely a backup pool, you may be able to get away with a few changes:
  • Use the larger disks available now (they may not have existed when you originally built your FreeNAS).
  • Use fewer disks. That can leave a free disk slot for easier disk replacements.
  • Change the redundancy. If you are using RAID-Z3 with 10 disks, perhaps your backup pool can use RAID-Z2.
Note that you probably want to perform your transfers / backups like this (a command-line sketch of these steps follows below):
  1. Import your pool, which spins up its disks.
  2. Run a long SMART test on each disk. If anything fails, investigate before proceeding.
  3. Run a ZFS scrub. If it reports errors, investigate before proceeding and fix what you can.
  4. Perform the transfer / backup.
  5. Examine the output of zpool status for your backup pool, in case errors were found during the backup.
  6. Export your pool and spin down its disks.
Some might say to run the ZFS scrub after the transfer / backup. My thought is that I would not want to transfer data onto a pool that already has errors. (Some errors can be fixed live thanks to redundancy, so I would want to find them and have the scrub repair them BEFORE I perform the transfer / backup.)
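
Roughly like this from the shell, as a sketch only. "mainpool", "backuppool", the /dev/da device names, and the snapshot names below are placeholders, not your actual names, and for import/export you would normally use the FreeNAS GUI rather than raw zpool commands:

[code]
# 1. Import the backup pool (this spins up its disks).
zpool import backuppool

# 2. Kick off a long SMART self-test on each member disk, then check the
#    results after it finishes (can take hours on large disks).
smartctl -t long /dev/da10
smartctl -a /dev/da10          # repeat per disk; check the self-test log

# 3. Scrub the backup pool and let it finish before transferring.
zpool scrub backuppool
zpool status backuppool        # shows scrub progress and any errors

# 4. Transfer the new content, e.g. with ZFS replication from the main pool.
zfs snapshot -r mainpool@backup-2017-08
zfs send -R mainpool@backup-2017-08 | zfs recv -F backuppool/media

# 5. Check whether errors turned up during the transfer.
zpool status backuppool

# 6. Export the pool so the disks can be spun down / removed.
zpool export backuppool
[/code]

On later runs you'd use an incremental send (zfs send -R -i mainpool@old-snap mainpool@new-snap) so only the new content goes over.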
 

patrick sullivan

Contributor
Joined
Jun 8, 2014
Messages
117
Great information. Thank you. I understand most of it.

Can you explain these further:

1. "You can even export the backup pool to allow disk removal."
2. How would I shut off that particular pool (the newly formed backup)?
 
Last edited:

patrick sullivan

Contributor
Joined
Jun 8, 2014
Messages
117
Well, now I'm not sure what RAID config I originally installed. Here is the shell info:

gptid/e28ec68d-3618-11e4-b29b-0cc47a07c1ab  ONLINE  0  0  0
gptid/e400a246-3618-11e4-b29b-0cc47a07c1ab  ONLINE  0  0  0
gptid/e58cfe6d-3618-11e4-b29b-0cc47a07c1ab  ONLINE  0  0  0
gptid/e715d923-3618-11e4-b29b-0cc47a07c1ab  ONLINE  0  0  0
gptid/e8948549-3618-11e4-b29b-0cc47a07c1ab  ONLINE  0  0  0
gptid/ea11d63b-3618-11e4-b29b-0cc47a07c1ab  ONLINE  0  0  0
gptid/eb9783dc-3618-11e4-b29b-0cc47a07c1ab  ONLINE  0  0  0
gptid/ed11e01f-3618-11e4-b29b-0cc47a07c1ab  ONLINE  0  0  0
gptid/ee89620d-3618-11e4-b29b-0cc47a07c1ab  ONLINE  0  0  0
gptid/f0049e1d-3618-11e4-b29b-0cc47a07c1ab  ONLINE  0  0  0

errors: No known data errors
[root@nastest ~]#
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Hi, the important information has been cut off.

Can you provide the full output of zpool status in [ code ] tags please?
 

patrick sullivan

Contributor
Joined
Jun 8, 2014
Messages
117
Oops... sorry. Not sure about the "in [ code ] tags" part, but here's a copy of the shell:

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0h9m with 0 errors on Sat Jul 29 03:54:54 2017
config:

        NAME                                          STATE     READ WRITE CKSUM
        freenas-boot                                  ONLINE       0     0     0
          gptid/816a6987-a43d-11e4-8d9e-0cc47a07c1ab  ONLINE       0     0     0

errors: No known data errors

  pool: vdev
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0 in 6h21m with 0 errors on Mon Aug 7 10:21:53 2017
config:

        NAME                                            STATE     READ WRITE CKSUM
        vdev                                            ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/e28ec68d-3618-11e4-b29b-0cc47a07c1ab  ONLINE       0     0     0
            gptid/e400a246-3618-11e4-b29b-0cc47a07c1ab  ONLINE       0     0     0
            gptid/e58cfe6d-3618-11e4-b29b-0cc47a07c1ab  ONLINE       0     0     0
            gptid/e715d923-3618-11e4-b29b-0cc47a07c1ab  ONLINE       0     0     0
            gptid/e8948549-3618-11e4-b29b-0cc47a07c1ab  ONLINE       0     0     0
            gptid/ea11d63b-3618-11e4-b29b-0cc47a07c1ab  ONLINE       0     0     0
            gptid/eb9783dc-3618-11e4-b29b-0cc47a07c1ab  ONLINE       0     0     0
            gptid/ed11e01f-3618-11e4-b29b-0cc47a07c1ab  ONLINE       0     0     0
            gptid/ee89620d-3618-11e4-b29b-0cc47a07c1ab  ONLINE       0     0     0
            gptid/f0049e1d-3618-11e4-b29b-0cc47a07c1ab  ONLINE       0     0     0

errors: No known data errors
[root@nastest ~]#
 
Last edited:

patrick sullivan

Contributor
Joined
Jun 8, 2014
Messages
117
[screenshot attachment: volume.png]
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
1. "You can even export the backup pool to allow disk removal."
2. How would I shut off that particular pool (newly formed backup)?
I think (but don't quote me) that you can export the pool from the GUI using Storage -> select the volume (aka pool) -> Detach Volume. This causes the volume / pool to temporarily disappear from your FreeNAS. And if you have hot-swap disk bays and software, you can physically remove the disks without risk. (Or cold-remove them if needed.)

This has the advantage that if there is a software bug or a virus, it's unlikely to affect the exported pool.

As for spinning down the disks, it's possible, but I don't have a tested procedure; a rough command-line sketch follows below.

All in all, it's a bit of work each time. But you said you'd only do it every 2 or 3 months, so it's not too much work if you document the procedure carefully.
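
From the command line, the rough equivalent would be something like this sketch. "backuppool" and the da device numbers are placeholders, and camcontrol standby is the FreeBSD mechanism for spinning down a SATA disk (untested here, so treat it as a starting point):

[code]
# Export the backup pool; FreeNAS forgets it until it's re-imported.
zpool export backuppool

# Find your disk device names first:
camcontrol devlist

# Then put each backup-pool disk into standby (example device names):
camcontrol standby da10
camcontrol standby da11
# ...and so on for the rest of the backup pool's disks.
[/code]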
Well, now I'm not sure what RAID config I originally installed.
...
Based on your follow-up post, it's RAID-Z2. So you can lose 2 disks without data loss. (You can even survive additional bad blocks, as long as each one is still covered by redundancy on another disk.)
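
As a rough capacity sanity check (the 4 TB figure is only an example, since your disk size wasn't posted): a 10-disk RAID-Z2 gives you about 10 - 2 = 8 disks' worth of usable space, so 4 TB disks would yield roughly 8 x 4 = 32 TB before ZFS overhead, which lines up with holding ~30 TB of media.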

By the way, naming your pool vdev was not a good idea. It's confusing.
 

patrick sullivan

Contributor
Joined
Jun 8, 2014
Messages
117
I think (but don't quote me) that you can export the pool from the GUI using Storage -> select the volume (aka pool) -> Detach Volume. ...
Agreed. Thank you for the suggestion. This could definitely work! :)

Based on your follow-up post, it's RAID-Z2. So you can lose 2 disks without data loss. (You can even survive additional bad blocks, as long as each one is still covered by redundancy on another disk.)
Yeah, I wanted to go Z3, but this server is in my basement (close by), so I decided to roll the dice.

By the way, naming your pool vdev was not a good idea. It's confusing.
Agreed. Not sure what I was thinking..... :)
 
Last edited: