Migrating from Mirrored vdevs to RAID-Z2

Status
Not open for further replies.

doodlebob

Dabbler
Joined
Nov 4, 2015
Messages
11
Hi all,

I have been looking through many forum posts on this topic but want clarity (peace of mind!) before I begin (measure twice, cut once).

I run a Dell T20 with 4x4TB WD Reds as two mirrored vdevs. The motherboard only supports four SATA drives. Nearing capacity exhaustion, I picked up two additional 4TB Reds and an LSI 9211-8i flashed to IT mode (P20). I want to move my current setup to a 6x4TB RAID-Z2.

I am borrowing two additional 4TB Reds from a friend in order to create a second (striped) pool with enough capacity. This will allow me to snapshot and zfs send my ~5.6TB to it, destroy the mirrored vdevs, create the RAID-Z2 pool, and zfs send the data back. So the data will move twice.
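In outline, here is what I'm planning at the CLI (pool and snapshot names below are just placeholders):

Code:
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -vF temp_tank
# destroy tank, create the 6-drive RAID-Z2 pool as tank, then reverse:
zfs snapshot -r temp_tank@return
zfs send -R temp_tank@return | zfs receive -vF tank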

Does this process sound correct? I am using this thread as my guide. I am wondering if I should go through the whole cycle of migrating the data, renaming, and re-importing the pool before destroying my mirrored vdevs, just to be sure, and then complete the process in reverse. Or is that overkill?

Also, using the raid calculator from here, I notice that 2x4TB in a stripe may only yield ~5.7TB of storage. If I migrate 5.6TB to it, I will cross the 80% threshold, almost maxed out! Is this going to be an issue if it's merely for the transfer?

Thanks
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
I am wondering if I should go through the whole cycle of migrating the data, renaming, and re-importing the pool before destroying my mirrored vdevs, just to be sure, and then complete the process in reverse. Or is that overkill?
I wouldn't do the import/export twice. The only reason to do that is so FreeNAS understands what is in the pool. I would do the import once your data is sitting on the RAID-Z2 pool.
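If you do that step at the CLI, a minimal sketch (pool name assumed) is:

Code:
zpool export temp_tank
zpool import temp_tank

though on FreeNAS you would normally let the GUI auto-import handle it, so the middleware knows about the pool.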

Is this going to be an issue if it's merely for the transfer?
As long as you don't completely fill the pool, it should be ok to get very close to full. The issue with a very full pool is that performance really starts to suffer. If you completely fill the pool, it is likely that you will be in a world of hurt.
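If you want to keep an eye on how full the pool is getting during the transfer, something along these lines works (pool name assumed):

Code:
zpool list -o name,size,alloc,free,cap temp_tank
zfs list -o name,used,avail temp_tank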
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
Another option is to "zpool split" half of the current mirrored pool. That is your presumably good set of data. (If you move to a new set of borrowed striped drives, do you really know they are in good condition?)

Then destroy the remaining (unsplit) pool and combine those two drives with the four others (your two new Reds and the two borrowed ones) in a new six-drive RAID-Z2 pool. Copy over the data, destroy the split pool, and then replace/resilver the split drives in place of your borrowed drives.
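Roughly, with hypothetical pool names:

Code:
zpool split tank tank_split    # detaches one disk from each mirror into a new, exported pool
zpool import tank_split        # import the split copy
zfs list -r tank_split         # sanity-check the datasets before destroying anything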
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Nearing capacity exhaustion ... I want to move my current setup to a 6x4TB RAID-Z2.
Why not make the most of your current setup by replacing one of your mirrors with larger drives? Alternatively, make use of the 2.5" bays to add a 3rd mirror.
 

doodlebob

Dabbler
Joined
Nov 4, 2015
Messages
11
I wouldn't do the import/export twice. The only reason to do that is so FreeNAS understands what is in the pool. I would do the import once your data is sitting on the RAID-Z2 pool.


As long as you don't completely fill the pool, it should be ok to get very close to full. The issue with a very full pool is that performance really starts to suffer. If you completely fill the pool, it is likely that you will be in a world of hurt.

Glad to get a reply from you! I'll be following your guide for doing this in order to bypass any resilvering shenanigans. I have a few questions about the steps in your guide.

For step one: I am using a USB key for the FreeNAS system, so I won't need to back up the system dataset, correct? I'll be sure to back up the system config file just in case.

I'll create a manual recursive ZFS snapshot of the entire pool, then from the CLI:

Code:
zfs send -R tank@snapshot_name | zfs receive -vF temp_tank


Then, using the GUI, detach tank and DESTROY. This should clean the disks? I can then create the RAID-Z2 pool and do the process in reverse?

After the zfs send, is there a way to list the contents of temp_tank to confirm that the data is intact before detaching tank?
 

doodlebob

Dabbler
Joined
Nov 4, 2015
Messages
11
(If you move to a new set of borrowed striped drives, do you really know they are in good condition?)

I got my friend to start building his own FreeNAS system. We both ordered a pair of new 4TB Reds (he'll be doing a mirror) and they are all running through a badblocks burn-in test before being put to use (sketched below). I chose this method to avoid the hassle of splitting and resilvering; it's just a simple copy back and forth.
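For reference, the burn-in each disk gets is along these lines (the device name is just an example, and the badblocks write test is destructive, so only run it on empty disks):

Code:
smartctl -t long /dev/da0       # SMART extended self-test first
badblocks -b 4096 -ws /dev/da0  # destructive four-pattern write/read pass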

Why not make the most of your current setup by replacing one of your mirrors with larger drives? Alternatively, make use of the 2.5" bays to add a 3rd mirror.

I wanted more storage and better redundancy. If I added another mirror, I'd have three drives' worth of usable storage vs. four with RAID-Z2 (with the added safety of surviving any two drive failures). I also transplanted my T20 into a Rosewill R4000 for more drive capacity and better cooling.

6TB drives aren't at my price point yet.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
For step one: I am using a USB key for the FreeNAS system, so I won't need to back up the system dataset, correct? I'll be sure to back up the system config file just in case.
The first step is to make sure that the system dataset isn't on the pool that you are moving. It isn't about making a backup (which is a good idea, but shouldn't be needed).
Then, using the GUI, detach tank and DESTROY. This should clean the disks? I can then create the RAID-Z2 pool and do the process in reverse?
Yes and yes. But please ensure that all your data is copied to the new drives before destroying the original data.
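One way to double-check before destroying anything (pool names assumed) is to compare datasets and snapshots on both pools, and optionally scrub the copy:

Code:
zfs list -r -o name,used,referenced tank temp_tank
zfs list -r -t snapshot temp_tank
zpool scrub temp_tank   # then check 'zpool status temp_tank' for errors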
 

doodlebob

Dabbler
Joined
Nov 4, 2015
Messages
11
The first step is to make sure that the system dataset isn't on the pool that you are moving. It isn't about making a backup (which is a good idea, but shouldn't be needed).

Under System and System Dataset, it turns out that the tank pool is the one hosting it. Following your guide, I just select a different pool? Can I safely use the freenas-boot USB and move it back after the process? What's the best practice for the System Dataset, a dedicated SSD or mirrored USB drives? I am researching this now.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Following your guide, I just select a different pool? Can I safely use the freenas-boot USB and move it back after the process? What's the best practice for the System Dataset, a dedicated SSD or mirrored USB drives?
Yes, just choose the boot device, and things will get reconfigured and moved over. Whatever works for your boot device should be good enough for your system dataset. If you have SSD device(s) for boot, then I would use those. If you boot from USB sticks, I'd use the main pool (unless you have the drives set to spin down).
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
  1. Any issues with that?
  2. Did you upgrade the PSU?
  3. How's the noise level?
 

doodlebob

Dabbler
Joined
Nov 4, 2015
Messages
11
  1. Any issues with that?
  2. Did you upgrade the PSU?
  3. How's the noise level?

  1. Yes, there were a few issues with the move.* Dell uses proprietary connectors for many things (I wish I had reversed the roles of my T20 FreeNAS box and my TS140 ESXi host). * = noted below with bullets
  2. I upgraded the PSU to a Seasonic S12II 430B Bronze for more SATA connectors and fan connectors.
  3. The noise level is a bit higher than stock, as it has 2x80mm fans in the back and 2x120mm in front of the two drive cages. I have replacement 120s on the way for better noise and airflow.
  • The motherboard uses an 8-pin power connector, which required this adapter.
  • The fan connectors are proprietary 5-pin for the CPU and System Fan (92mm in back). I had to purchase this cable and an 80mm PWM fan to replace the 92mm that won't fit. Even though this cable works, and the speed fluctuates with load, running the Dell diagnostic will throw an alarm that the system fan is bad (that fifth pin must be some "secret sauce" additive).
  • The Front I/O and power button pinouts are proprietary and a guarded secret. They are also a special size; standard I/O pin plugs won't fit without breaking the plastic around the pins.
  • If you don't have the Front I/O or power button connected (with all 5 pins; I tried just the two), it will hang on POST with a warning that requires F1 to advance to boot. After much research, I admitted defeat, pulled the Front I/O and power button, and installed them in the Rosewill. I removed the bottom 3.5" bay and have the power button just sitting there and the Front I/O tucked inside (never to be used). Pretty? No. Works? Yep!
All in all it was a pain, but I feel that it works much better. It's sturdy, holds more drives, looks sexy, and most importantly keeps the drives cooler. For me, the lower drive cage in the T20 was always around 43-45°C while the top was 38-39°C. Current temps are ~30-33°C, and with the drives running badblocks for 24 hours, ~35-37°C! I mulled over jury-rigging a fan inside the case à la this picture, but I'd rather have a new case for the future!
 
Last edited:

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Good info, thanks.
the lower drive cage in the T20 was always around 43-45°C while the top was 38-39°C
Yes, the drives in the bottom cage are always a few degrees warmer than those in the upper cage. For people sticking with their original T20 case, replacing the vented expansion card blanking plates with solid plates and covering the side vent improves drive cooling.
 

doodlebob

Dabbler
Joined
Nov 4, 2015
Messages
11
Posting my follow-up to this thread. Following @depasseg's guidance, the process was actually quite easy.
  • After thoroughly testing the new HDDs with the burn-in described above, I created a two-disk striped pool, temp_tank
  • Moved the System Dataset to the freenas-boot USB for temporary storage. Created a backup of the config (since once the pool is destroyed, all its settings are gone)
  • Created a manual snapshot, recursive, of the pool tank
  • Code:
    zfs send -R tank@manualsnapshotname | zfs receive -vF temp_tank
    (note: use the GUI shell or tmux; you can't close an SSH session without interrupting the transfer)
  • It took about 24h to send the 5.5TB pool over
  • Detached tank and selected Destroy
  • Created the RAID-Z2 pool with the same name, tank
  • Reversed the process (sketched below)
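The reverse leg was the same idea in the other direction, roughly (snapshot name is a placeholder):

Code:
zfs snapshot -r temp_tank@return
zfs send -R temp_tank@return | zfs receive -vF tank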
 
Last edited:

doodlebob

Dabbler
Joined
Nov 4, 2015
Messages
11
And, someone should point out, without pictures it didn't happen.
;)

This is quite true! I have attached some pictures.

Some information about them.
  • Front: I removed the included Rosewill front I/O and the bottom 3.5" bay so I can reach in and press the power button
  • Powerbutton: A picture of my ghetto-rigged power button
  • FrontIO: The Dell T20 stock Front I/O, tucked inside the case
  • 8PinCable: A shot of the cable that I used to go from 24-pin to the Dell 8-pin; it has some form of shrink-wrapped PCB in the middle.
  • Inside: My lazy, unmanaged cable mess. Note the Velcro on the support bar; that holds the T20's ambient thermal sensor!
I also forgot to note that I did not bring over the chassis intrusion switch, so I disabled it in the BIOS to squelch the alarm. I did move the ambient thermal sensor, though, and have it wrapped up, as it will also throw an alarm if missing.

If you have any additional questions or things I should add, let me know. I am trying to be quite thorough in these posts in the hope that some poor soul may be able to use this information for their own transplant. I can't tell you how many Dell forum threads from 3-5 years back I went through during this process where my question was asked, only to find a post four pages later from the OP saying "Thanks, I got it..." with no explanation.

Story of my life.
 

Attachments

  • Front.jpg (169.3 KB)
  • Powerbutton.jpg (201.1 KB)
  • FrontIO.jpg (172.3 KB)
  • 8PinCable.jpg (111.8 KB)
  • Inside.jpg (232.4 KB)

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Thanks for the pics. Doesn't seem particularly messy to me.

Can I assume that if you were starting over, planning to build a 6-disk pool in a suitable case, you wouldn't start with a T20?
 

doodlebob

Dabbler
Joined
Nov 4, 2015
Messages
11
Can I assume that if you were starting over, planning to build a 6-disk pool in a suitable case, you wouldn't start with a T20?

Short answer: the T20 does not support six drives, whether in the drive bays, the motherboard SATA ports, or the PSU power plugs. On its own, it's not a suitable choice at all.

Still, it worked excellently for me and I personally would do it again. I purchased my T20 during a Dell holiday sale for $139. After all my modifications, I have still come out ahead of any whitebox or consumer NAS. A Supermicro board alone costs more than the entire system I purchased.
 

doodlebob

Dabbler
Joined
Nov 4, 2015
Messages
11
No, I did get it with the G3220. I upgraded to a Xeon E3-1246 v3 with HT for my TS140 ESXi host and swapped it in.
 