Safe way to import a degraded RAID-Z1 pool in read-only?


victorhooi

Contributor
Joined
Mar 16, 2012
Messages
184
Hi,

tl;dr - Can you import a RAID-Z1 pool in a degraded state in read-only mode? Is this safe?

I'm migrating a large amount (around 20 TB) of data from one FreeNAS machine to another.

Both machines are running FreeNAS 11.

The old machine is RAID-Z1, with 4 x 8TB drives.

The new machine is RAID-Z1, with 6 x 8TB drives.

I'm copying over a Gigabit network - I tried both rsync (over SSH) and zfs send/receive, and both seem to cap out around 25 MB/s. rsync over SSH pegs the CPU at 100%, even with encryption set to none. For zfs send/receive, each send/receive also seems to sit at 100% CPU.

However, I then thought of simply plugging in the old drives via SATA to the new machine.

Unfortunately, I'm limited on the number of SATA power/data ports I have.

My question is - can I take 3 of the disks from the RAID-Z1 pool - connect these to the new machine, then import these read-only in a degraded state?

What is the safest way to do this?

Cheers
Victor
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
With drives that large you should be using RAIDz2.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
My question is - can I take 3 of the disks from the RAID-Z1 pool - connect these to the new machine, then import these read-only in a degraded state?

But yes, I think you can do that. Of course, you can also import them very easily in a non-read-only fashion... and it's even fairly safe: when you do restore the array, the extra disk will come back and provide redundancy again. Any new writes will be resilvered onto the old disk.

how to mount read-only... I'm sure it can be done... just don't know the incantation :)
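For reference, that incantation would be a read-only import at the command line. A minimal sketch, assuming the pool is named "datastore-old" (running zpool import with no arguments first lists the pools available for import and their numeric IDs):

Code:
# List importable pools, then import the degraded pool read-only
zpool import
zpool import -o readonly=on datastore-old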
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Just like when a drive fails and the pool is still available, you should be able to just import the pool. If it doesn't work, you might need to use -f to force it.

 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
can I take 3 of the disks from the RAID-Z1 pool - connect these to the new machine, then import these read-only in a degraded state?
Why read-only? You should be able to import them directly using the GUI.
 

victorhooi

Contributor
Joined
Mar 16, 2012
Messages
184
OK - I was able to export the ZFS pool from the old system (via the GUI, using "Detach"; it didn't work via the command line) and re-import it read-only on the new system:

Code:
zpool import -o readonly=on <id_number> datastore-old
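To confirm the import really is read-only, zpool can report the property directly (pool name as above):

Code:
zpool get readonly datastore-old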


I'm copying files over now with rsync - however, the weird thing is, even locally, performance is capped at around 65 MB/s, and I have no idea why. That seems awfully slow for copying between local disks.

Is this related to the 4-drive RAID-Z1 being in a degraded state? Or are there other things I should be checking?
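One way to check is to watch per-device throughput while the copy runs; a minimal sketch using zpool iostat (pool names as above):

Code:
# Per-vdev/per-disk read and write statistics, refreshed every 5 seconds
zpool iostat -v datastore-old datastore 5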
 

victorhooi

Contributor
Joined
Mar 16, 2012
Messages
184
I'm using
Code:
rsync -av --info=progress2 <source> <destination>
, if that helps at all.
 

victorhooi

Contributor
Joined
Mar 16, 2012
Messages
184
Also, I tried zfs send/recv, piping it through pv to get stats:

Code:
freenas-naulty-place# zfs send datastore-old/dataset1@migrate | pv | zfs recv -F datastore/dataset1@migrate
16.6GiB 0:00:36 [ 471MiB/s] 


So I get around 65 MB/s here as well - is there some fundamental bottleneck here I'm missing?
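One way to narrow it down is to time the send side on its own, discarding the stream so the receiving pool's writes are taken out of the picture; a sketch reusing the snapshot from above:

Code:
# Measure raw zfs send throughput with no receive in the pipeline
zfs send datastore-old/dataset1@migrate | pv > /dev/null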
 

styno

Patron
Joined
Apr 11, 2016
Messages
466
So I get around 65 MB/s here as well - is there some fundamental bottleneck here I'm missing?
There are quite a lot of possible places where a bottleneck could occur. If you are not seeing anything suspicious on the FreeNAS boxes themselves, your focus should be on the network. How is that set up - direct connection, switches, cabling, ...?
IIRC the theoretical max speed of Gigabit is about 125 MB/s. The last time I was moving my pools around, I temporarily added SATA ports to the machine to make sure the network was not slowing anything down during the replication.
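If the network is the suspect, raw TCP throughput between the two boxes can be measured with iperf3 (the hostname below is a placeholder):

Code:
# On the destination machine:
iperf3 -s
# On the source machine, pointing at the destination's address:
iperf3 -c new-freenas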
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
your focus should be on the network.
Why should his focus be on the network for replication between two pools on the same machine?
 

styno

Patron
Joined
Apr 11, 2016
Messages
466
Why should his focus be on the network for replication between two pools on the same machine?
Ugh I missed a post and was still working with the parameters from the first post. Excuse me while I am trying to locate the nearest caffeine dispenser...
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I'm copying over a Gigabit network - I tried with both rsync (over SSH) and zfs send/receive, and it seems to cap-out around 25 MB/s - rsync/SSH pegs the CPU at 100%
You didn't tell us what hardware you are using...
So I get around 65 MB/s here as well - is there some fundamental bottleneck here I'm missing?
This makes it sound like you have a significantly under-powered server. If you list your hardware, in detail, we may be able to determine the shortcoming.
 

victorhooi

Contributor
Joined
Mar 16, 2012
Messages
184
Of course - sorry, you're right, I should have included specs once this became a performance issue.

It's just that 65 MB/s seemed so low for local copying from one SATA drive to another that I assumed I was missing something stupidly fundamental.

Specifications:
  • SuperMicro A2SDi-8C+-HLN4F
  • Source HDDs - 3 x 8TB Seagate SMR drives (RAID-Z1, 3 of the 4 drives)
  • Destination HDDs - 6 x 8TB WD drives (non-SMR)
  • 64 GB DDR4 ECC RAM
I've just checked again, and it's copied around 1.3 TB... it would be great to find a way to speed it up.
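One quick sanity check is to benchmark each drive's raw transfer rate outside of ZFS with FreeBSD's diskinfo (the device name below is a placeholder - list yours with camcontrol devlist):

Code:
# Naive seek and transfer benchmark of a single drive (reads only)
diskinfo -t /dev/ada0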
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
While you're calculating parity for your data, you can't expect full speed.

The theory says that a RAIDZ1 pool should have a maximum external speed (scrubs are different, as they're internal to the pool) of one single drive (perhaps that's the number I see quoted earlier in the thread as 125 MB/s... I guess it's a 7200 RPM drive).
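One way to see which drive is the ceiling in practice is to watch per-disk utilization while the copy runs; if a single member sits near 100% busy while the others idle, that disk is the limit. A sketch using FreeBSD's gstat:

Code:
# Live per-disk busy percentage and throughput for physical providers; press q to quit
gstat -p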
 

victorhooi

Contributor
Joined
Mar 16, 2012
Messages
184
Just to clarify - you're saying that because it's in a degraded state, it will be limited to the speed of one physical drive?

It's a Seagate 8TB SMR drive - and according to this review:

http://www.storagereview.com/seagate_archive_hdd_review_8tb

Our first consumer test measures 2MB sequential performance. In this benchmark, the Seagate Archive 8TB posted read and write speeds of 188.02MB/s and 187.21MB/s, respectively.

When moving to our 2MB random transfer performance test, the Seagate Archive 8TB recorded 72.17MB/s read and 109.08MB/s write.


But if I load up all four drives in the array, the speed should return to normal? (Whatever normal is for a 4-way RAID-Z1 array.)
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
RAIDZ arrays normally have a sequential speed of about n-p times a single drive.

They have the random I/O performance of a single disk.

I'm not sure how being degraded affects this.

And I'm not sure whether a ZFS send is more sequential or random. Perhaps a combination of the two.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Stupid question - but what are n and p in this context please? (Is "n" the number of drives? But then what is "p"?)

p is the number of parity drives (n is the total number of drives):

RAIDZ1 = 1
RAIDZ2 = 2
RAIDZ3 = 3

Interestingly, mirrors have sequential read of n, and sequential write of 1
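As a rough worked example (taking the ~188 MB/s sequential figure quoted from the review above as an assumption about these drives): a healthy 4-drive RAID-Z1 has n - p = 4 - 1 = 3 data drives, so the theoretical sequential ceiling would be roughly 3 x 188 ≈ 560 MB/s, while random I/O would still hover near a single drive's speed.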
 

victorhooi

Contributor
Joined
Mar 16, 2012
Messages
184
Ok, I was able to go out and buy a SATA power-splitter, and now I've connected all four drives from the source zpool ("datastore-old").

Code:
  pool: datastore
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: resilvered 2.86M in 0 days 00:00:01 with 0 errors on Tue May  8 18:46:27 2018
config:

        NAME                                            STATE     READ WRITE CKSUM
        datastore                                       ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/7d5ea347-5167-11e8-8b2c-ac1f6b45da8c  ONLINE       0     0     0
            gptid/7ed9558c-5167-11e8-8b2c-ac1f6b45da8c  ONLINE       0     0    69
            gptid/805edb3c-5167-11e8-8b2c-ac1f6b45da8c  ONLINE       0     0     0
            gptid/81d13d26-5167-11e8-8b2c-ac1f6b45da8c  ONLINE       0     0     0
            gptid/835c2fc4-5167-11e8-8b2c-ac1f6b45da8c  ONLINE       0     0     0
            gptid/8502c6de-5167-11e8-8b2c-ac1f6b45da8c  ONLINE       0     0     0

errors: No known data errors

  pool: datastore-old
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0 in 0 days 14:52:03 with 0 errors on Sun Apr 22 14:52:04 2018
config:

        NAME                                            STATE     READ WRITE CKSUM
        datastore-old                                   ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/1b019b58-5db5-11e6-92fe-10604b92dc14  ONLINE       0     0     0
            gptid/d2acd45b-6ff0-11e7-9812-10604b92dc14  ONLINE       0     0     0
            gptid/1b586c61-5db5-11e6-92fe-10604b92dc14  ONLINE       0     0     0
            gptid/1c918aec-5db5-11e6-92fe-10604b92dc14  ONLINE       0     0     0

errors: No known data errors


I tried both rsync and zfs send/recv again - and the speed still seems to be bottlenecked at 65 MB/s =(.
 