Basic system state:
Whitebox i3 system running FreeNAS-11.3-U3.2
System had 3 x WD Red 4TB drives in a RAIDZ1 pool (referred to below as "WD_RAID"); one drive is completely dead and a second is starting to fail. This pool has since been exported, with the pool destroyed in the process. The failing drive and the good drive are visible as unassigned disks in FreeNAS.
1 x WD Red on its own (basically a single-disk stripe/RAID0)
4 x new 4TB Seagate Barracuda in RAIDZ1 Pool (Referred to below as "SEA_RAID")
1 x SSD hosting Jails
Due to the failures in WD_RAID, I have successfully migrated multiple datasets from that pool to the SEA_RAID pool using snapshots and zfs send/receive over an SSH session (using PuTTY). Total data transferred successfully was approx 6.6TB, leaving approx 3.5TB free on SEA_RAID afterwards.
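For reference, the transfers were done with commands along these lines (the dataset and snapshot names here are illustrative placeholders, not my exact ones):

```shell
# Take a recursive snapshot of the source dataset on the failing pool
# ("WD_RAID/media" and "@migrate" are placeholder names)
zfs snapshot -r WD_RAID/media@migrate

# Replicate the snapshot to the new pool; -v shows progress
# in the PuTTY session while the stream runs
zfs send -Rv WD_RAID/media@migrate | zfs receive -v SEA_RAID/media
```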
I am currently attempting to migrate a dataset (approx 2.2TB of data) from the single WD disk to SEA_RAID, also via a snapshot and zfs send/receive. Every time it sends, it appears to be transferring data (as monitored via the PuTTY session I'm running zfs send/receive in), and under Storage > Pools I can see the respective dataset on the SEA_RAID pool growing. However, if I attempt to browse the dataset contents on SEA_RAID via the GUI console or via PuTTY, the dataset is completely empty (including no hidden directories). I have attempted snapshotting and sending both from the root pool dataset and from the single sub-dataset under it, but in both cases the destination appears to be empty during and after the zfs send/receive.
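One thing I have not fully checked is whether the received dataset is actually mounted. My understanding is that a received dataset can hold data (which would explain the growing space usage) while not being mounted at its mountpoint, which would look exactly like an empty directory. Something like this (dataset name is a placeholder) should show it:

```shell
# Check whether the received dataset is mounted and where
# ("SEA_RAID/media" is a placeholder for the actual dataset)
zfs get mounted,mountpoint SEA_RAID/media

# Compare ZFS space accounting against what the directory shows
zfs list -o name,used,mountpoint -r SEA_RAID

# If "mounted" is "no", mounting it may make the data appear
zfs mount SEA_RAID/media
```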
I'm currently out of ideas (admittedly I am not an SME on FreeNAS/FreeBSD/Linux/Unix systems), so I'm seeking advice on how to get the data to transfer successfully and be visible, without having to move it manually by other means.
My eventual plan is:
The single "good" disk from WD_RAID" is combined with the single WD Red drive (that I'm currenty trying to migrate data) from are combined into a new Mirror Pool, and approx 2.3TB of data currently on SEA_RAID is migrated onto that new Pool.
SEA_RAID is left with approx 3.2TB free after the data is transferred to the new mirror pool.
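If it helps, my rough understanding of that final step is below (device and dataset names are placeholders; I would confirm the actual device identifiers first, and on FreeNAS the pool creation would normally be done via the GUI rather than the shell):

```shell
# Create the new mirror pool from the two remaining WD Reds
# (adaX/adaY are placeholders for the real device names)
zpool create WD_MIRROR mirror /dev/adaX /dev/adaY

# Then migrate the ~2.3TB back from SEA_RAID the same way as before,
# via a recursive snapshot and send/receive
zfs snapshot -r SEA_RAID/somedata@move
zfs send -R SEA_RAID/somedata@move | zfs receive WD_MIRROR/somedata
```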