
Error Creating Pool - problem with gpart?

Joined
Jul 13, 2013
Messages
231
Using another system (Windows 10 diskpart "clean" command) on the three new disks I wanted to use in a new pool (along with some old disks from an old pool I destroyed) worked.
 

sretalla

Dedicated Sage
Joined
Jan 1, 2016
Messages
2,451
Using another system (Windows 10 diskpart "clean" command) on the three new disks I wanted to use in a new pool (along with some old disks from an old pool I destroyed) worked.
That won't have cleaned all the blocks that need to be cleaned for ZFS, but happy to hear that it worked for you.
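If you want to see what a Windows-side clean actually leaves behind, the leftover partitioning and any old ZFS labels can be inspected from the FreeNAS shell before wiping anything (a sketch; da1 is just an example device name):

Code:
# Show whatever partition table, if any, survived the Windows-side clean.
gpart show da1
# For disks that previously held a pool, dump any ZFS vdev labels still present;
# FreeNAS normally puts the ZFS data partition at p2.
zdb -l /dev/da1
zdb -l /dev/da1p2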

The dd command seemed to be working on partition 2 of the disk... seems a little strange to me that in wiping a whole disk, you would bother to wipe the partitions first,
 
Joined
Jul 13, 2013
Messages
231
The dd command seemed to be working on partition 2 of the disk... seems a little strange to me that in wiping a whole disk, you would bother to wipe the partitions first,
I was expecting it to be easier, in truth, which means I didn't think in detail about exactly what I had to do in advance, and then I was in reactive mode. The problem is, I primarily run this one ZFS server, at home, and it works well enough that I don't get a lot of practice on the exceptional actions sometimes needed.

I was using some of the disks from 2 old zpools (both mirrors), plus 3 new disks, to form a new zpool (RAIDZ2 6x6TB). The old zpools I destroyed on the way out (after making multiple backups and scrubbing at least one backup of each pool). The new disks were in sealed packaging from Amazon, really new so far as I know. So I expected to be able to just put the new disks into slots in the case, boot, and have them appear in the GUI and be able to create a pool including them. They did appear, but the pool creation errored out. I have verified that it was specifically the three new disks that I had the problems with; none of the old disks caused a problem. (The three new disks were two Toshibas and one generic.)

I'm wondering, would I have had this same problem if I were replacing a failed disk with a new disk? I don't off-hand see why not. I hate to think of the knock-on consequences of that.

I don't really have enough information to write up a bug report (I'm willing to claim "bug" if brand-new disks from two different manufacturers arrive in a state that ZFS won't create a pool on; that's such a basic use case). Is the underlying issue here known? I've blown away the example I had (now trying to get my data restored, which wouldn't be hard except I'm trying to change the arrangement and filesystem breakdown while still restoring all the existing snapshots using zfs send -R ... | zfs receive -d <newlocation>, followed by deleting some of the restored material that I don't want at that location, intending to restore it again elsewhere).
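The restore pipeline mentioned there, with hypothetical dataset and snapshot names purely for illustration, looks something like:

Code:
# 'backup/data@latest' is a placeholder for the backup dataset and snapshot;
# 'newpool' is the destination pool. -R sends the dataset with all of its
# descendants and snapshots; -d recreates the sent dataset paths under the
# destination.
zfs send -R backup/data@latest | zfs receive -d newpool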

I'm assuming that it's probably possible to use FreeBSD tools to beat the disks into submission somehow; maybe deleting all partitions before overwriting the first few MB of the base drive? Maybe I'd have to mess with the MBR or GPT info as well? Part of my problem in knowing what's necessary is that I don't actually know what ZFS on FreeBSD does when it initializes a pool; I started using ZFS on Solaris, back when it was new, and things changed enough that a lot of my early knowledge became invalid, and the GUI automates some things I used to have to know about, but not everything. And that was before anything I'd seen used GPT, and I haven't learned GPT very well either.
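For what it's worth, a minimal sketch of what that might look like from the FreeNAS shell, assuming the target disk is ada5 (a placeholder; double-check the device name before running anything):

Code:
# GEOM may refuse direct writes to the raw disk unless the debug flag is set
# (see the advice further down in this thread); it resets on reboot.
sysctl kern.geom.debugflags=0x10
# Remove any existing MBR/GPT partition table, then zero the first few MB.
gpart destroy -F ada5
dd if=/dev/zero of=/dev/ada5 bs=1m count=4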

(The math on the number of disks there isn't obvious. For anybody bothered, here's exactly what was going on: the old configuration was a 2x6TB mirror and a 3x6TB mirror in different zpools. I detached a disk from each mirror and removed them (due to my bad memory; I should have been splitting the pools. My intention was to remove one disk from each pool with the very latest data and use that to restore from, keeping my backups as backups to that, but I misremembered the commands and detached the disks instead, which wipes them). Anyway, that left 3 disks in the server; I destroyed the two pools on them (one a 2x mirror, one a bare drive now). Then I added 3 new drives to the remaining drives from both pools to create the new pool on, giving me the 6x6TB RAIDZ2 I was intending. That's the number of controller ports in this box, too.)
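In command form, the difference between splitting and detaching is roughly this (a sketch; pool and device names are placeholders):

Code:
# Split one disk out of the mirror into a new standalone pool that still
# carries the data and can be imported later.
zpool split tank tank-split ada3
# Detach the disk from the mirror instead; its contents are no longer usable
# as a pool of their own.
zpool detach tank ada3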
 

etegration

Neophyte
Joined
Nov 22, 2019
Messages
5
Detaching them does not automatically wipe them; you have to select 'Mark as new' to wipe the disks.

Anyways, as they are blank, try running the following:
sysctl kern.geom.debugflags=0x10
This will let you run the next set of commands, as it removes the safeties where FreeNAS wants to protect your disks, so be very careful after this. It is not persistent and will be reset once you reboot the system.


dd if=/dev/zero of=/dev/ada2 bs=1m count=1


This should zero out the beginning of the disk


dd if=/dev/zero of=/dev/ada2 bs=1m oseek=`diskinfo ada2 | awk '{print int($3 / (1024*1024)) - 4;}'`


This should zero out the end of the disk. Replace 'ada2' with any of your other drives, then you can try gpart again.

I recommend rebooting after zeroing out the beginning and end of the disks just to swap that debugflag back off.
Just dropping a note: this is still very relevant and it worked for me. It should be built into the GUI.
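Pulling the quoted steps together, it amounts to roughly the following (a sketch, assuming the target is ada2 as in the example; substitute your own device and triple-check it first):

Code:
# Temporarily disable GEOM's write protection (not persistent across reboots).
sysctl kern.geom.debugflags=0x10
# Zero the first 1 MB of the disk.
dd if=/dev/zero of=/dev/ada2 bs=1m count=1
# Zero roughly the last 4 MB; diskinfo's third field is the media size in bytes.
dd if=/dev/zero of=/dev/ada2 bs=1m oseek=`diskinfo ada2 | awk '{print int($3 / (1024*1024)) - 4;}'`
# Reboot, or set the flag back, so the protection is restored.
sysctl kern.geom.debugflags=0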
 

eliboy

Neophyte
Joined
Jan 25, 2016
Messages
8
Detaching them does not automatically wipe them; you have to select 'Mark as new' to wipe the disks.

Anyways, as they are blank, try running the following:
sysctl kern.geom.debugflags=0x10
This will let you run the next set of commands, as it removes the safeties where FreeNAS wants to protect your disks, so be very careful after this. It is not persistent and will be reset once you reboot the system.


dd if=/dev/zero of=/dev/ada2 bs=1m count=1


This should zero out the beginning of the disk


dd if=/dev/zero of=/dev/ada2 bs=1m oseek=`diskinfo ada2 | awk '{print int($3 / (1024*1024)) - 4;}'`


This should zero out the end of the disk. Replace 'ada2' with any of your other drives, then you can try gpart again.

I recommend rebooting after zeroing out the beginning and end of the disks just to swap that debugflag back off.
This did not work for me; any ideas? I don't have an adapter to connect this HDD to my laptop.
 

eliboy

Neophyte
Joined
Jan 25, 2016
Messages
8
I have just solved this issue on my system.

The HDD giving the error was part of a pool on the same FreeNAS box. I had destroyed the zpool from the UI and never got an error.
I don't know why, but when running
Code:
zpool status
the old zpool was still there.
Code:
zpool destroy tank
solved the issue, and now I am able to create a new zpool.
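In script form, that check-and-destroy sequence looks roughly like this (the pool name tank comes from the post above; substitute your own, and note that destroy is irreversible):

Code:
# A pool "destroyed" from the GUI may still be listed here.
zpool status
# Destroy the stale pool by name; double-check the name first.
zpool destroy tank
# Confirm it is gone before creating the new pool in the GUI.
zpool status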
 

tsf-freenas

Newbie
Joined
Dec 16, 2019
Messages
1
Detaching them does not automatically wipe them; you have to select 'Mark as new' to wipe the disks.

Anyways, as they are blank, try running the following:
sysctl kern.geom.debugflags=0x10
This will let you run the next set of commands, as it removes the safeties where FreeNAS wants to protect your disks, so be very careful after this. It is not persistent and will be reset once you reboot the system.


dd if=/dev/zero of=/dev/ada2 bs=1m count=1


This should zero out the beginning of the disk


dd if=/dev/zero of=/dev/ada2 bs=1m oseek=`diskinfo ada2 | awk '{print int($3 / (1024*1024)) - 4;}'`


This should zero out the end of the disk. Replace 'ada2' with any of your other drives, then you can try gpart again.

I recommend rebooting after zeroing out the beginning and end of the disks just to swap that debugflag back off.
This was literally a disk saver. I had a drive from a BSD-based NAS system and couldn't include it in a pool in FreeNAS or wipe it. I kept getting the same wipe error via the GUI as noted by @stupes.

Now I have use of that disk and was able to create a new ZFS pool.

Thanks for this.
 

Wildfire

Newbie
Joined
Jan 17, 2020
Messages
1
Playing around with a test system using some old disks to get as familiar as I can with FreeNAS before using it for real, I came across this problem of not being able to create a pool, just like the first post in this thread.

The disks I was using were 4 disks from an old Windows RAID 5 array; they had been cleaned in Windows beforehand using the diskpart clean command. I understand from this thread that this does not clean them completely in FreeNAS terms.

So I tried running the GUI wipe on the 4 disks; 3 worked fine, but 1 returned this:
Command '('dd', 'if=/dev/zero', 'of=/dev/da1', 'bs=1m', 'count=32')' returned non-zero exit status 1

So I tried running
sysctl kern.geom.debugflags=0x10
in the shell and then the GUI wipe command again, and this time it worked. I didn't need to use the
dd if=/dev/zero of=/dev/da1 bs=1m count=1
dd if=/dev/zero of=/dev/da1 bs=1m oseek=`diskinfo da1 | awk '{print int($3 / (1024*1024)) - 4;}'`
from the shell, so this is maybe a safer way to avoid hitting the wrong disk.

I then rebooted the system and was now able to create a pool on the disks without any problem.
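For what it's worth, the reboot is enough to put the safety back, since the debug flag is not a persistent setting; it can be confirmed from the shell afterwards (a quick check, not required):

Code:
# Should report 0 after a reboot; the 0x10 value set earlier is lost on
# restart, so GEOM's write protection is back in force.
sysctl kern.geom.debugflags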

My question is: why was one of the 4 disks different from the rest, since they had all only ever been used in the same array from new?
 

robinmorgan

Newbie
Joined
Jan 8, 2020
Messages
1
Hello all,

I too am having the same issue. I have 48 disks, all previously used and wiped. I have followed all the suggestions above, but with no luck.

Could anyone help?
 

Niel Archer

Junior Member
Joined
Jun 7, 2014
Messages
16
I'm also having this issue trying to replace a bad drive. I get the following error trying to wipe the 'new' drive:

[EFAULT] Command gpart create -s gpt /dev/da19 failed (code 1): gpart: Device not configured

Trying the dd command from a shell gives the same error: 'dd: /dev/da19: Device not configured'.
 