Error Creating Pool - problem with gpart?

Joined
Jul 13, 2013
Messages
286
Using another system to run the Windows 10 diskpart "clean" command on the three new disks I wanted to use in a new pool (along with some old disks from an old pool I destroyed) worked.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
Using another system to run the Windows 10 diskpart "clean" command on the three new disks I wanted to use in a new pool (along with some old disks from an old pool I destroyed) worked.
That won't have cleaned all the blocks that need to be cleaned for ZFS, but I'm happy to hear that it worked for you.

The dd command seemed to be working on partition 2 of the disk... It seems a little strange to me that, in wiping a whole disk, you would bother to wipe the partitions first.
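For background: ZFS writes copies of its label both near the start and at the very end of each member device, which is why zapping only the front of a disk doesn't always satisfy it. Something along these lines should show and, if needed, clear any leftover labels (the device and partition names here are only examples, not taken from this thread):
Code:
zdb -l /dev/ada1p2                 # list any ZFS labels still present on the old data partition
zpool labelclear -f /dev/ada1p2    # force-clear them if zdb finds any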
 
Joined
Jul 13, 2013
Messages
286
The dd command seemed to be working on partition 2 of the disk... It seems a little strange to me that, in wiping a whole disk, you would bother to wipe the partitions first.

In truth, I was expecting it to be easier, which means I didn't think in detail about exactly what I had to do in advance, and then I was in reactive mode. The problem is, I primarily run this one ZFS server, at home, and it works well enough that I don't get a lot of practice with the exceptional actions sometimes needed.

I was using some of the disks from 2 old zpools (both mirrors), plus 3 new disks, to form a new zpool (RAIDZ2 6x6TB). The old zpools I destroyed on the way out (after making multiple backups and scrubbing at least one backup of each pool). The new disks were in sealed packaging from Amazon, really new so far as I know. So I expected to be able to just put the new disks into slots in the case, boot, and have them appear in the GUI and be able to create a pool including them. They did appear, but the pool creation errored out. I have verified that it was specifically the three new disks that I had the problems with; none of the old disks caused a problem. (The three new disks were two Toshibas and one generic.)

I'm wondering, would I have had this same problem if I were replacing a failed disk with a new disk? I don't off-hand see why not. I hate to think of the knock-on consequences of that.

I don't really have enough information to write up a bug report (though I'm willing to claim "bug" if brand-new disks from two different manufacturers arrive in a state that ZFS won't create a pool on; that's such a basic use case). Is the underlying issue here known? I've blown away the example I had (I'm now trying to get my data restored, which wouldn't be hard except that I'm trying to change the arrangement and filesystem breakdown while still restoring all the existing snapshots, using zfs send -R ... | zfs receive -d <newlocation> followed by deleting some of the restored stuff that I don't want at that location, intending to restore it again elsewhere).

I'm assuming that it's probably possible to use FreeBSD tools to beat the disks into submission somehow; maybe deleting all partitions before overwriting the first few MB of the base drive? Maybe I'd have to mess with the MBR or GPT info as well? Part of my problem in knowing what's necessary is that I don't actually know what ZFS on FreeBSD does when it initializes a pool; I started using ZFS on Solaris, back when it was new, and things changed enough that a lot of my early knowledge became invalid, and the GUI automates some things I used to have to know about, but not everything. And that was before anything I'd seen used GPT, and I haven't learned GPT very well either.
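My best guess at the FreeBSD-tool route, purely as a sketch (ada1 is a placeholder and I haven't verified this is exactly what's needed): drop any partition table, then zero the first few MB of the bare drive.
Code:
gpart destroy -F ada1                          # remove any existing GPT/MBR (just errors out if there is none)
dd if=/dev/zero of=/dev/ada1 bs=1m count=4     # zero the first 4 MB of the raw disk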

(The math on the number of disks there isn't obvious. For anybody bothered, here's exactly what was going on: the old configuration was a 2x6TB mirror and a 3x6TB mirror in different zpools. I detached a disk from each mirror and removed them (due to my bad memory; I should have been splitting the pools. My intention was to remove one disk from each pool with the very latest data and use that to restore from, keeping my backups as backups to that, but I misremembered the commands and detached the disks instead, which wipes them). Anyway, that left 3 disks in the server; I destroyed the two pools on them (one a 2x mirror, one a bare drive by then). Then I added 3 new drives to the remaining drives from both pools to create the new pool on, giving me the 6x6TB RAIDZ2 I was intending. That's the number of controller ports in this box, too.)
 

etegration

Cadet
Joined
Nov 22, 2019
Messages
7
Detaching them does not automatically wipe them; you have to select 'Mark as new' to wipe the disks.

Anyway, as they are blank, try running the following:
sysctl kern.geom.debugflags=0x10
This will let you run the next set of commands, as it removes the safeties FreeNAS uses to protect your disks, so be very careful after this. It is not persistent and will be reset once you reboot the system.


dd if=/dev/zero of=/dev/ada2 bs=1m count=1


This should zero out the beginning of the disk.


dd if=/dev/zero of=/dev/ada2 bs=1m oseek=`diskinfo ada2 | awk '{print int($3 / (1024*1024)) - 4;}'`


This should zero out the end of the disk. Replace 'ada2' with any of your other drives, then you can try gpart again.

I recommend rebooting after zeroing out the beginning and end of the disks just to swap that debugflag back off.

Just dropping a note: this is still very relevant and it worked for me. This should be built into the GUI.
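In case it saves someone the copy-and-paste, the same recipe can be looped over several disks, roughly like this (device names are placeholders; check yours with gpart show first):
Code:
sysctl kern.geom.debugflags=0x10
for d in ada2 ada3 ada4; do        # placeholder device names
    dd if=/dev/zero of=/dev/$d bs=1m count=1
    dd if=/dev/zero of=/dev/$d bs=1m oseek=`diskinfo $d | awk '{print int($3 / (1024*1024)) - 4;}'`
done
sysctl kern.geom.debugflags=0      # put the safety back (a reboot does the same)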
 

eliboy

Cadet
Joined
Jan 25, 2016
Messages
8
Detaching them does not automatically wipe them; you have to select 'Mark as new' to wipe the disks.

Anyway, as they are blank, try running the following:
sysctl kern.geom.debugflags=0x10
This will let you run the next set of commands, as it removes the safeties FreeNAS uses to protect your disks, so be very careful after this. It is not persistent and will be reset once you reboot the system.


dd if=/dev/zero of=/dev/ada2 bs=1m count=1


This should zero out the beginning of the disk.


dd if=/dev/zero of=/dev/ada2 bs=1m oseek=`diskinfo ada2 | awk '{print int($3 / (1024*1024)) - 4;}'`


This should zero out the end of the disk. Replace 'ada2' with any of your other drives, then you can try gpart again.

I recommend rebooting after zeroing out the beginning and end of the disks just to swap that debugflag back off.

This did not work for me; any ideas? I don't have an adapter to connect this HDD to my laptop.
 

eliboy

Cadet
Joined
Jan 25, 2016
Messages
8
I have just solved this issue on my system.

The HDD giving the error was part of a pool on the same FreeNAS box. I had destroyed the zpool from the UI and never got an error.
I don't know why, but when running
Code:
zpool status
the old zpool was still there.
Code:
zpool destroy tank
solved the issue and now I am able to create a new zpool.
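So if anyone else hits this, it is worth checking for a leftover pool before trying to create the new one; roughly:
Code:
zpool status          # pools currently imported on the box
zpool import          # pools still visible on the disks but not imported
zpool destroy tank    # only if you are certain the old pool (here called tank) is expendable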
 

tsf-freenas

Cadet
Joined
Dec 16, 2019
Messages
4
Detaching them does not automatically wipe them; you have to select 'Mark as new' to wipe the disks.

Anyway, as they are blank, try running the following:
sysctl kern.geom.debugflags=0x10
This will let you run the next set of commands, as it removes the safeties FreeNAS uses to protect your disks, so be very careful after this. It is not persistent and will be reset once you reboot the system.


dd if=/dev/zero of=/dev/ada2 bs=1m count=1


This should zero out the beginning of the disk.


dd if=/dev/zero of=/dev/ada2 bs=1m oseek=`diskinfo ada2 | awk '{print int($3 / (1024*1024)) - 4;}'`


This should zero out the end of the disk. Replace 'ada2' with any of your other drives, then you can try gpart again.

I recommend rebooting after zeroing out the beginning and end of the disks just to swap that debugflag back off.

This was literally a disk saver. I had a drive from a BSD-based NAS system and couldn't include it in a pool in FreeNAS, nor wipe it. I kept getting the same wipe error via the GUI as noted by @stupes.

Now I have use of that disk and was able to create a new ZFS pool.

Thanks for this.
 

Wildfire

Cadet
Joined
Jan 17, 2020
Messages
1
Playing around with a test system using some old discs to get as familiar as I can with FreeNAS before using it for real, I came across this problem of not being able to create a pool, just like the first post in this thread.

The discs I was using were 4 discs from an old Windows RAID 5 array; they had been cleaned in Windows beforehand using the diskpart clean command. I understand from this thread that this does not clean them completely in FreeNAS terms.

So I tried running the GUI wipe on the 4 discs; 3 worked fine but 1 returned this:
Command '('dd', 'if=/dev/zero', 'of=/dev/da1', 'bs=1m', 'count=32')' returned non-zero exit status 1

So I tried running
sysctl kern.geom.debugflags=0x10
in the shell and then the GUI wipe command again; this now worked. I didn't need to use the
dd if=/dev/zero of=/dev/da1 bs=1m count=1
dd if=/dev/zero of=/dev/da1 bs=1m oseek=`diskinfo da1 | awk '{print int($3 / (1024*1024)) - 4;}'`
from the shell, so maybe this is a safer way to avoid hitting the wrong disk.

I then rebooted the system and now was able to create a pool on the discs without any problem.
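For anyone else trying this, it should also be possible to confirm the table is really gone before creating the pool, and to put the safety back without waiting for a reboot (da1 was my problem disc):
Code:
gpart show da1                   # should now fail with "No such geom" once the old table is gone
sysctl kern.geom.debugflags=0    # restore the default without rebooting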

My question is: why was one of the 4 discs different from the rest, since they had all only ever been used in the same array from new?
 

robinmorgan

Dabbler
Joined
Jan 8, 2020
Messages
36
Hello all,

I too am having the same issue. I have 48 disks all previously used and wiped. I have followed all the suggestions above but with no luck.

Could anyone help?
 

Niel Archer

Dabbler
Joined
Jun 7, 2014
Messages
28
I'm also having this issue trying to replace a bad drive. I get the following error trying to wipe the 'new' drive:

[EFAULT] Command gpart create -s gpt /dev/da19 failed (code 1): gpart: Device not configured

Trying the dd command from a shell gives the same error 'dd: /dev/da19: Device not configured'
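For anyone with the same message: 'Device not configured' suggests the kernel has lost the device node for da19 entirely (cabling, backplane or controller rather than anything on the disk), so no amount of wiping will help until it reattaches. A few things worth checking, roughly:
Code:
camcontrol devlist       # is da19 still listed at all?
camcontrol rescan all    # ask CAM to rescan the buses
dmesg | tail             # look for detach or timeout messages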
 
Joined
Apr 26, 2015
Messages
320
I'm pretty confused by all this information, with the last post being from 2019. It's now 2020, I'm using 12.x, and I'm getting the same problem and really have no idea where to start.


I posted my own question because I didn't find anything, then suddenly started finding some reports like this one.
I also see no such option as 'wipe disk', etc.
 

robinmorgan

Dabbler
Joined
Jan 8, 2020
Messages
36
I resolved this by formatting the drives first.

Code:
sg_format --format --size=512 -6 -v -e -v /dev/da0


Don't forget to change the drive name "da0" to the drive you wish to format. The formatting can take a while - use the following command to check its status.

Code:
sg_turs /dev/da0
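The format can run for many hours on large drives; a crude way to wait for it to finish (assuming sg_turs keeps exiting non-zero until the drive reports ready again):
Code:
while ! sg_turs /dev/da0; do    # test unit ready; fails while the format is still running
    sleep 60
done
echo format finished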
 
Joined
Apr 26, 2015
Messages
320
The array was formatted from the external chassis, but I went ahead anyhow since I've nothing to lose at the moment.
I think maybe there is no partition. I don't see a way of doing that from the GUI, and from the command line I get this error.

Wipe Disk da0
Error: [EFAULT] Command gpart create -s gpt /dev/da0 failed (code 1): gpart: Invalid argument

The following seems to show there is a partition table, though.

# gpart show
=> 40 285155248 mfid0 GPT (136G)
40 532480 1 efi (260M)
532520 33554432 3 freebsd-swap (16G)
34086952 251035648 2 freebsd-zfs (120G)
285122600 32688 - free - (16M)

=> 34 2321170365 da1 GPT (1.1T)
34 2014 - free - (1.0M)
2048 2321166337 1 vmware-vmfs (1.1T)
2321168385 2014 - free - (1.0M)

=> 40 22051119024 da0 GPT (10T)
40 22051119024 - free - (10T)

# diskinfo -v da0
da0
512 # sectorsize
11290172981248 # mediasize in bytes (10T)
22051119104 # mediasize in sectors
0 # stripesize
0 # stripeoffset
1372618 # Cylinders according to firmware.
255 # Heads according to firmware.
63 # Sectors according to firmware.
IBM 1746 FAStT # Disk descr.
SV12310695 # Disk ident.
No # TRIM/UNMAP support
10000 # Rotation rate in RPM
Not_Zoned # Zone Mode
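One thing I haven't ruled out: gpart show already reports an (empty) GPT on da0, so the wipe's gpart create has nothing to do; maybe destroying that stale table and recreating it would get past the 'Invalid argument'. Roughly:
Code:
gpart destroy -F da0       # throw away the stale GPT
gpart create -s gpt da0    # lay down a fresh one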

Trying to format anyhow:

# sg_format --format --size=512 -6 -v -e -v /dev/da0
inquiry cdb: [12 00 00 00 24 00]
IBM 1746 FAStT 1070 peripheral_type: disk [0x0]
PROTECT=0
inquiry cdb: [12 01 00 00 24 00]
inquiry cdb: [12 01 80 01 00 00]
Unit serial number: SV12310695
inquiry cdb: [12 01 83 01 00 00]
LU name: 60080e500023f13200001b465f23b93b
mode sense(6) cdb: [1a 00 01 00 fc 00]
mode sense(6): pass-through requested 252 bytes (data-in), got got 24 bytes
Mode Sense (block descriptor) data, prior to changes:
Number of blocks=0 [0x0]
Block size=512 [0x200]

Format unit cdb: [04 18 00 00 00 00]
Format unit parameter list:
00 02 00 00
Format unit timeout: 20 seconds
Format unit:
Fixed format, current; Sense key: Illegal Request
vendor specific ASC=94, ASCQ=01 (hex)
Raw sense data (in hex), sb_len=64, embedded_len=166
70 00 05 00 00 00 00 9e 00 00 00 00 94 01 00 00
00 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00
00 00 88 0b 00 99 99 99 99 04 18 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 53 56 31 32
d0 ea ff ff ff 7f 00 00 bd 3b 21 00 08 00 00 00
46 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 0
Format unit command: Illegal request, type: sense key, apart from Invalid opcode
FORMAT UNIT failed

I know there is nothing wrong with the hardware because I was using it on something else.
Plus, the 1TB partition you see is on the same storage chassis and is in use by another system.
 
Joined
Apr 26, 2015
Messages
320
Does anyone have any idea what is going on here? Is FN not able to access a 10TB partition, maybe? It can't be that, since my old 9.3 version accesses at least 12TB.
 

zookeeper21

Dabbler
Joined
Jan 25, 2021
Messages
33
@Chris Moore Hey, I know you said this should have been marked as a bug, but it seems like it's still an issue in TrueNAS-12.0-U5.1. I am still getting the error. The drive I am trying to use was part of a Proxmox VM's ZFS mirror. Now I am moving to TrueNAS and having the same issue. It doesn't let me create a pool or wipe the disk. The disk doesn't show up if I try to import, either.

Any solution?

Is this safe to run? - sysctl kern.geom.debugflags=0x10
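From what I can tell, that sysctl only relaxes GEOM's write protection until you change it back or reboot, so the real danger is in whatever dd command follows, not the sysctl itself. Something like this to check it and put it back afterwards?
Code:
sysctl kern.geom.debugflags        # show the current value (0 by default)
sysctl kern.geom.debugflags=0x10   # allow writes to providers GEOM would otherwise protect
sysctl kern.geom.debugflags=0      # restore the default when done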
 
Joined
Apr 26, 2015
Messages
320
What I found was that if I created multiple logical drives on the storage and then had TN see them in the pools, it would always see only the first one. TN is either not able to see the same LUN from multiple logical drives in a storage chassis, or it is somehow limited to one.
TN can see the various logical drives, but it can only use one of them; after that, it fails.

My solution was to create one large logical drive on the storage device, have it come up as one pool in TN, and from there slice it up into datasets as needed, then share them.

So far, so good.
 

zookeeper21

Dabbler
Joined
Jan 25, 2021
Messages
33
@NasProjects What do you mean? I am a noob, so I don't get it.
 