Error Creating Pool - problem with gpart?

stupes

Dabbler
Joined
Oct 4, 2018
Messages
25
Hi
I am getting the following error when I try to create a three-disk RAIDZ pool with FreeNAS 11.2. Please help, I am at a loss for ideas.
Code:
Error Creating Pool

Command '('gpart', 'create', '-s', 'gpt', '/dev/ada2')' returned non-zero exit status 1

Error: Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/tastypie/resources.py", line 219, in wrapper
	response = callback(request, *args, **kwargs)
  File "./freenasUI/api/resources.py", line 1410, in dispatch_list
	request, **kwargs
  File "/usr/local/lib/python3.6/site-packages/tastypie/resources.py", line 450, in dispatch_list
	return self.dispatch('list', request, **kwargs)
  File "./freenasUI/api/utils.py", line 251, in dispatch
	request_type, request, *args, **kwargs
  File "/usr/local/lib/python3.6/site-packages/tastypie/resources.py", line 482, in dispatch
	response = method(request, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/tastypie/resources.py", line 1384, in post_list
	updated_bundle = self.obj_create(bundle, **self.remove_api_resource_names(kwargs))
  File "/usr/local/lib/python3.6/site-packages/tastypie/resources.py", line 2175, in obj_create
	return self.save(bundle)
  File "./freenasUI/api/utils.py", line 415, in save
	form.save()
  File "./freenasUI/storage/forms.py", line 311, in save
	raise e
  File "./freenasUI/storage/forms.py", line 300, in save
	notifier().create_volume(volume, groups=grouped, init_rand=init_rand)
  File "./freenasUI/middleware/notifier.py", line 738, in create_volume
	vdevs = self.__prepare_zfs_vdev(vgrp['disks'], vdev_swapsize, encrypt, volume)
  File "./freenasUI/middleware/notifier.py", line 673, in __prepare_zfs_vdev
	swapsize=swapsize)
  File "./freenasUI/middleware/notifier.py", line 319, in __gpt_labeldisk
	c.call('disk.wipe', devname, 'QUICK', job=True)
  File "./freenasUI/middleware/notifier.py", line 319, in __gpt_labeldisk
	c.call('disk.wipe', devname, 'QUICK', job=True)
  File "/usr/local/lib/python3.6/site-packages/middlewared/client/client.py", line 460, in call
	raise ClientException(job['error'], trace=job['exception'])
middlewared.client.client.ClientException: Command '('gpart', 'create', '-s', 'gpt', '/dev/ada2')' returned non-zero exit status 1.


I have tried running the command in a shell and got the following for both ada1 and ada2, so I do not think it is disk-specific:

Code:
[root@stu-nas1 ~]# gpart create -s gpt /dev/ada2
gpart: geom 'ada2': Operation not permitted
[root@stu-nas1 ~]# sudo gpart create -s gpt /dev/ada2
Sorry, user root is not allowed to execute '/sbin/gpart create -s gpt /dev/ada2' as root on stu-nas1.local.
[root@stu-nas1 ~]# sudo gpart create -s gpt /dev/ada1
Sorry, user root is not allowed to execute '/sbin/gpart create -s gpt /dev/ada1' as root on stu-nas1.local.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
Have you tried to wipe those disks from the GUI?
 

MorkaiTheWolf

Dabbler
Joined
Aug 8, 2018
Messages
32
By any chance, were these drives used in a previous system or RAID array? I would second @sretalla's point and try wiping the disks via the GUI.
 

stupes

Dabbler
Joined
Oct 4, 2018
Messages
25
By any chance, were these drives used in a previous system or RAID array? I would second @sretalla's point and try wiping the disks via the GUI.
Yes, you are both right. They have been used before and detached (I thought this would wipe them). How do I go about wiping the disks, please?
 

MorkaiTheWolf

Dabbler
Joined
Aug 8, 2018
Messages
32
Yes, you are both right. They have been used before and detached (I thought this would wipe them). How do I go about wiping the disks, please?
It depends on if you have any data stored on those drives.
I have some commands I saved from my own build that help wipe the header and footer of a drive, to remove any possibility of old RAID data lingering there.

Before we get into that, though, what kind of hardware are you using? (I'm curious about the motherboard itself, as I found my board uses Marvell controllers that were causing all of my issues.) More details can be found in my hardware specs post and this bug report.

And I want to reiterate: do you have a backup of any data on these drives?
 

stupes

Dabbler
Joined
Oct 4, 2018
Messages
25
The drives have ZERO data on them; they were part of a previous setup which I thought I had destroyed. The system is an HP ML115. The previous setup was striped, hence I am replacing it with RAIDZ.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
If the disks are free of any useful data, then you can just use the Wipe button for each of the disks in the View Disks screen of the GUI under storage.
 

MorkaiTheWolf

Dabbler
Joined
Aug 8, 2018
Messages
32
Yes, you are both right. They have been used before and detached (I thought this would wipe them). How do I go about wiping the disks, please?
Detaching them does not automatically wipe them; you have to select 'Mark as new' to wipe the disks.

Anyway, as they are blank, try running the following:
Code:
sysctl kern.geom.debugflags=0x10

This lets you run the next set of commands by removing the safeties FreeNAS uses to protect your disks, so be very careful after this point. The setting is not persistent and will be reset once you reboot the system.

Code:
dd if=/dev/zero of=/dev/ada2 bs=1m count=1

This should zero out the beginning of the disk.

Code:
dd if=/dev/zero of=/dev/ada2 bs=1m oseek=`diskinfo ada2 | awk '{print int($3 / (1024*1024)) - 4;}'`

This should zero out the end of the disk. Replace 'ada2' with each of your other drives, then you can try gpart again.

I recommend rebooting after zeroing out the beginning and end of the disks, just to switch that debug flag back off.
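For anyone curious about the backticked expression in that last command: it asks `diskinfo` for the disk size in bytes (the third field) and converts it to an offset in 1 MiB blocks, minus a 4 MiB margin. A minimal sketch of the arithmetic, using an illustrative function name (`tail_offset_mb` is not a FreeNAS or FreeBSD command):

```shell
# Compute the oseek value (in 1 MiB blocks) for zeroing the tail of a disk:
# floor(size_in_bytes / 1 MiB) minus a 4 MiB safety margin.
# tail_offset_mb is an illustrative helper name, not an existing command.
tail_offset_mb() {
    awk -v s="$1" 'BEGIN { print int(s / (1024 * 1024)) - 4 }'
}

# Example: a 1 GiB disk (1073741824 bytes) is 1024 MiB, so the wipe
# would start 4 MiB from the end, at block 1020.
tail_offset_mb 1073741824
```

On a live system the byte count comes from the third field of `diskinfo ada2`, which is exactly what the awk one-liner in the command above extracts.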
 

stupes

Dabbler
Joined
Oct 4, 2018
Messages
25
I don't see a Wipe button, I am afraid, just Edit. I will try to wipe the disks with GParted Live and see if the Wipe button appears. Does this sound like a plan?
 

stupes

Dabbler
Joined
Oct 4, 2018
Messages
25
Detaching them does not automatically wipe them, you have to select 'Mark as new' to wipe the disks. [...]

This is awesome info, thanks, I will try it. Sorry, the previous response was to sretalla.
 

stupes

Dabbler
Joined
Oct 4, 2018
Messages
25
Detaching them does not automatically wipe them, you have to select 'Mark as new' to wipe the disks. [...]

These commands worked like an absolute dream. Thanks!
 

MorkaiTheWolf

Dabbler
Joined
Aug 8, 2018
Messages
32
Awesome! Glad to hear it worked out for ya! Make sure to keep those commands handy, never know when you might need them again. ;)
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Too late now, but this should have been submitted as a bug report so it could be fixed.

Sent from my SAMSUNG-SGH-I537 using Tapatalk
 

Chungwoo

Cadet
Joined
Oct 25, 2018
Messages
2
Detaching them does not automatically wipe them, you have to select 'Mark as new' to wipe the disks. [...]

MorkaiTheWolf... Thank you for helping out with my issue.

Would you explain or send a link talking about the command lines you used to fix this issue? I'm still a noob in linux and trying to learn more.

Thanks again!
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I'm still a noob in linux
FreeNAS is not Linux. FreeNAS is an appliance based on FreeBSD, which is a version of Unix.
Would you explain or send a link talking about the command lines you used to fix this issue?
What kind of problem are you having?
 

Chungwoo

Cadet
Joined
Oct 25, 2018
Messages
2
FreeNAS is not Linux. FreeNAS is an appliance based on FreeBSD, which is a version of Unix.

What kind of problem are you having?

I was having an issue where my hard drives would not format when making a pool. I was getting the following error:

Code:
Error: Traceback (most recent call last):

  File "/usr/local/lib/python3.6/site-packages/tastypie/resources.py", line 219, in wrapper
    response = callback(request, *args, **kwargs)

  File "./freenasUI/api/resources.py", line 1414, in dispatch_list
    request, **kwargs

  File "/usr/local/lib/python3.6/site-packages/tastypie/resources.py", line 450, in dispatch_list
    return self.dispatch('list', request, **kwargs)

  File "./freenasUI/api/utils.py", line 251, in dispatch
    request_type, request, *args, **kwargs

  File "/usr/local/lib/python3.6/site-packages/tastypie/resources.py", line 482, in dispatch
    response = method(request, **kwargs)

  File "/usr/local/lib/python3.6/site-packages/tastypie/resources.py", line 1384, in post_list
    updated_bundle = self.obj_create(bundle, **self.remove_api_resource_names(kwargs))

  File "/usr/local/lib/python3.6/site-packages/tastypie/resources.py", line 2175, in obj_create
    return self.save(bundle)

  File "./freenasUI/api/utils.py", line 415, in save
    form.save()

  File "./freenasUI/storage/forms.py", line 311, in save
    raise e

  File "./freenasUI/storage/forms.py", line 300, in save
    notifier().create_volume(volume, groups=grouped, init_rand=init_rand)

  File "./freenasUI/middleware/notifier.py", line 759, in create_volume
    vdevs = self.__prepare_zfs_vdev(vgrp['disks'], vdev_swapsize, encrypt, volume)

  File "./freenasUI/middleware/notifier.py", line 694, in __prepare_zfs_vdev
    swapsize=swapsize)

  File "./freenasUI/middleware/notifier.py", line 340, in __gpt_labeldisk
    c.call('disk.wipe', devname, 'QUICK', job=True)

  File "./freenasUI/middleware/notifier.py", line 340, in __gpt_labeldisk
    c.call('disk.wipe', devname, 'QUICK', job=True)

  File "/usr/local/lib/python3.6/site-packages/middlewared/client/client.py", line 477, in call
    raise ClientException(job['error'], trace=job['exception'])

middlewared.client.client.ClientException: Command '('gpart', 'create', '-s', 'gpt', '/dev/da0')' returned non-zero exit status 1.


MorkaiTheWolf's fix in this thread helped with getting the pool made, and I was hoping to learn what "dd" did. I was able to find a site that explains the basics of the command; I just wanted to know what the rest of the line did to correct the error.

As you can see, I'm very new to the system and don't know a whole lot about the differences between Linux and UNIX-like systems. Just looking to learn as I go!
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
@MorkaiTheWolf's fix in this thread helped with getting the pool made, and I was hoping to learn what "dd" did. I was able to find a site that explains the basics of the command; I just wanted to know what the rest of the line did to correct the error.
The dd command was used to erase the previous content of the disks. It is something that should have worked from the GUI; please submit a bug report.
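For readers who want to see what those dd invocations actually do without risking a real disk, here is a sketch that reproduces the "zero the first MiB" step against a scratch file. The file path is purely illustrative, and `bs=1048576` is used as a portable spelling of FreeBSD dd's `bs=1m`:

```shell
# Simulate the wipe on a scratch file instead of a raw /dev/adaX device.
# Build a 4 MiB file of 0xFF bytes, standing in for a disk with stale metadata.
f=/tmp/fake_disk.img
dd if=/dev/zero bs=1048576 count=4 2>/dev/null | tr '\0' '\377' > "$f"

# Zero the first 1 MiB, as the first command in this thread does to the disk
# (1048576 bytes = 1m in FreeBSD dd notation). conv=notrunc keeps the rest.
dd if=/dev/zero of="$f" bs=1048576 count=1 conv=notrunc 2>/dev/null

# The first MiB now contains only zero bytes, so any partition table or
# RAID header that lived there is gone; the remaining 3 MiB are untouched.
head -c 1048576 "$f" | tr -d '\0' | wc -c
```

The same idea, with `oseek` pointing near the end of the device, is what clears any backup GPT or RAID footer stored in the last few MiB.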
 
Joined
Jul 13, 2013
Messages
286
If the disks are free of any useful data, then you can just use the Wipe button for each of the disks in the View Disks screen of the GUI under storage.

At least in my case, Wipe apparently does the same thing, since it returns essentially the same error:

Code:
Wipe

Command '('dd', 'if=/dev/zero', 'of=/dev/ada4p2', 'bs=1m', 'count=32')' returned non-zero exit status 1.
 
Joined
Jul 13, 2013
Messages
286
From the command line I get slightly more data back:


Code:
[ddb@fsfs /]$ sudo dd if=/dev/zero of=/dev/ada4p2 bs=1m count=32
Password:
dd: /dev/ada4p2: Operation not permitted

The partition structure is not quite identical to the other disks, but they do all have a freebsd-zfs partition of the same size in the same place?

Code:
[ddb@fsfs /]$ gpart show
=>          34  11721045101  ada4  GPT  (5.5T)
            34           94        - free -  (47K)
           128      4194304     1  freebsd-swap  (2.0G)
       4194432  11716850696     2  freebsd-zfs  (5.5T)
   11721045128            7        - free -  (3.5K)

=>          40  11721045088  ada5  GPT  (5.5T)
            40           88        - free -  (44K)
           128      4194304     1  freebsd-swap  (2.0G)
       4194432  11716850696     2  freebsd-zfs  (5.5T)

=>          40  11721045088  ada0  GPT  (5.5T)
            40           88        - free -  (44K)
           128      4194304     1  freebsd-swap  (2.0G)
       4194432  11716850696     2  freebsd-zfs  (5.5T)

=>          40  11721045088  ada1  GPT  (5.5T)
            40           88        - free -  (44K)
           128      4194304     1  freebsd-swap  (2.0G)
       4194432  11716850696     2  freebsd-zfs  (5.5T)

=>          40  11721045088  ada2  GPT  (5.5T)
            40           88        - free -  (44K)
           128      4194304     1  freebsd-swap  (2.0G)
       4194432  11716850696     2  freebsd-zfs  (5.5T)

=>          40  11721045088  ada3  GPT  (5.5T)
            40           88        - free -  (44K)
           128      4194304     1  freebsd-swap  (2.0G)
       4194432  11716850696     2  freebsd-zfs  (5.5T)
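As a side note, that consistency question can be checked mechanically. A small sketch that counts the distinct freebsd-zfs partition sizes in saved `gpart show` output (the sample lines and the /tmp path below are illustrative):

```shell
# Save a few freebsd-zfs lines as they appear in gpart show output;
# on a live system you would redirect `gpart show` itself into this file.
cat <<'EOF' > /tmp/gpart_show.txt
  4194432  11716850696  2  freebsd-zfs  (5.5T)
  4194432  11716850696  2  freebsd-zfs  (5.5T)
  4194432  11716850696  2  freebsd-zfs  (5.5T)
EOF

# Field 2 is the partition size in sectors; count how many distinct sizes
# appear. If all disks have identical freebsd-zfs partitions, this prints 1.
awk '/freebsd-zfs/ { if (!($2 in seen)) { seen[$2] = 1; n++ } } END { print n + 0 }' /tmp/gpart_show.txt
```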
 
Joined
Jul 13, 2013
Messages
286
And setting kern.geom.debugflags to 16 (0x10) did not change the behavior of either Wipe or creating a pool.
 