Error on building a pool

piratemike

Cadet
Joined
Jul 5, 2019
Messages
2
I started to build a pool and after a while it errors out.
Hardware is a repurposed Isilon node with 36 x 3TB drives and two 8GB SSDs; one is running the OS.
The pool being built is all 36 drives in RAIDZ3.

What am I doing wrong? It worked once, and I removed that pool so I could build a bigger pool with all of the drives.

The eventual purpose is storage for VMware hosts.

Thanks in advance,
Mike.



Error: Traceback (most recent call last):

File "/usr/local/lib/python3.6/site-packages/tastypie/resources.py", line 219, in wrapper
response = callback(request, *args, **kwargs)

File "./freenasUI/api/resources.py", line 1448, in dispatch_list
request, **kwargs

File "/usr/local/lib/python3.6/site-packages/tastypie/resources.py", line 450, in dispatch_list
return self.dispatch('list', request, **kwargs)

File "./freenasUI/api/utils.py", line 251, in dispatch
request_type, request, *args, **kwargs

File "/usr/local/lib/python3.6/site-packages/tastypie/resources.py", line 482, in dispatch
response = method(request, **kwargs)

File "/usr/local/lib/python3.6/site-packages/tastypie/resources.py", line 1384, in post_list
updated_bundle = self.obj_create(bundle, **self.remove_api_resource_names(kwargs))

File "/usr/local/lib/python3.6/site-packages/tastypie/resources.py", line 2175, in obj_create
return self.save(bundle)

File "./freenasUI/api/utils.py", line 445, in save
form.save()

File "./freenasUI/storage/forms.py", line 316, in save
raise e

File "./freenasUI/storage/forms.py", line 310, in save
c.call("alert.unblock_source", lock)

File "./freenasUI/storage/forms.py", line 303, in save
notifier().create_volume(volume, groups=grouped, init_rand=init_rand)

File "./freenasUI/middleware/notifier.py", line 763, in create_volume
vdevs = self.__prepare_zfs_vdev(vgrp['disks'], vdev_swapsize, encrypt, volume)

File "./freenasUI/middleware/notifier.py", line 698, in __prepare_zfs_vdev
sync=False)

File "./freenasUI/middleware/notifier.py", line 341, in __gpt_labeldisk
c.call('disk.wipe', devname, 'QUICK', False, job=True)

File "./freenasUI/middleware/notifier.py", line 341, in __gpt_labeldisk
c.call('disk.wipe', devname, 'QUICK', False, job=True)

File "/usr/local/lib/python3.6/site-packages/middlewared/client/client.py", line 477, in call
raise ClientException(job['error'], trace=job['exception'])

middlewared.client.client.ClientException: Command '('dd', 'if=/dev/zero', 'of=/dev/da1p2', 'bs=1m', 'count=32')' returned non-zero exit status 1.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
The very last line of your post (dd if=/dev/zero of=/dev/da1p2 bs=1m count=32) hints that there is a problem with this device: /dev/da1. The dd command is failing for some reason, perhaps because the drive is bad.

Did you burn-in your drives before building the system?
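
You can poke at the suspect drive from a shell before rebuilding anything. A minimal sketch, assuming the failing device really is da1 and that you run this as root; adjust the device name to whatever the error reports:

# Show SMART health, error log, and reallocated/pending sector counts
smartctl -a /dev/da1

# Try reading the first 256MB of the raw device; I/O errors here (or in /var/log/messages)
# point at a failing drive, cable, or expander slot
dd if=/dev/da1 of=/dev/null bs=1m count=256

If smartctl shows reallocated or pending sectors, or the read test throws errors, replace the drive before you build the pool.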

Are you setting up a single 36-disk RAIDZ3 pool? That's considered far too 'wide', i.e., your pool will have the capacity of 33 drives -- 36 drives less 3 parity drives -- but will only have the IOPS of a single drive. You will get more IOPS if you create a pool with more vdevs, e.g., 4 x 9-disk RAIDZ2 vdevs.

Also, note that RAIDZ2/RAIDZ3 is a poor choice for VM storage; you'd be much better off using mirrors. A pool made up of 18 mirrored vdevs will have 18 times the IOPS of a single-vdev pool, and more than 4 times the IOPS of a pool made up of 4 RAIDZ2 vdevs.
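
Just to make the layouts concrete, here is roughly what they look like at the command line. This is only a sketch: the pool name "tank" and the device names da1 through da36 are placeholders, and on FreeNAS you would normally build the pool from the GUI so the middleware knows about it.

# 4 x 9-disk RAIDZ2 vdevs
zpool create tank \
  raidz2 da1 da2 da3 da4 da5 da6 da7 da8 da9 \
  raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 \
  raidz2 da19 da20 da21 da22 da23 da24 da25 da26 da27 \
  raidz2 da28 da29 da30 da31 da32 da33 da34 da35 da36

# 18 x 2-way mirrors (best IOPS for VM storage, half the raw capacity)
zpool create tank \
  mirror da1 da2   mirror da3 da4   mirror da5 da6   mirror da7 da8 \
  mirror da9 da10  mirror da11 da12 mirror da13 da14 mirror da15 da16 \
  mirror da17 da18 mirror da19 da20 mirror da21 da22 mirror da23 da24 \
  mirror da25 da26 mirror da27 da28 mirror da29 da30 mirror da31 da32 \
  mirror da33 da34 mirror da35 da36

Same 36 disks either way; the difference is how many vdevs ZFS can stripe writes and reads across.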
 

piratemike

Cadet
Joined
Jul 5, 2019
Messages
2
Hi Spearfoot, thanks for the reply!

I found one drive (da4) that was throwing bad blocks. I replaced it, created smaller pools, then erased them, and it seemed to work.
As far as burn-in goes, is there a feature to "exercise" the drives? Beyond that, the unit had been powered on for over 24 hours.

The idea was one large pool (I'm not set on Z2 or Z3), then break it up into vdevs of about 12-15TB each and present those to VMware, if I can figure out how. The mirrored method, while great for IOPS, would kill the needed amount of storage. Currently I am consolidating 4 physical machines/storage totaling almost 45TB into 4 VMs. IOPS isn't a huge deal, since it's mostly just storage and home machine backups.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
There is quite a bit of information here on the forum about burn-in and system testing, for example:

https://www.ixsystems.com/community/resources/building-burn-in-and-testing-your-freenas-system.38/

I have written a disk burn-in script that lets you exercise your disks before using them. I highly recommend that you use it on your drives:

https://www.ixsystems.com/community...for-freenas-scripts-including-disk-burnin.28/
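
If you want a rough first pass while you read up, you can at least kick off long SMART self-tests by hand from a shell. This is only a sketch (the device names are examples, and it is no substitute for a proper burn-in):

# Start an extended (long) SMART self-test on each data drive
for d in /dev/da1 /dev/da2 /dev/da3; do    # ...and so on through da36
    smartctl -t long "$d"
done

# Hours later, check the self-test log and the error counters
smartctl -a /dev/da1 | grep -A 10 "Self-test log"

The tests run inside the drives themselves, so they can all run in parallel without loading the host.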

User @Chris Moore has compiled a handy list of references here:

https://www.ixsystems.com/community/resources/links-to-useful-threads.108/

Among the references is a slideshow explaining vdevs, pools, ZIL, and L2ARC. I recommend you study this before you commit to a pool design, because your statement about breaking up a pool into vdevs makes me think you're not familiar with these concepts. A pool is made up of one or more vdevs, so you can't 'break up' a pool into vdevs. You can create multiple zvols from a single pool; perhaps this is what you meant?
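
If zvols are indeed what you're after, creating them is the easy part once the pool exists. A sketch only, with a made-up pool name (tank) and zvol name/size; the GUI does the same thing under Storage:

# Create a sparse 12TB block device (zvol) inside the pool
zfs create -s -V 12T tank/vmware-ds1

# List the zvols in the pool
zfs list -t volume

You would then share each zvol out to ESXi as an iSCSI extent (or use an NFS dataset instead), which is its own topic.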

There is quite a bit of discussion on the forum about the subject of pool design, including pool design for VM storage. Using all 36 disks in a single vdev is just a bad idea, but if you were to do so you would end up with ~99TB of space (33 x 3TB). Using 4 RAIDZ2 vdevs, each made up of 9 disks, would give you ~84TB (28 x 3TB) with 4 times the IOPS. In both cases you will actually have less available space because of overhead and because a 3TB disk is only 2.7TiB. Something to consider...
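
Back-of-the-envelope numbers, counting a "3TB" drive as 3 x 10^12 bytes and ignoring ZFS overhead; this is just arithmetic, not a promise of usable space:

# Data disks x 3TB, converted to TiB (closer to what the GUI will report)
echo "36-wide RAIDZ3: $(echo 'scale=1; 33 * 3 * 10^12 / 2^40' | bc) TiB"
echo "4 x 9 RAIDZ2:   $(echo 'scale=1; 28 * 3 * 10^12 / 2^40' | bc) TiB"
echo "18 x mirrors:   $(echo 'scale=1; 18 * 3 * 10^12 / 2^40' | bc) TiB"
# -> roughly 90, 76, and 49 TiB respectively

Knock off a bit more for swap partitions, ZFS metadata, and the usual advice to keep the pool under ~80% full.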

Good luck!
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994