Hi,
First the specifications:
FreeNAS-9.2.0-RELEASE-x64 (ab098f4)
6x 3TB WD Green connected to the on-board SAS2 controller of an X10SL7-F (da0-da5)
2x 2TB WD Red (mirrored) (ada0, ada1)
1x 8GB USB stick for the system (da7)
1x 8GB USB stick for scripts (da6)
Now the issue:
28.12.2013, 21:15
- created a RAIDZ2 with encryption enabled AND "Initialize Safely" checked
- six dd processes spawned and started writing random data over the disks
Code:
[root@HolyNAS] /# ps -efuxww | grep dd
root 9290 66.9 0.0 9908 2548 ?? R 9:16PM 153:01.84 DJANGO_SETTINGS_MODULE=freenasUI.settings PATH=/sbin:/bin:/usr/sbin:/usr/bin PWD=/ HOME=/ RC_PID=23 dd if=/dev/random of=/dev/gptid/f64188b1-6ffc-11e3-8909-00259086c71a.eli bs=1m
root 9288 64.2 0.0 9908 2548 ?? R 9:16PM 152:53.69 DJANGO_SETTINGS_MODULE=freenasUI.settings PATH=/sbin:/bin:/usr/sbin:/usr/bin PWD=/ HOME=/ RC_PID=23 dd if=/dev/random of=/dev/gptid/f54c6d12-6ffc-11e3-8909-00259086c71a.eli bs=1m
root 9286 63.8 0.0 9908 2548 ?? R 9:16PM 153:03.54 DJANGO_SETTINGS_MODULE=freenasUI.settings PATH=/sbin:/bin:/usr/sbin:/usr/bin PWD=/ HOME=/ RC_PID=23 dd if=/dev/random of=/dev/gptid/f453b025-6ffc-11e3-8909-00259086c71a.eli bs=1m
root 9284 63.0 0.0 9908 2548 ?? R 9:16PM 152:55.02 DJANGO_SETTINGS_MODULE=freenasUI.settings PATH=/sbin:/bin:/usr/sbin:/usr/bin PWD=/ HOME=/ RC_PID=23 dd if=/dev/random of=/dev/gptid/f35c2829-6ffc-11e3-8909-00259086c71a.eli bs=1m
root 9292 62.8 0.0 9908 2548 ?? R 9:16PM 153:09.79 DJANGO_SETTINGS_MODULE=freenasUI.settings PATH=/sbin:/bin:/usr/sbin:/usr/bin PWD=/ HOME=/ RC_PID=23 dd if=/dev/random of=/dev/gptid/f73e6dff-6ffc-11e3-8909-00259086c71a.eli bs=1m
root 9282 61.8 0.0 9908 2548 ?? R 9:16PM 152:56.83 DJANGO_SETTINGS_MODULE=freenasUI.settings PATH=/sbin:/bin:/usr/sbin:/usr/bin PWD=/ HOME=/ RC_PID=23 dd if=/dev/random of=/dev/gptid/f2699e30-6ffc-11e3-8909-00259086c71a.eli bs=1m
- Each disk was writing at roughly 10MB/s. So after some quick math (3*1024*1024 / 10 / 3600) the whole process should take approx. 87 hours, putting the expected END timestamp around 1.1. 13:00
- I've been checking the system from time to time - last check 31.12. at 13:30
Code:
[root@HolyNAS] ~# ps -efuxww | grep dd
root 9290 58.7 0.0 9908 2548 ?? R Sat09PM 2331:39.52 DJANGO_SETTINGS_MODULE=freenasUI.settings PATH=/sbin:/bin:/usr/sbin:/usr/bin PWD=/ HOME=/ RC_PID=23 dd if=/dev/random of=/dev/gptid/f64188b1-6ffc-11e3-8909-00259086c71a.eli bs=1m
root 9286 58.4 0.0 9908 2548 ?? R Sat09PM 2332:09.98 DJANGO_SETTINGS_MODULE=freenasUI.settings PATH=/sbin:/bin:/usr/sbin:/usr/bin PWD=/ HOME=/ RC_PID=23 dd if=/dev/random of=/dev/gptid/f453b025-6ffc-11e3-8909-00259086c71a.eli bs=1m
root 9288 55.6 0.0 9908 2548 ?? R Sat09PM 2329:48.50 DJANGO_SETTINGS_MODULE=freenasUI.settings PATH=/sbin:/bin:/usr/sbin:/usr/bin PWD=/ HOME=/ RC_PID=23 dd if=/dev/random of=/dev/gptid/f54c6d12-6ffc-11e3-8909-00259086c71a.eli bs=1m
root 9284 55.3 0.0 9908 2548 ?? R Sat09PM 2330:37.57 DJANGO_SETTINGS_MODULE=freenasUI.settings PATH=/sbin:/bin:/usr/sbin:/usr/bin PWD=/ HOME=/ RC_PID=23 dd if=/dev/random of=/dev/gptid/f35c2829-6ffc-11e3-8909-00259086c71a.eli bs=1m
root 9282 54.2 0.0 9908 2548 ?? RL Sat09PM 2329:43.58 DJANGO_SETTINGS_MODULE=freenasUI.settings PATH=/sbin:/bin:/usr/sbin:/usr/bin PWD=/ HOME=/ RC_PID=23 dd if=/dev/random of=/dev/gptid/f2699e30-6ffc-11e3-8909-00259086c71a.eli bs=1m
root 9292 53.8 0.0 9908 2548 ?? R Sat09PM 2333:14.58 DJANGO_SETTINGS_MODULE=freenasUI.settings PATH=/sbin:/bin:/usr/sbin:/usr/bin PWD=/ HOME=/ RC_PID=23 dd if=/dev/random of=/dev/gptid/f73e6dff-6ffc-11e3-8909-00259086c71a.eli bs=1m
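For reference, this is the math behind my 87-hour estimate (just a back-of-the-envelope sketch; the 10MB/s figure is the observed per-disk rate, and since the six dd processes run in parallel, the slowest disk bounds the total wall time):

```shell
# Rough ETA for the "Initialize Safely" pass: each dd overwrites one
# whole ~3 TB disk at the observed ~10 MB/s; the six dd's run in
# parallel, so the per-disk time is also the total time.
disk_mib=$((3 * 1024 * 1024))   # ~3 TB expressed in MiB
rate=10                         # observed per-disk write rate, MiB/s

echo "$((disk_mib / rate / 3600)) hours"   # -> 87 hours
```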
1.1.2014, 16:00
- checked the system again, and no dd processes were running anymore, so I suppose the safe init finished, but:
- the zpool is NOT visible under "View Volumes"
- all six disks are still visible under "ZFS Volume Manager", so they are acting like they are "available"
- but when checking via zpool commands, I can see that the pool was created
Code:
[root@HolyNAS] /# zpool list st0rage
NAME     SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
st0rage  16.2T  1.79M  16.2T  0%   1.00x  ONLINE  /mnt
[root@HolyNAS] /# zpool status st0rage
  pool: st0rage
 state: ONLINE
  scan: none requested
config:

        NAME                                                STATE     READ WRITE CKSUM
        st0rage                                             ONLINE       0     0     0
          raidz2-0                                          ONLINE       0     0     0
            gptid/f2699e30-6ffc-11e3-8909-00259086c71a.eli  ONLINE       0     0     0
            gptid/f35c2829-6ffc-11e3-8909-00259086c71a.eli  ONLINE       0     0     0
            gptid/f453b025-6ffc-11e3-8909-00259086c71a.eli  ONLINE       0     0     0
            gptid/f54c6d12-6ffc-11e3-8909-00259086c71a.eli  ONLINE       0     0     0
            gptid/f64188b1-6ffc-11e3-8909-00259086c71a.eli  ONLINE       0     0     0
            gptid/f73e6dff-6ffc-11e3-8909-00259086c71a.eli  ONLINE       0     0     0

errors: No known data errors
- zpool history shows that the pool was created on 1.1. at 3:05 AM, which is some 10 hours before the expected time
Code:
[root@HolyNAS] /mnt# zpool history st0rage
History for 'st0rage':
2014-01-01.03:05:42 zpool create -o cachefile=/data/zfs/zpool.cache -o failmode=continue -o autoexpand=on -O aclmode=passthrough -O aclinherit=passthrough -f -m /st0rage -o altroot=/mnt st0rage raidz2 /dev/gptid/f2699e30-6ffc-11e3-8909-00259086c71a.eli /dev/gptid/f35c2829-6ffc-11e3-8909-00259086c71a.eli /dev/gptid/f453b025-6ffc-11e3-8909-00259086c71a.eli /dev/gptid/f54c6d12-6ffc-11e3-8909-00259086c71a.eli /dev/gptid/f64188b1-6ffc-11e3-8909-00259086c71a.eli /dev/gptid/f73e6dff-6ffc-11e3-8909-00259086c71a.eli
- BUT the database entry is missing for all of the devices! The following two entries are related to another encrypted pool.
Code:
[root@HolyNAS] /# sqlite3 /data/freenas-v1.db "select * from storage_encrypteddisk;"
2|1|gptid/0f79f96b-3c09-11e3-b566-00259086c71a|10
2|2|gptid/0fff42cb-3c09-11e3-b566-00259086c71a|11
- I have the following errors in the console, and these occurred BEFORE the pool was created
Code:
Dec 31 01:00:08 HolyNAS kernel: pid 22198 (bsdtar), uid 0 inumber 27877 on /var: filesystem full
Dec 31 01:33:26 HolyNAS kernel: pid 9290 (dd), uid 0 inumber 9306 on /var: filesystem full
Dec 31 01:33:38 HolyNAS last message repeated 9 times
Dec 31 01:33:39 HolyNAS kernel: pid 9282 (dd), uid 0 inumber 9302 on /var: filesystem full
Dec 31 01:33:40 HolyNAS kernel: pid 9282 (dd), uid 0 inumber 9302 on /var: filesystem full
Dec 31 01:33:41 HolyNAS kernel: pid 9286 (dd), uid 0 inumber 9304 on /var: filesystem full
Dec 31 01:33:42 HolyNAS kernel: pid 9290 (dd), uid 0 inumber 9306 on /var: filesystem full
Dec 31 01:33:44 HolyNAS kernel: pid 9282 (dd), uid 0 inumber 9302 on /var: filesystem full
Dec 31 01:33:45 HolyNAS kernel: pid 9282 (dd), uid 0 inumber 9302 on /var: filesystem full
Dec 31 01:33:46 HolyNAS kernel: pid 9292 (dd), uid 0 inumber 9307 on /var: filesystem full
Dec 31 01:33:47 HolyNAS kernel: pid 9292 (dd), uid 0 inumber 9307 on /var: filesystem full
Dec 31 01:33:48 HolyNAS kernel: pid 9284 (dd), uid 0 inumber 9303 on /var: filesystem full
Dec 31 01:33:50 HolyNAS kernel: pid 9290 (dd), uid 0 inumber 9306 on /var: filesystem full
Dec 31 01:33:51 HolyNAS kernel: pid 9286 (dd), uid 0 inumber 9304 on /var: filesystem full
Dec 31 01:33:53 HolyNAS last message repeated 2 times
Dec 31 01:33:54 HolyNAS kernel: pid 9288 (dd), uid 0 inumber 9305 on /var: filesystem full
Dec 31 01:33:56 HolyNAS last message repeated 2 times
Dec 31 01:33:57 HolyNAS kernel: pid 9290 (dd), uid 0 inumber 9306 on /var: filesystem full
Dec 31 01:34:00 HolyNAS last message repeated 3 times
Dec 31 01:34:01 HolyNAS kernel: pid 9286 (dd), uid 0 inumber 9304 on /var: filesystem full
Dec 31 01:34:02 HolyNAS kernel: pid 9288 (dd), uid 0 inumber 9305 on /var: filesystem full
Dec 31 01:34:03 HolyNAS kernel: pid 9286 (dd), uid 0 inumber 9304 on /var: filesystem full
Dec 31 01:34:04 HolyNAS kernel: pid 9286 (dd), uid 0 inumber 9304 on /var: filesystem full
Dec 31 01:34:06 HolyNAS kernel: pid 9284 (dd), uid 0 inumber 9303 on /var: filesystem full
Dec 31 01:34:08 HolyNAS last message repeated 2 times
Dec 31 01:34:09 HolyNAS kernel: pid 9282 (dd), uid 0 inumber 9302 on /var: filesystem full
Dec 31 01:34:10 HolyNAS kernel: pid 9290 (dd), uid 0 inumber 9306 on /var: filesystem full
Dec 31 01:34:15 HolyNAS last message repeated 5 times
Dec 31 01:34:17 HolyNAS kernel: pid 9284 (dd), uid 0 inumber 9303 on /var: filesystem full
Dec 31 01:34:18 HolyNAS kernel: pid 9286 (dd), uid 0 inumber 9304 on /var: filesystem full
Dec 31 01:34:19 HolyNAS kernel: pid 9292 (dd), uid 0 inumber 9307 on /var: filesystem full
Dec 31 01:34:20 HolyNAS kernel: pid 9284 (dd), uid 0 inumber 9303 on /var: filesystem full
Dec 31 01:34:22 HolyNAS kernel: pid 9290 (dd), uid 0 inumber 9306 on /var: filesystem full
Dec 31 01:34:23 HolyNAS kernel: pid 9282 (dd), uid 0 inumber 9302 on /var: filesystem full
Dec 31 01:34:24 HolyNAS kernel: pid 9282 (dd), uid 0 inumber 9302 on /var: filesystem full
Dec 31 01:34:25 HolyNAS kernel: pid 9286 (dd), uid 0 inumber 9304 on /var: filesystem full
Dec 31 01:34:26 HolyNAS kernel: pid 9284 (dd), uid 0 inumber 9303 on /var: filesystem full
Dec 31 01:34:28 HolyNAS kernel: pid 9290 (dd), uid 0 inumber 9306 on /var: filesystem full
Dec 31 01:34:29 HolyNAS kernel: pid 9290 (dd), uid 0 inumber 9306 on /var: filesystem full
Dec 31 01:34:30 HolyNAS kernel: pid 9292 (dd), uid 0 inumber 9307 on /var: filesystem full
- Below are details about what is eating the whole space
Code:
[root@HolyNAS] /# cd /var
[root@HolyNAS] /var# df -h .
Filesystem  Size  Used  Avail  Capacity  Mounted on
/dev/md2    149M  149M   -11M      109%  /var
[root@HolyNAS] /var# du -sm ./*
1       ./account
1       ./agentx
1       ./at
1       ./audit
1       ./authpf
1       ./backups
1       ./cache
1       ./crash
1       ./cron
29      ./db
1       ./empty
1       ./etc
1       ./games
1       ./heimdal
1       ./log
1       ./mail
1       ./md_size
1       ./msgs
1       ./named
1       ./netatalk
1       ./pbi
1       ./preserve
1       ./run
1       ./rwho
1       ./spool
120     ./tmp
1       ./yp
[root@HolyNAS] /var# ll | grep tmp
drwxrwxrwt  9 root  wheel  1024 Jan  1 16:15 tmp/
[root@HolyNAS] /var# cd tmp
[root@HolyNAS] /var/tmp# du -sm ./*
1       ./alert
1       ./firmware
1       ./freenas_config.md5
1       ./ixdiagnose_boot.log
1       ./nginx
1       ./pbi-repo.rpo
1       ./rc.conf.freenas
1       ./sessionidw9yid6zrn9cas8jhff9a79ygq1w3bo59
0       ./tmp.P97hLF
20      ./tmp9n_LDV
20      ./tmpLul75M
20      ./tmpMU2IEc
20      ./tmpln2aD7
20      ./tmpoR6qmA
20      ./tmpvhQIxZ
1       ./vi.recover
[root@HolyNAS] /var/tmp# ll
total 121960
drwxrwxrwt   9 root  wheel      1024 Jan  1 16:15 ./
drwxr-xr-x  29 root  wheel       512 Dec 28 20:18 ../
drwxr-xr-x   2 root  wheel       512 Dec 29 03:49 .PBI.22041/
drwxr-xr-x   2 root  wheel       512 Dec 29 03:50 .PBI.23980/
drwxr-xr-x   2 root  wheel       512 Jan  1 03:50 .PBI.25815/
drwxr-xr-x   2 root  wheel       512 Dec 28 20:18 .PBI.3051/
-rw-r--r--   1 root  wheel       161 Jan  1 16:15 alert
drwx------   2 www   wheel       512 Jan  1 15:32 firmware/
-rw-r--r--   1 root  wheel        33 Dec 28 20:18 freenas_config.md5
-rw-r--r--   1 root  wheel      1628 Dec 28 20:18 ixdiagnose_boot.log
drwxr-xr-x   6 root  wheel       512 Dec 28 20:18 nginx/
-rw-r--r--   1 root  wheel       945 Dec 20 16:57 pbi-repo.rpo
-rw-r--r--   1 root  wheel       741 Dec 28 20:18 rc.conf.freenas
-rw-------   1 root  wheel      1408 Jan  1 15:32 sessionidw9yid6zrn9cas8jhff9a79ygq1w3bo59
-rw-------   1 root  wheel         0 Dec 28 20:18 tmp.P97hLF
-rw-r--r--   1 root  wheel  20770797 Jan  1 03:04 tmp9n_LDV
-rw-r--r--   1 root  wheel  20766716 Jan  1 02:59 tmpLul75M
-rw-r--r--   1 root  wheel  20766720 Jan  1 03:02 tmpMU2IEc
-rw-r--r--   1 root  wheel  20766702 Jan  1 02:56 tmpln2aD7
-rw-r--r--   1 root  wheel  20766696 Jan  1 03:05 tmpoR6qmA
-rw-r--r--   1 root  wheel  20766720 Jan  1 03:00 tmpvhQIxZ
drwxrwxrwt   2 root  wheel       512 Dec 20 23:26 vi.recover/
[root@HolyNAS] /var/tmp#
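The space hogs are the six ~20MB tmp?????? files written right before the pool creation. I could free /var manually with something like the sketch below, but I'm not sure it's safe to do so; the demo runs against a scratch directory made with mktemp, and on the box itself I would point VAR_TMP at /var/tmp instead (after double-checking the listing):

```shell
# Sketch: remove only the tmp?????? leftovers (tmp + 6 random chars),
# which matches the six ~20 MB files above but NOT tmp.P97hLF or
# anything else in /var/tmp. Demoed against a scratch directory.
VAR_TMP=$(mktemp -d)                      # stand-in for /var/tmp
touch "$VAR_TMP/tmp9n_LDV" "$VAR_TMP/tmpoR6qmA" "$VAR_TMP/rc.conf.freenas"

ls "$VAR_TMP"                             # before: spill files present
rm -f "$VAR_TMP"/tmp??????                # glob hits only the 6-char suffixes
ls "$VAR_TMP"                             # after: only rc.conf.freenas left
```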
So, I have several questions:
- What the hell happened?
- How can I verify that the safe initialization finished successfully, i.e. that all disks are full of random data? I suppose they are, but I want to be sure.
- What should I do with the full filesystem? Delete the files manually, or reboot and let the system do some cleanup?
- How do I fix that pool, which is somehow missing in the GUI but physically exists? I have not created a passphrase yet, but I can see the .key file under "/data/geli". Anyway, auto-import from the GUI is NOT possible, since step 3/3 shows a blank dropdown menu for "Volume", so that's a no-go.
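For the second question, one idea I had for spot-checking a disk (a sketch, not a proper verification): random bytes do not compress, so a 1 MiB sample read from a wiped disk should gzip to roughly its original size, while zeros or leftover plaintext shrink dramatically. The device path /dev/da0 and the offsets are just examples, and reading the raw da* device needs root:

```shell
# Crude randomness spot-check: gzip a 1 MiB sample from the device.
# Freshly overwritten (random) data stays ~1 MiB after gzip; a region
# that was skipped (zeros / old data) compresses to almost nothing.
check_random_sample() {
  dev="$1"; off_mib="${2:-0}"
  raw_file=$(mktemp)
  dd if="$dev" of="$raw_file" bs=1048576 skip="$off_mib" count=1 2>/dev/null
  raw=$(wc -c < "$raw_file")
  packed=$(gzip -c "$raw_file" | wc -c)
  rm -f "$raw_file"
  echo "$dev @ ${off_mib}MiB: $raw bytes raw, $packed bytes gzipped"
}

# On the NAS (as root), against one of the wiped disks, e.g.:
# check_random_sample /dev/da0 0
# check_random_sample /dev/da0 1000000    # a sample ~1 TB into the disk
```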
If this is confirmed as a bug, I will open a ticket.
Thanks
HolyK
EDIT: More info about the system
gpart show:
Code:
[root@HolyNAS] /# gpart show
=>        34  3907029101  ada0  GPT  (1.8T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  3902834696     2  freebsd-zfs  (1.8T)
  3907029128           7        - free -  (3.5k)

=>        34  3907029101  ada1  GPT  (1.8T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  3902834696     2  freebsd-zfs  (1.8T)
  3907029128           7        - free -  (3.5k)

=>        34  15124925  da6  GPT  (7.2G)
          34  15124925    1  freebsd-ufs  (7.2G)

=>        63  15669185  da7  MBR  (7.5G)
          63   1930257    1  freebsd  (942M)
     1930320        63       - free -  (31k)
     1930383   1930257    2  freebsd  [active]  (942M)
     3860640      3024    3  freebsd  (1.5M)
     3863664     41328    4  freebsd  (20M)
     3904992  11764256       - free -  (5.6G)

=>   0  1930257  da7s1  BSD  (942M)
     0       16         - free -  (8.0k)
    16  1930241      1  !0  (942M)

=>   0  1930257  da7s2  BSD  (942M)
     0       16         - free -  (8.0k)
    16  1930241      1  !0  (942M)

=>        34  5860533101  da0  GPT  (2.7T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  5856338696     2  freebsd-zfs  (2.7T)
  5860533128           7        - free -  (3.5k)

=>        34  5860533101  da1  GPT  (2.7T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  5856338696     2  freebsd-zfs  (2.7T)
  5860533128           7        - free -  (3.5k)

=>        34  5860533101  da2  GPT  (2.7T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  5856338696     2  freebsd-zfs  (2.7T)
  5860533128           7        - free -  (3.5k)

=>        34  5860533101  da3  GPT  (2.7T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  5856338696     2  freebsd-zfs  (2.7T)
  5860533128           7        - free -  (3.5k)

=>        34  5860533101  da4  GPT  (2.7T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  5856338696     2  freebsd-zfs  (2.7T)
  5860533128           7        - free -  (3.5k)

=>        34  5860533101  da5  GPT  (2.7T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  5856338696     2  freebsd-zfs  (2.7T)
  5860533128           7        - free -  (3.5k)
glabel status:
Code:
[root@HolyNAS] # glabel status
                                      Name  Status  Components
gptid/0f79f96b-3c09-11e3-b566-00259086c71a     N/A  ada0p2
gptid/0fff42cb-3c09-11e3-b566-00259086c71a     N/A  ada1p2
                                  ufs/misc     N/A  da6p1
                             ufs/FreeNASs3     N/A  da7s3
                             ufs/FreeNASs4     N/A  da7s4
                    ufsid/521c684590455604     N/A  da7s1a
                            ufs/FreeNASs1a     N/A  da7s1a
                            ufs/FreeNASs2a     N/A  da7s2a
gptid/0f6cd34c-3c09-11e3-b566-00259086c71a     N/A  ada0p1
gptid/0fe5ee9c-3c09-11e3-b566-00259086c71a     N/A  ada1p1
gptid/f24e70ca-6ffc-11e3-8909-00259086c71a     N/A  da0p1
gptid/f2699e30-6ffc-11e3-8909-00259086c71a     N/A  da0p2
gptid/f3415fa0-6ffc-11e3-8909-00259086c71a     N/A  da1p1
gptid/f35c2829-6ffc-11e3-8909-00259086c71a     N/A  da1p2
gptid/f4381f04-6ffc-11e3-8909-00259086c71a     N/A  da2p1
gptid/f453b025-6ffc-11e3-8909-00259086c71a     N/A  da2p2
gptid/f5314250-6ffc-11e3-8909-00259086c71a     N/A  da3p1
gptid/f54c6d12-6ffc-11e3-8909-00259086c71a     N/A  da3p2
gptid/f627a9bc-6ffc-11e3-8909-00259086c71a     N/A  da4p1
gptid/f64188b1-6ffc-11e3-8909-00259086c71a     N/A  da4p2
gptid/f723fa33-6ffc-11e3-8909-00259086c71a     N/A  da5p1
gptid/f73e6dff-6ffc-11e3-8909-00259086c71a     N/A  da5p2
database records:
Code:
[root@HolyNAS] /# sqlite3 /data/freenas-v1.db
SQLite version 3.8.0.2 2013-09-03 17:11:13
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> select * from storage_disk;
Disabled|60|3000592982016|||{devicename}da0|1|0|Disabled|1|Auto||bay4|da||1|da0
Disabled|60|3000592982016|WD-WMCxxxxxxxxx||{serial}WD-WMCxxxxxxxxx|1|1|Disabled|4|Auto||bay3|da||1|da1
Disabled|60|3000592982016|WD-WMCxxxxxxxxx||{serial}WD-WMCxxxxxxxxx|1|2|Disabled|5|Auto||bay2|da||1|da2
Disabled|60|3000592982016|WD-WMCxxxxxxxxx||{serial}WD-WMCxxxxxxxxx|1|3|Disabled|6|Auto||bay1|da||1|da3
Disabled|60|3000592982016|WD-WMCxxxxxxxxx||{serial}WD-WMCxxxxxxxxx|1|4|Disabled|7|Auto||bay6|da||1|da4
Disabled|60|3000592982016|WD-WMCxxxxxxxxx||{serial}WD-WMCxxxxxxxxx|1|5|Disabled|8|Auto||bay5|da||1|da5
Disabled|Always On|7743995904|||{devicename}da6|1|6|Disabled|9|Auto||USB-scripts|da||1|da6
Disabled|60|2000398934016|WD-WMCxxxxxxxxx||{serial}WD-WMCxxxxxxxxx|1|0|Disabled|10|Auto||bay7|ada||1|ada0
Disabled|60|2000398934016|WD-WMCxxxxxxxxx||{serial}WD-WMCxxxxxxxxx|1|1|Disabled|11|Auto||bay8|ada||1|ada1
Disabled|Always On|8022654976|||{devicename}da7|1|7|Disabled|13|Auto|||da||0|da7
sqlite> select * from storage_encrypteddisk;
2|1|gptid/0f79f96b-3c09-11e3-b566-00259086c71a|10
2|2|gptid/0fff42cb-3c09-11e3-b566-00259086c71a|11
as you can see,