Unable to create pool on some disks

Elliott

Dabbler
Joined
Sep 13, 2019
Messages
40
I'm creating several pools with different settings for testing, and I noticed some disks won't let me create a zpool. Apparently the OS thinks the drive is busy, but how can I find out what's holding it?

Code:
root@freenas ~ # zpool create fred da14
cannot create 'fred': no such pool or dataset

root@freenas ~ # gpart show da14
=>         40  19532873648  da14  GPT  (9.1T)
           40           88        - free -  (44K)
          128      4194304     1  freebsd-swap  (2.0G)
      4194432  19528679256     2  freebsd-zfs  (9.1T)

root@freenas ~ # gpart destroy -F da14
gpart: Device busy

root@freenas ~ # dd if=/dev/zero of=/dev/da14 
dd: /dev/da14: Operation not permitted

root@freenas ~ # smartctl -a /dev/da14
smartctl 6.6 2017-11-05 r4594 [FreeBSD 11.2-STABLE amd64] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor:               HGST
Product:              HUH721010AL5200
Revision:             A384
Compliance:           SPC-4
User Capacity:        10,000,831,348,736 bytes [10.0 TB]
Logical block size:   512 bytes
Physical block size:  4096 bytes
LU is fully provisioned
Rotation Rate:        7200 rpm
Form Factor:          3.5 inches
Logical Unit id:      0x5000cca2674cb104
Serial number:        JEHB5HPN
Device type:          disk
Transport protocol:   SAS (SPL-3)
Local Time is:        Tue Oct  1 18:01:25 2019 PDT
SMART support is:     Available - device has SMART capability.
SMART support is:     Enabled
Temperature Warning:  Enabled

=== START OF READ SMART DATA SECTION ===
SMART Health Status: OK

Current Drive Temperature:     42 C
Drive Trip Temperature:        85 C

Manufactured in week 47 of year 2018
Specified cycle count over device lifetime:  50000
Accumulated start-stop cycles:  52
Specified load-unload count over device lifetime:  600000
Accumulated load-unload cycles:  99
Elements in grown defect list: 0

Vendor (Seagate) cache information
  Blocks sent to initiator = 385835031592960

Error counter log:
           Errors Corrected by           Total   Correction     Gigabytes    Total
               ECC          rereads/    errors   algorithm      processed    uncorrected
           fast | delayed   rewrites  corrected  invocations   [10^9 bytes]  errors
read:          0        0         0         0       7891        583.598           0
write:         0        0         0         0       2551       4672.373           0
verify:        0        0         0         0        324          0.007           0

Non-medium error count:        0

No self-tests have been logged

root@freenas ~ # zpool status
  pool: double
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    double      ONLINE       0     0     0
      da2       ONLINE       0     0     0
      da3       ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      ada0p2    ONLINE       0     0     0

errors: No known data errors

  pool: quad
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    quad        ONLINE       0     0     0
      da4       ONLINE       0     0     0
      da5       ONLINE       0     0     0
      da6       ONLINE       0     0     0
      da7       ONLINE       0     0     0

errors: No known data errors

  pool: single
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    single      ONLINE       0     0     0
      da1       ONLINE       0     0     0

errors: No known data errors
 

Elliott

Dabbler
Joined
Sep 13, 2019
Messages
40
Aha! This problem is due to a very strange swap configuration: it looks like I have five 2-way gmirrors used for swap! This system is a clean install of FreeNAS 11.2-U6 on a single SATA SSD. I have 24 SAS disks in a JBOD (SAS3008 chip) which may have been used previously on another system. So why the hell did it take 10 of my disks for swap? Also, shouldn't there be an additional line in fstab for the root partition? I'm not sure what a typical fstab should look like on FreeBSD.

Code:
root@freenas ~ # cat /etc/fstab
fdescfs    /dev/fd    fdescfs rw    0 0
/dev/mirror/swap0.eli    none    swap    sw    0    0
/dev/mirror/swap1.eli    none    swap    sw    0    0
/dev/mirror/swap2.eli    none    swap    sw    0    0
/dev/mirror/swap3.eli    none    swap    sw    0    0
/dev/mirror/swap4.eli    none    swap    sw    0    0

root@freenas ~ # geom mirror status
        Name    Status  Components
mirror/swap0  COMPLETE  da23p1 (ACTIVE)
                        da22p1 (ACTIVE)
mirror/swap1  COMPLETE  da21p1 (ACTIVE)
                        da20p1 (ACTIVE)
mirror/swap2  COMPLETE  da19p1 (ACTIVE)
                        da18p1 (ACTIVE)
mirror/swap3  COMPLETE  da17p1 (ACTIVE)
                        da16p1 (ACTIVE)
mirror/swap4  COMPLETE  da15p1 (ACTIVE)
                        da14p1 (ACTIVE)


In case anyone else has this problem, I fixed it with swapoff -a followed by geom mirror destroy -f swap{1..4}. Does FreeNAS like to have swap?
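For completeness, the full check-and-teardown looks something like this (a sketch; note the mirrors are numbered from swap0, and da14 only frees up once its mirror consumer is gone):

```shell
# What is holding the disks? Swap devices and GEOM mirror consumers:
swapinfo
geom mirror status
# Disable all active swap first -- the .eli swap devices sit on top of the mirrors
swapoff -a
# Tear down every swap mirror; the numbering starts at swap0, not swap1
for m in swap0 swap1 swap2 swap3 swap4; do
    geom mirror destroy -f "$m"
done
# With the mirror consumers gone, the partition table is no longer busy
gpart destroy -F da14
zpool create fred da14
```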
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
So why the hell did it take 10 of my disks for swap?
Because FreeNAS does that by default, and it has (in varying ways) for as long as it's been around. One reason is to account for variations in disk size when you need to replace a disk.
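As a sanity check on the gpart output earlier in the thread, the swap slice really is the default 2 GiB per data disk: 4194304 sectors of 512 bytes each.

```shell
# 4194304 sectors x 512 bytes/sector = 2 GiB of swap per data disk
echo $((4194304 * 512)) bytes
echo $((4194304 * 512 / 1073741824)) GiB
```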
shouldn't there be an additional line in fstab for the root partition?
No, ZFS pools ordinarily have mountpoints set as a pool/dataset property and wouldn't appear in fstab.
Does FreeNAS like to have swap?
Any *nix system should have some swap.
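To make the fstab point concrete: here is a hypothetical fstab from a FreeBSD install with a UFS root, for comparison (device names are illustrative). A ZFS-root system has no "/" line at all, because ZFS mounts datasets from its own mountpoint property (see zfs get mountpoint), leaving fstab with only the odds and ends like swap and fdescfs.

```
# Hypothetical UFS-root FreeBSD fstab, for comparison only
/dev/ada0p2    /       ufs     rw      1 1
/dev/ada0p3    none    swap    sw      0 0
```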
 

Elliott

Dabbler
Joined
Sep 13, 2019
Messages
40
Thanks. Maybe this would have been seamless if I'd used the GUI, but I like the shell, especially for benchmarking.
I totally get the reasoning for partitions slightly smaller than the disk. But it seems like all this extra swap usage would reduce throughput on the pool; is it not more efficient to just have one swap partition on the boot disk? And why are they mirrored?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
is it not more efficient to just have one swap partition on the boot disk?
Not when the boot devices were typically USB sticks. And the system isn't ordinarily using much swap in any event.
And why are they mirrored?
If a disk failed, and swap was being used on that disk, it would cause a system crash. Mirroring swap avoids that problem.
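A rough sketch of how one of those mirrored swap devices gets assembled (the names swap5, da8p1, and da9p1 are hypothetical; the .eli suffix in fstab is what asks FreeBSD to layer one-time GELI encryption on top when swap is enabled):

```shell
# Hypothetical: mirror two swap partitions into one gmirror device
gmirror label -b prefer swap5 /dev/da8p1 /dev/da9p1
# The .eli suffix makes rc(8) wrap the mirror in one-time GELI encryption
echo "/dev/mirror/swap5.eli    none    swap    sw    0    0" >> /etc/fstab
swapon -a
```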
 

Elliott

Dabbler
Joined
Sep 13, 2019
Messages
40
Yeah, good point. I hope to never use it anyway. When the ARC fills up RAM, I wonder if there's any performance difference between writing to swap vs. writing to the ZIL on disk.
I have read about people using USB boot drives, is that really still happening? I don't get it, when SATA drives are so cheap these days, and so much larger, faster, and more reliable.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I have read about people using USB boot drives, is that really still happening?
Many of the design and architecture decisions were made when that was the case, regardless of whether it still is--but yes, people still do that. It isn't really recommended any more, but it's still done in systems without enough SATA ports for a boot device (or where the users haven't read, or don't care about, the hardware recommendations).
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Guilty here :smile:

I run from mirrored USB drives because I want all my SATA ports used for the actual pool. Also, because FreeNAS as an OS is so easy to restore, I'll take the risk; it is not that big. Re-install on clean USB drives, restore the backups (and the system dataset is in the pool anyway); not too much trouble, in my opinion.
 