SATADOM and SSD pooled in Factory Configuration with Accelerator Pack

Kellin

Cadet
Joined
Sep 20, 2020
Messages
5
Greetings all!

First time FreeNAS User working my way through installation of a FreeNAS Mini XL+ and am trying to align what I expected with what I'm seeing on the machine I received.

System Details
  • FreeNAS Mini XL+
  • SSD Accelerator Pack (two 480 GB Micron SSDs)
  • 64 GB Memory
  • 5 WD drives in a single RAIDZ2 pool
It seems that, from the factory, a zpool named freenas-boot was created with the 16 GB SATADOM and one of the two SSDs.

This pool only shows up in System > System Dataset and not in Storage > Pools.

Assumptions I have made from my reading:
  1. The system dataset should not live on the same pool as the data storage, because it contains the encryption keys needed to decrypt that pool
  2. Partitioning the SSD to carve out a single 16 GB partition to pool with the SATADOM is against best practices, because cache devices should have no other I/O hitting them
  3. The system dataset must live on a pool
What is the expectation for operating with this particular configuration with the two SSDs from the accelerator pack?
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
This would be a misconfiguration if the SATADOM and one of the SSDs were attached in the same pool. I have never heard of this before.

Please contact iXsystems support if you have any trouble uncoupling them.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Could you post the output of zpool status and gpart show, please?
 

Kellin

Cadet
Joined
Sep 20, 2020
Messages
5
I will contact them later today @morganL, thanks for the heads up.

Here's the output from those commands @Patrick M. Hausen

Code:
root@freenas[~]# zpool status
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:06 with 0 errors on Mon Sep 21 06:45:06 2020
config:

        NAME        STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          ada7p2    ONLINE       0     0     0

errors: No known data errors

  pool: primary-storage-pool
 state: ONLINE
  scan: none requested
config:

        NAME                                                STATE     READ WRITE CKSUM
        primary-storage-pool                                ONLINE       0     0     0
          raidz2-0                                          ONLINE       0     0     0
            gptid/be38dd5d-fba6-11ea-b4c0-3cecef0cd190.eli  ONLINE       0     0     0
            gptid/be580ca6-fba6-11ea-b4c0-3cecef0cd190.eli  ONLINE       0     0     0
            gptid/be649816-fba6-11ea-b4c0-3cecef0cd190.eli  ONLINE       0     0     0
            gptid/be945190-fba6-11ea-b4c0-3cecef0cd190.eli  ONLINE       0     0     0
            gptid/be76bcb0-fba6-11ea-b4c0-3cecef0cd190.eli  ONLINE       0     0     0
        cache
          gptid/be35de36-fba6-11ea-b4c0-3cecef0cd190.eli    ONLINE       0     0     0

errors: No known data errors


Code:
root@freenas[~]# gpart show
=>      40  31277152  ada7  GPT  (15G)
        40    532480     1  efi  (260M)
    532520  30736384     2  freebsd-zfs  (15G)
  31268904      8288        - free -  (4.0M)

=>         40  19532873648  ada3  GPT  (9.1T)
           40           88        - free -  (44K)
          128      4194304     1  freebsd-swap  (2.0G)
      4194432  19528679256     2  freebsd-zfs  (9.1T)

=>       40  937703008  ada5  GPT  (447G)
         40         88        - free -  (44K)
        128  937702920     1  freebsd-zfs  (447G)

=>         40  19532873648  ada0  GPT  (9.1T)
           40           88        - free -  (44K)
          128      4194304     1  freebsd-swap  (2.0G)
      4194432  19528679256     2  freebsd-zfs  (9.1T)

=>         40  19532873648  ada4  GPT  (9.1T)
           40           88        - free -  (44K)
          128      4194304     1  freebsd-swap  (2.0G)
      4194432  19528679256     2  freebsd-zfs  (9.1T)

=>         40  19532873648  ada2  GPT  (9.1T)
           40           88        - free -  (44K)
          128      4194304     1  freebsd-swap  (2.0G)
      4194432  19528679256     2  freebsd-zfs  (9.1T)

=>         40  19532873648  ada1  GPT  (9.1T)
           40           88        - free -  (44K)
          128      4194304     1  freebsd-swap  (2.0G)
      4194432  19528679256     2  freebsd-zfs  (9.1T)
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
The SSD is not attached to the boot pool.

It is either completely unconfigured (no partition table), or broken, or otherwise not visible to the system. Please run camcontrol devlist.
 

Kellin

Cadet
Joined
Sep 20, 2020
Messages
5
Here's the output you requested.

Code:
root@freenas[~]# camcontrol devlist
<WDC WD100EMAZ-00WJTA0 83.H0A83>   at scbus0 target 0 lun 0 (pass0,ada0)
<WDC WD100EMAZ-00WJTA0 83.H0A83>   at scbus1 target 0 lun 0 (pass1,ada1)
<WDC WD100EMAZ-00WJTA0 83.H0A83>   at scbus2 target 0 lun 0 (pass2,ada2)
<WDC WD100EMAZ-00WJTA0 83.H0A83>   at scbus3 target 0 lun 0 (pass3,ada3)
<WDC WD100EMAZ-00WJTA0 83.H0A83>   at scbus4 target 0 lun 0 (pass4,ada4)
<AHCI SGPIO Enclosure 2.00 0001>   at scbus8 target 0 lun 0 (pass5,ses0)
<Micron 5200 MTFDDAK480TDC D1MU020>  at scbus10 target 0 lun 0 (pass6,ada5)
<Micron 5200 MTFDDAK480TDC D1MU020>  at scbus11 target 0 lun 0 (pass7,ada6)
<16GB SATA Flash Drive SFDK004A>   at scbus12 target 0 lun 0 (pass8,ada7)
<AHCI SGPIO Enclosure 2.00 0001>   at scbus13 target 0 lun 0 (pass9,ses1)


The screenshot below is what started me on this search. The drive is the same size as the SATADOM here.

disks_screen.png


Thank you for helping me @Patrick M. Hausen, it is appreciated. I have been trying to find the BSD equivalents of Linux commands; I thought camcontrol was only for drivers, so this was a helpful thing to see.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
So it is simply unused. The reported size is nonsense - unless, again, there is something broken with that SSD.
If your intention is to add it to the pool as an SLOG device, you can do just that in the UI.
Or you could add it as a second cache device (ZFS stripes L2ARC devices rather than mirroring them). Which might be interesting once we get persistent L2ARC ...
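For illustration, the raw ZFS commands behind those UI actions look roughly like this. This is a sketch only: the pool and device names are taken from the thread, and on FreeNAS the middleware partitions disks and references them by gptid, so don't run these verbatim against a production system.

```shell
# Sketch, assuming the unused SSD is ada6 (as in this thread).

# Preview adding it as a log (SLOG) vdev without changing the pool (-n = dry run):
zpool add -n primary-storage-pool log ada6

# Actually add it as a log vdev:
zpool add primary-storage-pool log ada6

# Or add it as a second cache (L2ARC) device instead; ZFS stripes
# cache devices, it does not mirror them:
zpool add primary-storage-pool cache ada6

# Confirm the new vdev shows up under "logs" or "cache":
zpool status primary-storage-pool
```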
 

Kellin

Cadet
Joined
Sep 20, 2020
Messages
5
This makes sense to me @Patrick M. Hausen , though I still have some questions to help improve my understanding.

Which command would show me how the boot pool filesystem is laid out? (e.g. raidz1, raidz2, something else?)

I see that my spinning disks showed that information in zpool status but the boot pool does not:

Code:
root@freenas[~]# zpool status
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:06 with 0 errors on Mon Sep 21 06:45:06 2020
config:

        NAME        STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          ada7p2    ONLINE       0     0     0

errors: No known data errors


Why would the bootpool not show up in the WebUI when I look at the pools?

Finally, the UI only ever sees that drive as having 16 GB, and from the command line gpart doesn't show it at all, as you've seen.

What's the proper way to approach the drive? From my own experience, I'd destroy the disk label and reformat to see if it recognizes the proper size, e.g. gpart destroy ada6. Is that the right tack in BSD, or are there less destructive/preferred approaches?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
The ada6 device has been underprovisioned using the HPA (host protected area) to be used for SLOG purposes (IIRC, iX sets their SLOG devices to 16G - feel free to correct me on this one if I'm off, @morganL ) but it hasn't been added to the pool. You should be able to do this from the GUI, but make sure that ada6 is added as a LOG type vdev. If you accidentally add it to the main pool, it could be difficult (or impossible) to remove depending on your ZFS version. If you have any concerns with this or are unsure about the process, give iXsystems support a call.
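A sketch of how you might confirm the HPA from the FreeBSD command line, using the device name from this thread. The second command is deliberately commented out: removing the HPA would undo the overprovisioning iX configured, so treat it as reference only and check with support first.

```shell
# Show the HPA state for ada6: "HPA enabled" plus the visible vs.
# native maximum sector counts.
camcontrol hpa ada6

# Removing the HPA would mean raising the visible sector count back to
# the native maximum (-y skips the confirmation prompt). Destructive to
# the factory provisioning -- left commented out on purpose:
# camcontrol hpa ada6 -s <native-max-sectors> -y
```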
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
TIL "host protected area" - thanks, @HoneyBadger. So just ignore my "nonsense" statement above ...
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Which command would show me what was applied to the bootpool filesystem? (e.g. raidz1, raidz2, probably other?)
Your command above shows that it is built from a single disk partition, namely ada7p2. That means no redundancy - just ZFS on a single device.
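If it ever did have redundancy, zpool status would show a mirror or raidz line indented above the devices, just as raidz2-0 appears in your data pool. A single-device pool can in principle be promoted to a mirror with zpool attach; the second device name below is a made-up placeholder, and on FreeNAS you would normally do this through the boot-pool UI instead:

```shell
# Sketch: attach a second device to mirror the existing boot device.
# <new-partition> is hypothetical -- substitute a real, empty partition.
zpool attach freenas-boot ada7p2 <new-partition>

# Afterwards, zpool status would show a "mirror-0" vdev containing
# ada7p2 and the new partition while the mirror resilvers.
zpool status freenas-boot
```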
 