"Force 4096 bytes sector size" Weird Behaivor


TScott
Cadet
Joined: Aug 30, 2012
Messages: 9

Version: FreeNAS-8.2.0-RELEASE-p1-x64 (r11950)

When creating a zpool with two mirrored vdevs using the "Force 4096 bytes sector size" option in the Volume Manager, I noticed a strange issue. First, creating the pool and adding the initial mirrored vdev works without issue:

  1. I set the volume name.
  2. I select the two member disks.
  3. I select ZFS as the filesystem.
  4. I select mirrored for the drive arrangement.
  5. I select the "Force 4096 bytes sector size" option.

A new pool with a single mirrored vdev is created successfully. From SSH, issuing zdb [POOLNAME] | grep ashift shows an ashift of 12 for that vdev.

Going back into the Volume Manager and selecting the previously created volume to extend, I select the remaining two drives, choose mirrored, and the force 4k option. The pool appears to be extended successfully, but under Volume Status the newly mirrored pair, while showing "online", has drive names ending in .nop. Further, those two drives are still selectable in the Volume Manager for creating a new volume.

Destroying the volume and starting over without the force 4k option, the pool is created and expanded as expected: the drives in the second mirrored vdev no longer show up in the Volume Manager, and the drive names under Volume Status look right. Unfortunately, both vdevs end up aligned to 512 bytes instead of 4k (ashift of 9 for both). I also tried creating the initial pool and mirror with the force 4k option and then adding the second mirror without it; that results in an ashift of 12 for the first vdev but 9 for the second.
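For reference, that last check looks like this (the pool name is a placeholder and the annotations are mine; zdb prints one ashift line per vdev):

Code:
zdb poolname | grep ashift
ashift=12   # first mirror, created with the force 4k option
ashift=9    # second mirror, added without it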

When creating the pool, I noticed in the console footer messages that FreeNAS seems to create the .nop devices and then delete them for the initial vdev. But when expanding the pool, it only seems to create them (I didn't see any messages about deleting them). The workaround is simple enough: I followed the steps outlined in various posts on this forum and around the web about using "geom nop create" to force 4k alignment. Doing that, the pool is created successfully with 4k alignment and appears properly in the GUI. For reference, the commands I ran are below; the drives were Seagate Barracuda 3TB SATA3 drives (ST3000DM001). Seagate claims these drives report their sector alignment correctly to both legacy and modern OSes, but that doesn't appear to be the case (at least with FreeBSD/ZFS).

Code:
# Create a transparent 4k-sector provider on top of each disk.
geom nop create -v -S 4096 da0
geom nop create -v -S 4096 da1
geom nop create -v -S 4096 da2
geom nop create -v -S 4096 da3

# Build the pool on the .nop providers so both vdevs get ashift=12.
# The mountpoint error just means the pool couldn't be mounted at
# /poolname; the pool itself is still created.
zpool create poolname mirror da0.nop da1.nop mirror da2.nop da3.nop
cannot mount '/poolname': failed to create mountpoint

# Export, drop the nop providers, then re-import from the GUI.
zpool export poolname
geom nop destroy -v da0.nop da1.nop da2.nop da3.nop
[Auto-imported the pool from the GUI]

# Both vdevs now report 4k alignment.
zdb poolname | grep ashift
ashift=12
ashift=12
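As for what the drives themselves report, diskinfo is a quick way to check. The output below is a sketch with illustrative values (not captured from my box), but it shows where ZFS gets the 512 from on a 512-byte-emulation drive like these:

Code:
diskinfo -v da0
da0
        512             # sectorsize
        3000592982016   # mediasize in bytes (2.7T)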
 

paleoN
Wizard
Joined: Apr 22, 2012
Messages: 1,403
The pool appears to be extended successfully, but under Volume Status the newly mirrored pair, while showing "online", has drive names ending in .nop.
This is expected behavior until a reboot. See Ticket 1717.

Further, those two drives are still selectable in the Volume Manager for creating a new volume.
That sounds like a bug.
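gnop providers live only in memory, so a reboot clears them. If you want to confirm, something like this (a sketch; device names and exact output are examples) should show the .nop providers before the reboot and nothing after:

Code:
# Before the reboot the transparent providers are still listed.
geom nop status
       Name  Status  Components
    da2.nop     N/A  da2
    da3.nop     N/A  da3

# After the reboot they no longer appear and the pool imports on the
# raw devices.
geom nop status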
 

TScott
Cadet
Joined: Aug 30, 2012
Messages: 9
This is expected behavior until a reboot. See Ticket 1717.

That sounds like a bug.

Thanks paleoN. I'm embarrassed to say that, of the many things I tried, a reboot after expanding the pool wasn't one of them. That may have addressed the two drives still showing up in the Volume Manager as well. I'll have to spin up a FreeNAS test VM and try again.
 

bollar
Patron
Joined: Oct 28, 2012
Messages: 411
FWIW, I have created vdevs like the OP did, and the .nop names survive the reboot and are still visible in the GUI. They appear to be gone from the ZFS side, though (all disks are referred to by gptid).

I also found that disks added to the pool while forcing 4K are still selectable in the Volume Manager when adding new vdevs. If you select one, you'll get an error, of course.
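For reference, once the disks are referenced by gptid, zpool status looks roughly like this (a sketch; the gptid values are placeholders):

Code:
zpool status poolname
  pool: poolname
 state: ONLINE
config:

        NAME                     STATE     READ WRITE CKSUM
        poolname                 ONLINE       0     0     0
          mirror-0               ONLINE       0     0     0
            gptid/<uuid-of-da0>  ONLINE       0     0     0
            gptid/<uuid-of-da1>  ONLINE       0     0     0
          mirror-1               ONLINE       0     0     0
            gptid/<uuid-of-da2>  ONLINE       0     0     0
            gptid/<uuid-of-da3>  ONLINE       0     0     0

errors: No known data errors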
 