ZFS and 512n vs 512e vs 4kN drives

512n, 512e or 4kN drives for ZFS?


geronimo
Cadet
Joined: Jul 20, 2012
Messages: 6
Guys, I will try to keep it very short as I tend to get into unnecessary details.

My ultimate goal is to choose between 512n, 512e, or pure 4kN drives for a new ZFS storage appliance that will consist of either eight 4TB drives in four mirror vdevs, or possibly RAID-Z2, to mitigate any unrecoverable read errors that might occur while resilvering a mirror after a disk failure.

I understand the difference between 512n, 512e and 4kN drives: 512e drives are basically 4k-sector drives with an emulation layer for backward compatibility with software that only supports 512-byte sectors. I also know that 4k sectors are more efficient, since there are eight times fewer inter-sector gaps and address markers.

Still, it is quite difficult for me to decide which drives to choose: with HGST (Ultrastar 7K6000 SAS) I can pick between 512n, 512e and 4kN, and with Seagate (Enterprise Capacity 3.5 SAS) between 512e and 4kN.

One of the things that bothers me, given that ZFS uses the physical sector size when creating vdevs, is whether there would be any difference between the 512e and 4kN variants of the same model when used in ashift=12 vdevs. My understanding is that ZFS treats 512e and 4kN drives the same way in such vdevs, and my guess is that purely 4k-aligned reads and writes to a 512e drive would not engage the in-drive emulation logic, and so would not cause the performance degradation seen when the drive has to service 512-byte I/O. Would I be right to assume that?
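For anyone checking their own hardware first: the logical and physical sector sizes a drive reports can be read directly on FreeBSD/FreeNAS (the device name here is just an example):

diskinfo -v /dev/da0      (sectorsize = logical, stripesize = physical)
smartctl -i /dev/da0      (a 512e drive reports something like "Logical block size: 512 bytes" and "Physical block size: 4096 bytes")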

I am trying to figure out whether there is any reason to buy pure 4kN drives, which would add another 15% to my bill, as opposed to 512e drives.

On the other side of the coin, since HGST also offers 512n 4TB drives, I am trying to figure out whether those would be of any benefit beyond reducing allocation overhead for certain block sizes in RAID-Z setups. And I do understand that 512n drives could be a little slower due to the higher number of operations needed to address sectors.
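To make that RAID-Z overhead point concrete, here is my own back-of-the-envelope arithmetic for a single 4 KiB block on an 8-disk RAID-Z2 (ignoring compression and metadata):

ashift=12: 1 data sector + 2 parity sectors = 3 x 4 KiB = 12 KiB allocated
ashift=9:  8 data sectors + 2 parity sectors per row of 6 data sectors
           = 8 + 4 = 12 sectors x 512 B = 6 KiB allocated

So for small blocks, 512n drives at ashift=9 can allocate half the space; with large 128k records the parity overhead converges toward the nominal 2/8 either way and the difference mostly disappears.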

Any opinions are highly appreciated!



~Cheers~
 

Arwen
MVP
Joined: May 17, 2014
Messages: 3,611
From my limited understanding, no one should plan to use 512e drives (or a 4KB drive's 512e feature) unless they absolutely have to. It's basically slower.

It's a bit like RAID-5's read-modify-write cycle: if the block you want to write is not a full 4KB native disk block, the drive has to read the 4KB block, merge in your 512-byte change(s), and re-write the 4KB native block. Slower, though fully backward compatible.

Edit: It's basically slower for writes...
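If you want to actually see this penalty on a given drive, recent FreeBSD can measure synchronous write latency at increasing block sizes. Note this test is destructive to data on the target, so only point it at a scratch disk; the device name is just an example:

diskinfo -wS /dev/da0

On a 512e drive, the sub-4K sizes should show clearly worse latency than 4K and above, because each small synchronous write forces exactly the read-modify-write cycle described above.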
 

geronimo
Cadet
Joined: Jul 20, 2012
Messages: 6
Arwen said:
From my limited understanding, no one should plan to use 512e drives (or a 4KB drive's 512e feature) unless they absolutely have to. It's basically slower.

It's a bit like RAID-5's read-modify-write cycle: if the block you want to write is not a full 4KB native disk block, the drive has to read the 4KB block, merge in your 512-byte change(s), and re-write the 4KB native block. Slower, though fully backward compatible.

Arwen, you are right; the real question, though, is how ZFS would treat such 512e drives. From my understanding, it respects the physical sector size, which for 512e drives that don't lie about it is 4k. I would like to figure out whether there is any difference between using a 512e and a 4kN drive with ZFS in vdevs created with ashift=12, and whether ZFS would issue only 4k reads and writes to such 512e drives. That would help me decide whether to pay the 15%-20% premium for 4kN drives as opposed to 512e.
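For what it's worth, you can take the guesswork out at creation time either way (pool and device names here are just examples). FreeNAS ships with vfs.zfs.min_auto_ashift set to 12, so a plain

zpool create tank mirror /dev/da0 /dev/da1

comes out as an ashift=12 vdev on both 512e and 4kN drives, and on newer OpenZFS you can also force it explicitly with zpool create -o ashift=12. Once the vdev is ashift=12, ZFS issues only 4k-aligned I/O to either drive type, so the 512e emulation layer should never need to do a read-modify-write.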
 

xhoy
Dabbler
Joined: Apr 25, 2014
Messages: 39
Hi! Thank you for asking :) I was thinking about exactly the same thing today :) Did you get any answers?
 

xhoy
Dabbler
Joined: Apr 25, 2014
Messages: 39
I just did some reading on this. We are using FreeNAS with NFS for ESXi, and since the NFS layer does not communicate whether the backing store is 4k or not, there is no way our VMs know what is going on. So in our case I would think that using the old-fashioned 512n drives would be the best solution. But of course in your situation this could be different.
 

baviron
Cadet
Joined: Nov 29, 2017
Messages: 1
http://en.community.dell.com/techce...ce-comparison-between-4k-and-512e-hard-drives

Conclusion
4k drives outperform 512e drives on every counter. They provide double error-correction code with efficiency up to 97%, roughly 10% more than 512e drives. Misalignment issues are avoided entirely on 4K drives, which in turn gives customers better performance. The 4k drives cost only about $10 more, which comes with long-term benefits. Hence 4k drives should be chosen for any kind of OLTP workload.
 

Stux
MVP
Joined: Jun 2, 2016
Messages: 4,419
It shouldn’t matter. Just ensure your pool is ashift=12 (it should be) and perhaps prefer 4K drives.

BTW, you shouldn't be passing virtual drives into FreeNAS via ESXi for use as pool drives. You should be passing in the drive controller instead.
 
Joined: Dec 29, 2014
Messages: 1,135
@Stux Sorry, I know this is a really old thread, but it was the only one that search found on this subject. I have never messed with the ashift options. Are those only available at the time of pool creation? I saw what looked like a good deal on eBay for some 4kN drives, but I wasn't sure if that would cause big ripples in FreeNAS. It probably means I have too much time on my hands at the moment... :)
 

HoneyBadger
actually does care
Administrator | Moderator | iXsystems
Joined: Feb 6, 2014
Messages: 5,112
I have never messed with the ashift options. Are those only available at the time of pool creation?
Technically, ashift is set per vdev, but yes, it can only be set at creation time.

However, FreeNAS sets the sysctl vfs.zfs.min_auto_ashift to 12, so it should never create vdevs with an ashift below 12 (2^12 = 4096 bytes). 4Kn drives should therefore work fine when used to create a new pool or vdev.
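You can verify the running value from a shell; the second line below is what it should print:

sysctl vfs.zfs.min_auto_ashift
vfs.zfs.min_auto_ashift: 12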

The problem comes if you have a very old pool, imported and upgraded from previous versions, that is still riding on ashift=9 (512 bytes): if you have to use a 512e/4Kn drive to replace a failed unit, your performance will nosedive.
 
Joined: Dec 29, 2014
Messages: 1,135
The problem comes if you have a very old pool, imported and upgraded from previous versions, that is still riding on ashift=9 (512 bytes)
I was looking at the output of a zfs get, but I didn't see ashift as a value. Is that reflected in the recordsize? All my pools are at the default of 128k.
 

HoneyBadger
actually does care
Administrator | Moderator | iXsystems
Joined: Feb 6, 2014
Messages: 5,112
I was looking at the output of a zfs get, but I didn't see ashift as a value. Is that reflected in the recordsize? All my pools are at the default of 128k.

Nope, it's buried deeper in the weeds.

zdb -U /data/zfs/zpool.cache | grep ashift

will pull out the lines you want - if any of them aren't 12+ then we need to go digging a little more to find out which one.
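For reference, the filtered output looks something like this, one line per vdev (values here are illustrative):

            ashift: 12
            ashift: 12

Any line reading ashift: 9 points at a vdev created with 512-byte sectors.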
 
Joined: Dec 29, 2014
Messages: 1,135
That worked, and all my pools are set to 12. I figured as much, since I re-created all the pools during my latest round of FreeNAS updates. Thanks!
 

HoneyBadger
actually does care
Administrator | Moderator | iXsystems
Joined: Feb 6, 2014
Messages: 5,112
That worked, and all my pools are set to 12. I figured as much, since I re-created all the pools during my latest round of FreeNAS updates. Thanks!
Awesome. Feel free to run an unfiltered zdb -U /data/zfs/zpool.cache if you want to see a bunch more information about your pool (physical paths to your disks, the txgs your vdevs were created in, your metaslab size, etc.).
 

smcclos
Dabbler
Joined: Jan 22, 2021
Messages: 43
OK, here is my $64,000 question. In the past I expanded my pool by swapping out drives and resilvering: 3x3TB in place of 3x2TB, and then 4x4TB in place of 4x3TB.

I believe all of these drives are 512n. Now I want to swap in 4x8TB drives for the 4x4TB ones, but the 8TB drives are 4Kn while the 4TB drives are 512n. Will this work exactly the same, or do I have to worry about anything?

I checked, and all my pools are ashift 12.

I think I found the answer to my own question: drives "flow downhill", per the thread "Making 4Kn & 512E play nicer in the same pool".
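For anyone landing here later, the mechanics of the swap are the usual one-drive-at-a-time replace/resilver cycle (pool and device names are hypothetical):

zpool replace tank da2 da6
zpool status tank      (wait for the resilver to complete, then do the next drive)

Because the pool is already ashift=12, ZFS addresses the 4Kn drives in native 4 KiB sectors and nothing else changes. The extra capacity appears once the last drive in the vdev has been replaced, with autoexpand=on set or after a zpool online -e.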
 