
List of known SMR drives

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Hard drives that write data in overlapping, "shingled" tracks have greater areal density than ones that do not. For cost and capacity reasons, manufacturers are increasingly moving to SMR (Shingled Magnetic Recording). SMR is a form of PMR (Perpendicular Magnetic Recording): the tracks are perpendicular, and they are also shingled - layered partially on top of each other. This table uses CMR (Conventional Magnetic Recording) to mean "PMR without the use of shingling".

SMR allows vendors to offer higher capacity without the need to fundamentally change the underlying recording technology.
New technology such as HAMR (Heat-Assisted Magnetic Recording) can be used with or without shingling; the first HAMR drives are expected in 2020, in either flavor.

SMR is well suited for high-capacity, low-cost use where writes are few and reads are many.

SMR has worse sustained write performance than CMR, which can cause severe issues during resilver or other write-intensive operations, up to and including failure of that resilver. It is often desirable to choose a CMR drive instead. This thread attempts to pull together known SMR drives, and the sources for that information.

There are three types of SMR:
- Drive Managed, DM-SMR, which is opaque to the OS. This means ZFS cannot "target" writes, and is the worst type for ZFS use. As a rule of thumb, avoid DM-SMR drives, unless you have a specific use case where the increased resilver time (a week or longer) is acceptable, and you know the drive will function for ZFS during resilver. See (h)
- Host Aware, HA-SMR, which is designed to give ZFS insight into the SMR process. Note that ZFS code to use HA-SMR does not appear to exist. Without that code, a HA-SMR drive behaves like a DM-SMR drive where ZFS is concerned.
- Host Managed, HM-SMR, which is not backwards compatible and requires ZFS to manage the SMR process.

I am assuming ZFS does not currently handle HA-SMR or HM-SMR drives, as this would require Block Pointer Rewrite. See page 24 of (d) as well as (i) and (j).
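Since the three types differ in what they report to the host, a rough way to see what a drive actually advertises is the block layer's zone model. This is only a sketch, assuming a Linux box is handy; DM-SMR drives report "none" here, same as CMR, which is exactly why they are so hard to spot:

# Prints "none", "host-aware" or "host-managed" for each SATA/SAS disk
for d in /sys/block/sd*; do
    printf '%s: %s\n' "$(basename "$d")" "$(cat "$d/queue/zoned")"
done

# A recent lsblk can show the same thing in one go, if the ZONED column is supported
lsblk -d -o NAME,SIZE,ZONED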

The list of SMR drives known to the community is in the "Overview" tab. This is also where the referenced sources are.
 
Last edited:

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
Just a heads-up: I converted this thread into a Resource. The thread will continue as the discussion thread for the resource, but I left the table out of the first post to avoid having two copies that might get out of sync.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Oh hahaha I literally just threw a report at you a few minutes ago asking you to make it into a resource. Thanks! ;-)
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
That rabbit hole went a little deeper than I thought it might, with HA-SMR and BPR.

I am thinking about how to detect a DM-SMR drive, since manufacturers are sneaky. One method, though it would take a long time, is to write 1TB of zeros: the CMR portion of the drive won't be 1TB, so the write rate should drop once writes spill onto the shingled zones. The amount written could be reduced, depending on the drive and the size of its CMR portion. As this test would only need to be run once for any given manufacturer's model, I am not too concerned about how long it takes.

dd if=/dev/zero of=/dev/rawdiskdev bs=2048k count=500k status=progress
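A slightly more instrumented sketch of the same idea, writing in 1 GiB chunks and logging the time per chunk so the point where the rate collapses is visible. /dev/da5 is a placeholder for the drive under test, and everything on it is destroyed:

# Write 1 TiB as 1024 x 1 GiB sequential chunks and log the time per chunk.
# On a DM-SMR drive the per-chunk time typically jumps once the CMR cache fills.
DISK=/dev/da5    # placeholder scratch drive -- all data on it will be destroyed
i=0
while [ "$i" -lt 1024 ]; do
    t0=$(date +%s)
    dd if=/dev/zero of="$DISK" bs=1m count=1024 seek=$((i * 1024)) 2>/dev/null
    t1=$(date +%s)
    echo "chunk $i: $((t1 - t0)) s for 1 GiB"
    i=$((i + 1))
done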
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
Straight-up writing zeros in a large stream is probably not a good test, if the firmware team did their job right. You would need very random accesses to nearby LBAs, but then the firmware can optimize it as one large streaming read and return only relevant LBAs.
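One way to probe that, as a rough sketch only (fio is assumed to be installed, and /dev/da5 is a placeholder scratch drive whose data is destroyed): sustained random writes spread across the whole device should eventually exhaust a DM-SMR drive's media cache and show up as a sharp cliff in the bandwidth log, while a CMR drive stays roughly flat.

# Random 1 MiB writes across the whole device for 30 minutes, logging
# bandwidth once per second so a cache-exhaustion cliff is easy to spot.
fio --name=smr-probe --filename=/dev/da5 --rw=randwrite --bs=1M \
    --direct=1 --ioengine=psync --time_based --runtime=1800 \
    --write_bw_log=smr-probe --log_avg_msec=1000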

So it's a deep one we have here...

Realistically, one of the better indicators is number of platters, which is easyish to estimate. We're at what, 2 TB per platter with SMR or energy-assisted recording? The latter is very marketable, so it's not going to be sneaked into anything.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Given a strong enough CPU, one could always if=/dev/random. And yes, per-platter capacity sounds like a good indicator.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Blocks & Files just released a note on Toshiba 3.5" and 2.5" drives. Table updated.
 

FJep

Dabbler
Joined
Mar 7, 2019
Messages
38
Thanks for the list.
It would be even better if it were extended with a list of drives guaranteed to be CMR.
 

radovan

Cadet
Joined
Apr 13, 2020
Messages
5
Really nice resource, thank you. Unfortunately it's two weeks too late and I have a few WD Red EFAXes on the table. Anybody need a paperweight?
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
@FJep , I very deliberately did not do that. One, that'll be quite the list, and two, with manufacturers getting a little sneaky, I am not sure how reliable that would remain. Better that people do their own research before buying drives.

From what I can see, safe bets right now that fit the "SOHO NAS category" are:
WD Red 8TB and larger
WD Red Pro
HGST Ultrastar HE, including shucked ones
Seagate Ironwolf / Pro
Toshiba N300 / X300
 
Last edited:
Joined
May 10, 2017
Messages
838
Ericloewe said:
Realistically, one of the better indicators is number of platters, which is easyish to estimate. We're at what, 2 TB per platter with SMR or energy-assisted recording? The latter is very marketable, so it's not going to be sneaked into anything.

Yep, that's currently the best way: any 3.5" drive with 2TB platters and any 2.5" drive with 1TB platters can only be SMR.
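To make the heuristic concrete (hypothetical numbers, purely as an illustration): a 3.5" drive sold as 6TB that the platter DB lists with three platters works out to 6TB / 3 = 2TB per platter, which at today's densities means SMR or energy-assisted recording; the same 6TB over four platters is 1.5TB per platter and consistent with plain CMR.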
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
I've gone through the platter-size DB and added any 3.5" drive with 2TB platters or larger.

Edit: Added references to more 2.5" drives. Did not want to "blow up" the table, and 2.5" is less common for NAS use.
 
Last edited:
Joined
May 10, 2017
Messages
838
After some research I believe I can say with confidence that the WD101EFAX is not SMR: it's a rebadged HC330. As you can see, they look physically identical and even have the same R/N code, so it's a 6-platter CMR drive and the air-filled counterpart of the helium-filled 10TB WD100EFAX.

[Attached images: hc330.jpg, wd101efax.PNG, wd101efax3.PNG]
 
Last edited:

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Oh. That platter DB is yours, isn't it? Nice :)
 

Tron

Cadet
Joined
Apr 16, 2020
Messages
2
Excellent info. Getting ready to build a new system, and I had a discussion with a friend having major issues with their new system. Learning about PMR/CMR and SMR and how it impacts ZFS. Found a few videos and links related to the subject:

Defining SMR based HDDs and the problem created in OpenZFS pools with the OpenZFS devs - Related SMR discussion starts at 20:20
https://youtu.be/mS4bfbEq46I?list=PLaUVvul17xSegxJjny2Gz85IgIyq9wu8n&t=1220
Resilvering and SMR specifics at 31:19 https://youtu.be/mS4bfbEq46I?list=PLaUVvul17xSegxJjny2Gz85IgIyq9wu8n&t=1879
Possible solutions for SMR at 46:06 https://youtu.be/mS4bfbEq46I?list=PLaUVvul17xSegxJjny2Gz85IgIyq9wu8n&t=2766

Top down deep-dive SMR discussion from manufacturer HGST https://youtu.be/a2lnMxMUxyc
The "golden" question of SMR compatibility with OpenZFS at 30:03 https://youtu.be/a2lnMxMUxyc?t=1803

Long resilvering times https://forums.freebsd.org/threads/resilver-taking-very-long-time.61643/

Difficult or impossible to rebuild RAID 5 and RAID 6 arrays with SMR
 
Joined
May 10, 2017
Messages
838
Tron said:
Difficult or impossible to rebuild RAID 5 and RAID 6 arrays with SMR

I also read that, but IIRC there were several users here who successfully resilvered one or more devices using WD SMR drives, and except for the extra-long time there were no errors. I myself had a pool with Seagate SMR drives some time ago and never had issues with resilvering (except for how long it took), and I resilvered 8 drives.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Thank you @Tron, I added those two YT videos to the resource.
That long resilvering time you link is unrelated to SMR: those were CMR SAS disks, and the issue was with scheduled snapshot creation.
The IDNF errors appear to be a WD Red firmware issue. WD needs to acknowledge this as an issue and fix it. Until then, using SMR WD Reds even in archive use cases is fraught. Results with the Seagate Archive v2 are encouraging: 4.5 days of resilver time for an 8x8TB raidz2 in the linked test.

Arguably, 8TB CMR drives are so affordable that going for SMR is not needed. I expect that as we get to 20TB capacity this year and 50TB by 2026 (Seagate roadmap), we will deal with SMR more and need to figure out what that means for ZFS. It's easy to say "that's for archival purposes only", but maybe we will see more hybrid use cases in the future. Also, the resilver time on an unassisted 50TB SMR draid2/3 will be interesting to see - unassisted as in "no specific ZFS code to handle HA-SMR".
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
A couple of random-ish thoughts on ZFS and SMR. Upcoming features make SMR drives more "palatable":

- Sequential resilver; random I/O is murder on SMR.
- Fusion pools: keep metadata and small files on SSD, so that SMR disks don't have to deal with that data (a rough sketch of such a layout is below).
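A minimal sketch of what such a layout could look like, assuming placeholder device names (da0-da5 for the spinners, nvd0/nvd1 for SSDs); special_small_blocks is the existing OpenZFS property that routes small file blocks to the special vdev:

# Spinning vdev for bulk data, mirrored SSD special vdev for metadata
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 \
    special mirror nvd0 nvd1
# Also send blocks of 32K and smaller to the SSDs instead of the spinners
zfs set special_small_blocks=32K tank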

Work to make ZFS understand HA-SMR / HM-SMR would need to deal with Garbage Collection somehow, to free zones that have had most of their data deleted. That's either full-on BPR, or an indirection layer (which would grow and grow?) like the one used for device removal. Possible additional work, linked on the HiSMRfs paper's page, could be to identify "hot" data and write it to designated "hot" zones, so that fewer zones see changes.
 