WD/HGST/Seagate/others

What disks are in your system?

  • WD RED

  • WD RED PRO

  • SEAGATE IRONWOLF

  • SEAGATE IRONWOLF PRO

  • WD GOLD

  • HGST DESKSTAR

  • ENTERPRISE

  • a HDD is just a HDD and all colors are created equal



Bhoot

Patron
Joined
Mar 28, 2015
Messages
241
Well, it's that time that I ask you all to comment with the pros and cons of the disks you're using. Would you recommend them to others looking for new disks? Each disk manufacturer has now rolled out two (possibly more) lines of NAS drives with additional warranties, etc. How old are your current disks (power-on hours)? Add any other pertinent/important info you think matters.
This post is meant to collect the maximum number of votes so we can see which hard disks are better for FreeNAS use, or rather which ones are the worst and shouldn't be brought anywhere near your system.
Backblaze as of 2019 has started moving to bigger disks, leaving most home-level consumers high and dry with no reliable source of data. I'd recommend keeping side discussions to a minimum so this thread doesn't get bloated.
I remember when I was building my first system back in 2014 there was very little data about the WD Red, but a lot of members did help me make up my mind.
 
Joined
Oct 18, 2018
Messages
969
I've got a mix of WD Reds, Seagate IronWolfs, Seagate Constellations, and a smattering of 2TB/3TB/8TB consumer drives @ 7200rpm, some purchased used. The used drives bit the dust quickly, and all of the 7200rpm drives run hot, and those near them run hotter. I've had one Constellation die on me, but it was several years old and had been in several systems before going into my NAS. None of my IronWolfs or WD Reds have failed yet; most have ~1 year of use.
 

Bhoot

Patron
Joined
Mar 28, 2015
Messages
241
I've personally been running 8x 4TB WD REDs for over the last 4.5 years with 1 cold spare. The warranty on these bad boys was 3 years, and within it I replaced drives every time some sectors went pending.
In the last 1.5 years, just 1 drive (the cold spare) has been pressed into service.
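For anyone who wants to watch for pending sectors the same way, here's a minimal shell sketch (assuming smartmontools is installed; the `pending_check` helper name and the device names in the comment are my own examples, not from any stock script):

```shell
# pending_check: read `smartctl -A` output on stdin and print the raw
# Current_Pending_Sector count (prints 0 if the attribute is absent).
pending_check() {
    awk '/Current_Pending_Sector/ { n = $10 } END { print n + 0 }'
}

# Typical use (device names are examples; match them to your system):
#   for d in ada0 ada1 ada2; do
#       echo "$d: $(smartctl -A /dev/$d | pending_check) pending"
#   done
```

Field 10 of the attribute line is the RAW_VALUE column in smartctl's table output, which is the actual count of sectors waiting to be remapped.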

Here is an output of my disks before I decided to switch to 8TB WD REDs:
Code:
########## SMART status report summary for all drives ##########

+------+---------------+----+-----+-----+-----+-------+-------+--------+------+------+------+-------+----+
|Device|Serial         |Temp|Power|Start|Spin |ReAlloc|Current|Offline |UDMA  |Seek  |High  |Command|Last|
|      |               |    |On   |Stop |Retry|Sectors|Pending|Uncorrec|CRC   |Errors|Fly   |Timeout|Test|
|      |               |    |Hours|Count|Count|       |Sectors|Sectors |Errors|      |Writes|Count  |Age |
+------+---------------+----+-----+-----+-----+-------+-------+--------+------+------+------+-------+----+
|ada0 ?|WD-WCC4E3CDR5SC| 36 |26355|  102|    0|      0|      2|       0|     0|   N/A|   N/A|    N/A|  29|
|ada1 ?|WD-WCC4E4JL9NDZ| 38 |36185|  269|    0|      0|      2|       0|     0|   N/A|   N/A|    N/A|  29|
|ada2 ?|WD-WCC7K2VDVPKT| 34 |13988|   54|    0|      0|      0|       0|     0|   N/A|   N/A|    N/A|  29|
|ada3 ?|WD-WCC4E4KC79HK| 36 |36185|  270|    0|      0|      5|       0|     0|   N/A|   N/A|    N/A|  29|
|ada4 ?|WD-WCC4E0ESU744| 35 |30033|  126|    0|      0|      0|       0|     0|   N/A|   N/A|    N/A|  29|
|ada5 ?|WD-WCC4E2VSE6NP| 37 |36185|  263|    0|      0|      0|       0|     0|   N/A|   N/A|    N/A|  29|
|ada6 ?|WD-WCC4E1FSUL4N| 37 |27523|  108|    0|      0|      4|       0|     0|   N/A|   N/A|    N/A|  29|
|ada7 ?|WD-WCC4E2FL7TSK| 36 | 6621|   19|    0|      0|      1|       0|     0|   N/A|   N/A|    N/A|   1|
+------+---------------+----+-----+-----+-----+-------+-------+--------+------+------+------+-------+----+

As you can see, multiple disks developed pending sectors; I had also run out of space, and 5 disks were close to 30k hours each.
LBA and ATA errors had started popping up, accompanied by CAM status errors.

A detailed output of a recent scrub:

Code:
########## ZPool status report summary for all pools ##########

+--------------+--------+------+------+------+----+--------+--------+-----+
|Pool Name     |Status  |Read  |Write |Cksum |Used|Scrub   |Scrub   |Last |
|              |        |Errors|Errors|Errors|    |Repaired|Duration|Scrub|
|              |        |      |      |      |    |Bytes   |        |Age  |
+--------------+--------+------+------+------+----+--------+--------+-----+
|freenas-boot !|ONLINE  |     0|     0|     0| 42%|       0|00:05:12|18153|
|bhoot        !|ONLINE  |     0|     0|     0| 76%|   10.5M|12:36:09|18153|
+--------------+--------+------+------+------+----+--------+--------+-----+



########## ZPool status report for freenas-boot ##########

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:05:12 with 0 errors on Sun Sep  1 03:50:12 2019
config:

    NAME                                            STATE     READ WRITE CKSUM
    freenas-boot                                    ONLINE       0     0     0
      mirror-0                                      ONLINE       0     0     0
        gptid/40460acb-cf27-11e5-b12b-f07959376c84  ONLINE       0     0     0
        da1p2                                       ONLINE       0     0     0

errors: No known data errors



########## ZPool status report for bhoot ##########

  pool: bhoot
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
    still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
    the pool may no longer be accessible by software that does not support
    the features. See zpool-features(7) for details.
  scan: scrub repaired 10.5M in 1 days 12:36:09 with 0 errors on Mon Sep  2 16:36:14 2019
config:

    NAME                                            STATE     READ WRITE CKSUM
    bhoot                                           ONLINE       0     0     0
      raidz2-0                                      ONLINE       0     0     0
        gptid/5663b940-bdde-11e5-9e00-f07959376c84  ONLINE       0     0     0
        gptid/ce93d2ab-ef03-11e8-aac3-f07959376c84  ONLINE       0     0     0
        gptid/ec0f7827-2d2c-11e6-b1de-f07959376c84  ONLINE       0     0     0
        gptid/ce06b19f-e4d8-11e4-b39d-f07959376c84  ONLINE       0     0     0
        gptid/ce69a75d-e4d8-11e4-b39d-f07959376c84  ONLINE       0     0     0
        gptid/b1f3389f-5382-11e6-885d-f07959376c84  ONLINE       0     0     0
        gptid/f3b91656-f1d6-11e7-be68-f07959376c84  ONLINE       0     0     0
        gptid/cf91d6e8-e4d8-11e4-b39d-f07959376c84  ONLINE       0     0     0

errors: No known data errors

 

CSP-on-FN

Dabbler
Joined
Apr 16, 2015
Messages
15
I'm running a pair of these, mirrored - 24x7:
HGST HUS724040ALA640 4TB Ultrastar 7K4000 Enterprise SATA 3

Each drive has accumulated 36,300+ power-on hours.
The workload on the mirrored volume is light: it's kept busy for about an hour each day.
Plus, of course, there's the twice-monthly scrub, which keeps the drives busy for a solid 5+ hours each time.
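For reference, FreeNAS schedules scrubs from the GUI (Tasks → Scrub Tasks), but the underlying cron shape for a twice-monthly scrub looks roughly like this (the pool name `tank` is a placeholder, and this is a sketch, not a recommendation to bypass the GUI):

```shell
# Run a scrub at 03:00 on the 1st and 15th of each month.
# (Sketch only -- on FreeNAS, use the built-in Scrub Tasks instead
# of editing crontab by hand.)
0 3 1,15 * * /sbin/zpool scrub tank
```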
 
Joined
Jan 18, 2017
Messages
524
I'm still rocking some Seagate Constellations with around 50k hours, as well as six ST8000NM0055s. As for my server, I only use enterprise drives. My offsite backups, however, are on consumer desktop drives.
 