680 is high; I would have a spare burned in and ready to go. Are any of those drives still covered by their warranties? Can you post the SMART output for da1?
Email their tech support; with over 600 reallocated sectors I have no doubt they will honor the warranty, but asking them is the best way to be sure. I honestly can't remember the last time I used their diag tool for warranty purposes, and I always pay for the advanced replacement service.
I've had no more problems on SC61 since that one isolated incident, so I'm declaring the random timeout issue resolved. This thread can finally die! Bottom line: if you have a 10TB IronWolf, you absolutely want the SC61 firmware upgrade.
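Before hunting for the update, it's worth checking what each disk is actually running. A hedged sketch (assuming FreeBSD-style /dev/da* device names and smartctl on the PATH, as used elsewhere in this thread) that prints the firmware revision per drive:

```shell
# Sketch: list the firmware revision of every da* disk to spot any still on SC60.
# Adjust the glob for your system; non-Seagate devices will simply print blank.
for d in /dev/da[0-9]*; do
  printf '%s ' "$d"
  smartctl -i "$d" | awk -F': *' '/^Firmware Version/ { print $2 }'
done
```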
Looks like I'll have to pull all the drives and use another PC; the utilities can't detect any of the drives connected to the backplane. PITA.
Yikes, I guess I'd better do something about da1, since I just checked to see if anything had changed and got this. I wonder how many spare sectors these drives have?
Code:
# for i in {0..7}; do echo -n "da$i "; smartctl -a /dev/da$i | grep 'Reallocated_Sector_Ct' | awk '{ print $10 }'; done
da0 0
da1 13104
da2 16
da3 64
da4 0
da5 0
da6 0
da7 32
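When one attribute starts climbing, its neighbours are worth watching too. A hedged variant of the loop above (same assumptions: devices da0-da7, smartctl installed) that also pulls Current_Pending_Sector and Offline_Uncorrectable:

```shell
# Sketch: report reallocated, pending, and uncorrectable sector counts per drive.
# Column 2 is the attribute name, column 10 the raw value in smartctl -A output.
for i in {0..7}; do
  printf 'da%s ' "$i"
  smartctl -A /dev/da$i | awk '/Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable/ { printf "%s=%s ", $2, $10 }'
  echo
done
```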
13k and still operating; it will probably fail right before it runs out of spare sectors. Is it still passing SMART tests? I'd get RMAs going on da1 and da3, and one month before the warranty expires I'd do da2 and da7, assuming their reallocated sector counts don't start rapidly increasing.
@mloiterman Have you tried connecting any of your disks to the SATA connectors on that MB? That's one thing I haven't tried yet as mine isn't that unstable. I have a drive dropping out every month or so on average and I just re-join it to the vdev.
Drives: Seagate Barracuda Pro 10TB drives (ST10000DM0004) (6 total)
SAS: LSI 2008 (Dell PERC H310 flashed to IT mode with the latest LSI firmware - 20.00.07.00)
Issue: Same as OP, drop-offs about every day, usually timed to a routine task (cloud sync in my case)
The NCQ queuing script did not work for me, nor are there firmware updates for my model of Seagate 10TB drives. I tried different cables and PSUs to no avail.
What ended up working was connecting the drives to the SATA ports on my motherboard instead. In my config I have 3x 3TB and 3x 4TB drives (a mix of WD and Seagate) plus the 6x 10TB Barracuda Pros. I switched things around so the 10TB drives are connected to the motherboard and the 3TB and 4TB drives are the ones connected through the SAS controller. Fingers crossed, but for the last week I've had no issues on either set of drives. I therefore suspect the issue is between large Seagate drives and LSI SAS controllers.
A quick heads-up for those following this thread. Chris Mellor from Blocks & Files just published a piece about Western Digital and possibly Seagate shipping drive-managed shingled magnetic recording (DM-SMR) drives that hide this fact when queried. Apparently this tech can be detrimental to certain high-load NAS operations. Anyway, have a read.
I'm suffering serious issues of the type reported in this thread (SYNCHRONIZE CACHE errors, command timeouts, disks dropped from the zpool) with all 6 of my new Seagate 8TB SATA ST8000VN004-2M2101 drives in a 6-disk RAID-Z2 array on an LSI 9207-8i (SAS2308 controller) in IT mode with the latest 20.00.07.00 firmware, on FreeNAS 11.3-U2 (and before that U1, and also 11.2).
Came here looking for answers and it's been very informative. I'll aim to do a write up of my experience in another post.
Edit: 19th April 2020: Ars Technica have picked up on the SMR/CMR issue as well:
Problem: the ZFS array reports DEGRADED or UNAVAILABLE (!!) due to hard drives going into a FAULTED state. Lots of drive-related gibberish in the system logs and on the remote VGA console (seen via iDRAC login). When this first happened to a drive I RMA'd it. Then I saw that every drive suffers from the issue, and I knew something bigger and more sinister was up.
My environment:
Dell R710 rev II chassis. 2x 870W PSU. 2x Intel X5650 CPU. 128GB Micron ECC registered DIMMs (tested with memtest86+ and the onboard Lifecycle Controller diagnostics, F10 at boot). All BIOS and firmware updated to the very latest revisions.
Intel X520-DA2 10GbE NIC (onboard Broadcom 4x 1GbE disabled in BIOS)
LSI 9207-8i card flashed with IT-mode firmware 20.00.07.00. UEFI and BIOS boot ROMs erased as unnecessary.
Aftermarket 0.8m SAS2 SFF-8087 cables (tested fine elsewhere)
Array is a 6-disk RAIDZ2. No L2ARC or SLOG devices.
Disks are Seagate 8TB SATA 7200rpm "IronWolf NAS" ST8000VN004-2M2101 with SC60 firmware (no update available online)
OS is FreeNAS 11.3-U2 - BIOS boot (not UEFI) off a 32GB USB3 Samsung FIT flash drive in the internal slot
Fault trigger:
I use a great backup tool called Nakivo Backup & Replication to back up VMs from VMware vSphere (vCenter 6.7 with ESXi 6.5). The issue occurs only when doing a full data verification of the local onboard repository, either scheduled via "run full data verification on a schedule" or triggered manually with "Verify all backups". It's definitely not a Nakivo issue, since the fault shows up at a much lower level in the data stack; Nakivo just uses the disks in such an intense R/W way as to trigger the underlying issue. Incidentally....
Normal Nakivo backups are fine
Resilvering is fine
ZFS pool scrubs are fine
Go figure! It really is a corner case, but it is very noticeable, and a data verification with Nakivo can bring down the whole array. There are *always* issues from doing a Nakivo full data verification. They don't always bring down the array or even show up in "zpool status", but there are always entries in /var/log/messages, sometimes spread out by tens of minutes or even hours. Sometimes they aren't enough to FAULT a drive and sometimes they are. BTW, Nakivo runs in a FreeNAS iocage jail.
CLI outputs:
Note it was often much worse than this....
NAME STATE READ WRITE CKSUM
tank DEGRADED 0 0 0
raidz2-0 DEGRADED 0 0 0
gptid/24d54693-38cc-11ea-bb29-b8ac6f88c792 ONLINE 0 0 0
gptid/4e94da77-38f5-11ea-bb29-b8ac6f88c792 ONLINE 0 0 0
gptid/1b887e9c-5520-11ea-bb29-b8ac6f88c792 ONLINE 0 0 0
gptid/ed46b8d4-3a62-11ea-bb29-b8ac6f88c792 ONLINE 0 0 0
gptid/3125b087-628d-11ea-af30-b8ac6f88c792 FAULTED 3 0 0 too many errors
gptid/8755b035-3815-11ea-bb29-b8ac6f88c792 ONLINE 0 0 0
root@freenas01[~]# smartctl -a -q noserial /dev/da0
smartctl 7.0 2018-12-30 r4883 [FreeBSD 11.3-RELEASE-p7 amd64] (local build)
Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: Seagate IronWolf
Device Model: ST8000VN004-2M2101
Firmware Version: SC60
User Capacity: 8,001,563,222,016 bytes [8.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 7200 rpm
Form Factor: 3.5 inches
Device is: In smartctl database [for details use: -P show]
ATA Version is: ACS-4 (minor revision not indicated)
SATA Version is: SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Wed Apr 15 14:26:21 2020 AEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x82) Offline data collection activity
was completed without error.
Auto Offline Data Collection: Enabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: ( 559) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 1) minutes.
Extended self-test routine
recommended polling time: ( 715) minutes.
Conveyance self-test routine
recommended polling time: ( 2) minutes.
SCT capabilities: (0x50bd) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed without error 00% 1499 -
# 2 Extended offline Completed without error 00% 1177 -
# 3 Short offline Completed without error 00% 1165 -
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
NB: I am testing 19.00.00.00 firmware here but will go back to 20.00.07.00 as v19 didn't help.
root@freenas01[~]# camcontrol devlist
<ATA ST8000VN004-2M21 SC60> at scbus0 target 0 lun 0 (pass0,da0)
<ATA ST8000VN004-2M21 SC60> at scbus0 target 1 lun 0 (pass1,da1)
<ATA ST8000VN004-2M21 SC60> at scbus0 target 2 lun 0 (pass2,da2)
<ATA ST8000VN004-2M21 SC60> at scbus0 target 3 lun 0 (pass3,da3)
<ATA ST8000VN004-2M21 SC60> at scbus0 target 4 lun 0 (pass4,da4)
<ATA ST8000VN004-2M21 SC60> at scbus0 target 5 lun 0 (pass5,da5)
<PLDS DVD-ROM DS-8DBSH MD51> at scbus1 target 0 lun 0 (pass6,cd0)
<Samsung Flash Drive FIT 1100> at scbus4 target 0 lun 0 (pass7,da6)
<iDRAC LCDRIVE 0323> at scbus5 target 0 lun 0 (pass8,da7)
<iDRAC Virtual CD 0323> at scbus6 target 0 lun 0 (pass9,cd1)
root@freenas01[~]# mpsutil show all
Adapter:
mps0 Adapter:
Board Name: SAS9207-8i
Board Assembly: H5-25412-00C
Chip Name: LSISAS2308
Chip Revision: ALL
BIOS Revision: 7.37.00.00
Firmware Revision: 19.00.00.00
Integrated RAID: no
PhyNum CtlrHandle DevHandle Disabled Speed Min Max Device
0 0001 0009 N 6.0 1.5 6.0 SAS Initiator
1 0002 000a N 6.0 1.5 6.0 SAS Initiator
2 0004 000c N 6.0 1.5 6.0 SAS Initiator
3 0003 000b N 6.0 1.5 6.0 SAS Initiator
4 N 1.5 6.0 SAS Initiator
5 N 1.5 6.0 SAS Initiator
6 0005 000d N 6.0 1.5 6.0 SAS Initiator
7 0006 000e N 6.0 1.5 6.0 SAS Initiator
Devices:
B____T SAS Address Handle Parent Device Speed Enc Slot Wdt
00 00 4433221100000000 0009 0001 SATA Target 6.0 0001 03 1
00 01 4433221101000000 000a 0002 SATA Target 6.0 0001 02 1
00 03 4433221103000000 000b 0003 SATA Target 6.0 0001 00 1
00 02 4433221102000000 000c 0004 SATA Target 6.0 0001 01 1
00 04 4433221106000000 000d 0005 SATA Target 6.0 0001 05 1
00 05 4433221107000000 000e 0006 SATA Target 6.0 0001 04 1
Enclosures:
Slots Logical ID SEPHandle EncHandle Type
08 500605b008de16d0 0001 Direct Attached SGPIO
Expanders:
NumPhys SAS Address DevHandle Parent EncHandle SAS Level
root@freenas01[~]# diskinfo -t /dev/da0
/dev/da0
512 # sectorsize
8001563222016 # mediasize in bytes (7.3T)
15628053168 # mediasize in sectors
4096 # stripesize
0 # stripeoffset
972801 # Cylinders according to firmware.
255 # Heads according to firmware.
63 # Sectors according to firmware.
ATA ST8000VN004-2M21 # Disk descr.
######## # Disk ident. (I hid this output)
No # TRIM/UNMAP support
7200 # Rotation rate in RPM
Not_Zoned # Zone Mode
Seek times:
Full stroke: 250 iter in 5.823103 sec = 23.292 msec
Half stroke: 250 iter in 4.550814 sec = 18.203 msec
Quarter stroke: 500 iter in 4.087809 sec = 8.176 msec
Short forward: 400 iter in 1.137892 sec = 2.845 msec
Short backward: 400 iter in 2.081851 sec = 5.205 msec
Seq outer: 2048 iter in 0.103975 sec = 0.051 msec
Seq inner: 2048 iter in 0.213326 sec = 0.104 msec
Transfer rates:
outside: 102400 kbytes in 0.588111 sec = 174117 kbytes/sec
middle: 102400 kbytes in 0.474089 sec = 215993 kbytes/sec
inside: 102400 kbytes in 0.843246 sec = 121436 kbytes/sec
My problem-solving actions:
After my first FAULTED disk I RMA'd it straight away with Seagate. Got a new one back.
Problems then happened on several disks - in fact all disks. My whole array became UNAVAILABLE once and I almost had a combined heart attack + brain explosion. A reboot brought everything back up as if nothing had happened (phew!). A ZFS scrub of the data showed no issues. Very strange.
I originally had a Dell HBA crossflashed to LSI IT mode; it had a SAS2008 controller. Suspecting some crossflash dodginess, I bought a brand new LSI 9207-8i, updated the firmware to 20.00.07.00, and flashed in the BIOS and UEFI boot ROMs.
Swapped SAS2 SFF-8087 0.8m cables with new ones
Swapped SAS backplane card from another chassis I had spare
Swapped Intel X520-DA2 with a new one (shares IRQ with LSI card)
Swapped NIC and LSI cards around in different PCIe slots trying different IRQs (mindful that some are PCI 2.0 8x and some are PCI 2.0 4x)
Turned off unneeded items in the BIOS like 1Gbps Ethernet ports and serial console
Changed BIOS QPI Bandwidth Priority from "Compute" to "I/O" and turned on "Maximum Performance" power setting
Swapped 870W power supplies with new ones
Downgraded LSI firmware to 19.00.00.00, with and without BIOS and UEFI boot ROMs
You are here :)
There's no reason for me to suspect the CPU or motherboard after running motherboard, CPU and RAM tests, and the system runs stable as a rock; it's just the ZFS/disk issues I'm seeing. I run an Ubuntu Linux VM under bhyve and have had zero issues with that.
Thoughts so far:
I really do think at this stage it's the drives, in combination with FreeNAS 11.2 and above. I didn't run FreeNAS 11.1 on this setup; I've tried 11.2, 11.3, 11.3-U1 and 11.3-U2, all with the issue. I think the FreeBSD drivers under FreeNAS are exposing an underlying drive issue.
I will attempt to turn off Native Command Queueing on the Seagates (if I can work out how to do it and confirm it's done, mindful that it's not persistent across reboots without a shell script).
I did have slower 5,400 rpm WD Red NAS drives in the system initially but swapped them out for the faster Seagates, and because I wanted those slower, cooler, quieter drives in my second home NAS instead of the work array. Those drives are all "WDC WD80EFZX-68UW8N0" and I don't recall those ever being an issue. After playing around with NCQ, I may slot the WD drives into the work array (the one I'm having issues with here) one by one and then see if the issue persists after a full drive swapover.
If that's the case I'd be tempted to get WD Red Pros in there, but only if I can be guaranteed they are CMR and not SMR drives! Ugh... the SATA drive space really has become a system builder's nightmare these days.
The results are in. The "camcontrol tags ..." command has worked around the issue I was having! I did a full Nakivo backup verification and it was rock solid. It did take a bit longer than usual, but at least there were no issues whatsoever. A clean run like that never happens with these disks.
At long last I've finally gotten to the root of the issue!
The post-init script I use for my "ST8000VN004-2M21 SC60" 8TB SATA 7200rpm Seagate IronWolf drives is:
Code:
#!/usr/local/bin/bash
# Set each matching IronWolf's command queue depth to 1 (effectively disabling NCQ).
for i in $(camcontrol devlist | grep "ATA ST800" | cut -d"," -f2 | cut -d")" -f1); do camcontrol tags "$i" -N 1; done
For future searchers: I was initially confused that the "camcontrol tags" command wasn't disabling NCQ, because of this output: root@freenas01[~]# camcontrol inquiry da0
pass0: <ATA ST8000VN004-2M21 SC60> Fixed Direct Access SPC-4 SCSI device
pass0: Serial Number ##hidden##
pass0: 600.000MB/s transfers, Command Queueing Enabled
...specifically that Command Queueing was still "Enabled". But what the "tags" command actually does is shrink the command queue from 255 down to 1, and a queue of one is effectively not a queue.
You can verify before and after the change that the sum of "dev_openings" and "dev_active" shrinks from 255 to 1.
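A sketch of that check (field names as printed by FreeBSD's camcontrol verbose output, which may vary slightly by release), summing the two counters directly:

```shell
# Sketch: effective queue depth = dev_openings + dev_active.
# Expect 255 before the "camcontrol tags -N 1" change and 1 after it.
camcontrol tags da0 -v | awk '/dev_openings|dev_active/ { s += $NF } END { print "effective queue depth:", s }'
```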
It really is unbelievable that Seagate would take several years to come out with a fix for this, *and* that they would release an SC61 firmware update for the 10TB Seagate IronWolf (+ IronWolf Pro) models ST10000VN0004 and ST10000NE0004, but not release SC61 firmware for the equivalent 8TB models ST8000VN004 (IronWolf) and ST8000NE0004 (IronWolf Pro).
Just to put a full stop to this for my issues. I engaged with Seagate tech support and they supplied me with another firmware for my ST8000VN004 8TB drives. Good news - I could enable NCQ again and not have any issues with SMART Command_Timeout accumulation, and I could do a full Nakivo backups verification, plus ZFS scrub and not have any issues. So I consider this issue resolved.
I used the Seagate utility to boot from a USB stick to update the drives. Thankfully it worked through the LSI card, so I didn't have to tediously transpose the drives into another caddy and a Windows server to run it. The weird thing is that the "new" firmware still reports SC60, not SC61. You'd think they'd at least give it an engineering code like SC60e, or even SE60. Anyway, there are other IT challenges to tackle. Finally I can put this hard drive madness to rest! (touch wood)
Could you provide a link to this so others can benefit? I don't have any of these drives myself, but this is the first time I've heard of new firmware being available for the 8TB model.
I was having similar issues, and just recently Seagate provided me a file for my ST8000VN0002 drive. Not sure if it will work for others, because I see your model number is different, but it worked for mine and it reports SC61. Unfortunately, I can't attach it here. If anyone knows how I can do it, let me know, or send me a message to share it.
Hi guys, I have the same drives and the same errors. No matter how hard I try, Seagate support refuses to provide me the firmware. I have been in contact with Supermicro, who provided me a demo firmware (the one that leaves the firmware version at SC60), and it fixed the issues somewhat. I still have a couple of drives that occasionally get marked as bad by ZFS.
Can anyone help me out with how to get the SC61 firmware for ST8000VN004?
I don't believe there is any SC61 for ST8000VN004 (8TB) drives - only SC60. There's SC61 only for the bigger drives in the IronWolf NAS series.
I bought four more ST8000VN004 drives recently (a few weeks ago). They still came with SC60 firmware. I applied the engineering-release SC60 version I received from support about a year ago and it loaded fine. It's called "ST8000VN004.ALL.LOD" and its size is 2.53 MB (2,662,400 bytes). Maybe that's the same as the "demo firmware" you already have? The checksums are:
MD5 Checksum: 1A3709E83D0D23F9B18525F27AD1D279
SHA-1 Checksum: 6C76E16E72EF9528E307D642E7B1B40DB49D8254
SHA-256 Checksum: 17B560D47586B6B4E60463855ED9F1854C7A276588232EC553654F676A676399
SHA-512 Checksum: C9C854EBC87EF11D806201D80F736CB024FA85D690089BE802F954EC8A3E79075C622DBB879A7E4683B6210D42AE2AE5ED84B0A2661BF8F09A05D062A75EB793
Generated by MD5 & SHA Checksum Utility @ http://raylin.wordpress.com/downloads/md5-sha-1-checksum-utility
Is your controller SAS2 or 3? You might want to try upgrading to a SAS3 controller as well if you're still on SAS2, if that's possible for you. Especially if you plan on running SCALE in future, which has issues on older legacy hardware - at least in the current beta.
Hi Gcon! Thank you for your thorough reply, and for your otherwise excellent contributions to this thread. Of all my google searches, this thread has proven to be the most valid and on-point resource for me and my research on this matter.
My storage box is a Supermicro 847E1C-R1K28JBOD; the controller is an LSI SAS 9300-8e. I have been in contact with Supermicro and they confirmed that the ST8000VN004 drives are not on the supported list for their storage array. I did get an offer for an RMA from Seagate (two years ago), but stupidly I no longer had the original packaging from the purchase of the 14 drives and didn't want to send them back unsafely. Now I am just determined to get them to work.
Could you paste some of the final lines you get if you run strings on your firmware file? I'd like to compare the dates and numbers that come up to get a hint of what kind of age difference we are talking about.
My issue currently is with only a few of the drives constantly failing over to a hot spare. The drives work and are not faulty; I am absolutely convinced there is just a bug in the firmware. My original round of firmware updating might have missed a drive or two, so I am planning on patching each drive individually after I move my data over to a secondary array.
Again, Gcon, thank you for your participation in this matter. The information and knowledge you contribute is absolutely stellar!
Let me know if you need it.
Good idea to just run through the firmware updates again. It's interesting that you list 14 drives. The spec sheet says Seagate officially supports "1-8 bays" for the ST8000VN004:
I run mine as a pool of 6 drives - a "stripe" of 3 mirrored VDEVs (almost like RAID10) - which should be a supported configuration. I see that the IronWolf Pro 8TB (ST8000NE001) supports "Up to 24-bay" setups:
More expensive, but they give a 5-year warranty on the Pro (vs 3 years for non-Pro) which is nice. If you do have to change over the drives then now might be a good time to sell the current ones as there's a bit of a shortage with the "chia" crypto farming/mining going on. You just need to be able to secure stocks of whatever replacements you might go with, which is not easy. For example, this month I'm changing from a SAS2 to SAS3 controller, and as soon as some new SAS3 6TB drives (on backorder) come in, I'll be running 8x 6TB SAS, instead of the current 6x 8TB SATA. Due to purchase limits I had to use two different stores and won't get all 8 drives until the end of July, at best :'( The 8TB IronWolf drives coming out will go into an older QNAP. Am really praying this engineering firmware plays nice with the QNAP!
Hi All.
I have the same problem with a completely new Seagate IronWolf NAS 8TB drive (ST8000VN004) dropping out of ZFS on a QNAP TS-h973AX NAS. I ran SeaTools for Windows on another computer and everything looks good, but the NAS marked my drive with a warning: too many S.M.A.R.T. errors ("Uncorrectable sector count").
I had just created a pool in RAID 5 with 5 disks and started to copy data. This happened in the first 8 hours of the drive working.
I do not know what kind of controller is inside the QNAP TS-h973AX NAS.
There is only SC60 firmware for the 8TB drive, no SC61. If there is any unofficial firmware anywhere, can you point me to it, please?
I am sorry for writing here about a QNAP rather than TrueNAS, but it is Seagate IronWolf NAS 8TB drive related.