Troubleshooting slow RAIDz2 performance

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I am throwing this out there so other people can see this and perhaps use the information to understand what is going wrong with their own systems. I think I know what the problem is with my system, but I will be happy to learn from anyone who has wisdom to share.
I replaced some "old" 2 TB drives in my system with new 4 TB drives (an entire vdev), which meant I had to destroy the pool, recreate it with the new drives, and copy the data back. Based on past performance of the system, this should have taken about 2.5 hours. Six hours later, it is still not done.
Looking into the issue, I found that three of the "new" drives are not performing as they should.
The output below is from zpool iostat -v; I used a screen capture so I could mark it up:

iostat-2.PNG

If you look at the numbers I highlighted, they are significantly lower for those three drives, and that is bringing down the performance of the entire vdev: every drive in vdev-1 is at 46.8M (because the drives in a vdev are loaded equally), while the drives in the other vdev are all at 58.7M. All the drives in vdev-1 are running slower because of those three drives, and even a single drive can have this effect: one slow drive drags down the performance of its vdev, and one slow vdev drags down the performance of the pool. The overall performance is less than half what it should be.
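For anyone checking their own pool for the same pattern, this is the sort of thing I would run to spot it; not from the screenshot above, and the pool name and sample interval are just examples:
Code:
# per-device throughput in 10-second samples; a weak member shows up
# as the low outlier within its own vdev
zpool iostat -v Emily 10

# FreeBSD/FreeNAS per-disk view; the slow drive is usually the one
# sitting near 100% busy while its siblings are not
gstat -p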
 
Last edited:

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I adjusted this so the columns aligned properly, to make it easier to read:
Code:
# zpool iostat -v
                                          capacity     operations    bandwidth
pool                                    alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
Backup                                  14.5T  7.28T  3.58K      0   432M  1.02K
  raidz1                                14.5T  7.28T  3.58K      0   432M  1.02K
    gptid/-a35b-11e8-aefa-0cc47a9cd5a4      -      -    424      0   149M    576
    gptid/-a35b-11e8-aefa-0cc47a9cd5a4      -      -    358      0   148M    572
    gptid/-a35b-11e8-aefa-0cc47a9cd5a4      -      -    441      0   149M    570
    gptid/-a35b-11e8-aefa-0cc47a9cd5a4      -      -    348      0   148M    568
--------------------------------------  -----  -----  -----  -----  -----  -----
Emily                                   14.8T  28.7T      0  3.37K    334   407M
  raidz2                                8.22T  13.5T      0  1.89K    205   226M
    gptid/-bf05-11e8-b5f3-0cc47a9cd5a4      -      -      0    686     29  56.7M
    gptid/-bf05-11e8-b5f3-0cc47a9cd5a4      -      -      0    736     29  56.7M
    gptid/-bf05-11e8-b5f3-0cc47a9cd5a4      -      -      0    728     44  56.7M
    gptid/-bf05-11e8-b5f3-0cc47a9cd5a4      -      -      0    705     32  56.7M
    gptid/-bf05-11e8-b5f3-0cc47a9cd5a4      -      -      0    719     31  56.7M
    gptid/-bf05-11e8-b5f3-0cc47a9cd5a4      -      -      0    737     45  56.7M
  raidz2                                6.57T  15.2T      0  1.48K    129   181M
    gptid/-bf05-11e8-b5f3-0cc47a9cd5a4      -      -      0    451     20  45.3M
    gptid/-bf05-11e8-b5f3-0cc47a9cd5a4      -      -      0    422     18  45.3M
    gptid/-bf05-11e8-b5f3-0cc47a9cd5a4      -      -      0    554     20  45.3M
    gptid/-bf05-11e8-b5f3-0cc47a9cd5a4      -      -      0    698     27  45.3M
    gptid/-bf05-11e8-b5f3-0cc47a9cd5a4      -      -      0    704     25  45.3M
    gptid/-bf05-11e8-b5f3-0cc47a9cd5a4      -      -      0    700     22  45.3M
--------------------------------------  -----  -----  -----  -----  -----  -----
Irene                                   14.8T  28.7T  3.30K      0   404M    135
  raidz2                                7.33T  14.4T  1.64K      0   200M     63
    gptid/-becf-11e8-b1c8-0cc47a9cd5a4      -      -    867      0  33.4M     73
    gptid/-becf-11e8-b1c8-0cc47a9cd5a4      -      -    858      0  32.9M     73
    gptid/-becf-11e8-b1c8-0cc47a9cd5a4      -      -    877      0  33.5M     73
    gptid/-becf-11e8-b1c8-0cc47a9cd5a4      -      -    880      0  33.9M     74
    gptid/-becf-11e8-b1c8-0cc47a9cd5a4      -      -    873      0  33.7M     74
    gptid/-becf-11e8-b1c8-0cc47a9cd5a4      -      -    888      0  33.9M     73
  raidz2                                7.45T  14.3T  1.66K      0   203M     72
    gptid/-becf-11e8-b1c8-0cc47a9cd5a4      -      -    899      0  34.4M     77
    gptid/-becf-11e8-b1c8-0cc47a9cd5a4      -      -    860      0  32.9M     78
    gptid/-becf-11e8-b1c8-0cc47a9cd5a4      -      -    889      0  34.6M     78
    gptid/-becf-11e8-b1c8-0cc47a9cd5a4      -      -    887      0  34.4M     79
    gptid/-becf-11e8-b1c8-0cc47a9cd5a4      -      -    879      0  33.3M     78
    gptid/-becf-11e8-b1c8-0cc47a9cd5a4      -      -    916      0  34.9M     78
--------------------------------------  -----  -----  -----  -----  -----  -----
freenas-boot                            10.4G  26.9G     11     47   213K   340K
  mirror                                10.4G  26.9G     11     47   213K   340K
    gptid/-4b12-11e6-a97c-002590aecc79      -      -      5     10   107K   340K
    gptid/-4b12-11e6-a97c-002590aecc79      -      -      5     10   107K   340K
--------------------------------------  -----  -----  -----  -----  -----  -----
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I also found this telling:
Code:
  pool: Emily
 state: ONLINE
  scan: scrub in progress since Sun Sep 23 11:46:59 2018
        5.08T scanned at 1.11G/s, 195G issued at 966M/s, 14.8T total
        0 repaired, 1.29% done, 0 days 04:23:53 to go
config:

        NAME                                            STATE     READ WRITE CKSUM
        Emily                                           ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/af7c42c6-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/b07bc723-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/b1893397-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/b2bfc678-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/b3c1849e-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/b4d16ad2-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/b637fc6a-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/b7cbc568-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/b8c6c238-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/b9de3232-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/baf4aba8-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/bbf26621-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
        logs
          gptid/ae487c50-bec3-11e8-b1c8-0cc47a9cd5a4    ONLINE       0     0     0
        cache
          gptid/ae52d59d-bec3-11e8-b1c8-0cc47a9cd5a4    ONLINE       0     0     0

errors: No known data errors
The projected time to completion is about four and a half hours, whereas the pool was able to scrub in about two hours before these new drives were installed. The poor performance of the new drives is cutting overall performance roughly in half.

I can't do it instantly, but it looks like I will be replacing the replacement drives with a different model.

The poor performing drives are the following:
Code:
partition  zpool  device  disk                  size  serial  rpm
------------------------------------------------------------------
da26p1     Emily  da26    ATA ST4000DM004-2CV1  4000  ZFN0    5425
da27p1     Emily  da27    ATA ST4000DM004-2CV1  4000  ZFN0    5425
da28p1     Emily  da28    ATA ST4000DM005-2DP1  4000  ZDH1    5980

Ones that are working properly are:
Code:
partition  zpool  device  disk                  size  serial  rpm
------------------------------------------------------------------
da4p1      Emily  da4     ATA ST4000DM000-1F21  4000  Z307    5900
da5p1      Emily  da5     ATA ST4000DM000-1F21  4000  Z307    5900
da6p1      Emily  da6     ATA ST4000DM000-1F21  4000  Z307    5900
da7p1      Emily  da7     ATA ST4000DM000-1F21  4000  Z305    5900
da8p1      Emily  da8     ATA ST4000DM000-1F21  4000  Z307    5900
da9p1      Emily  da9     ATA ST4000DM000-1F21  4000  Z305    5900

da29p1     Emily  da29    ATA ST4000DM000-1F21  4000  Z307    5900
da30p1     Emily  da30    ATA ST4000DM000-1F21  4000  Z307    5900
da31p1     Emily  da31    ATA ST4000DM000-1F21  4000  Z307    5900

da14p1     Irene  da14    ATA ST4000DM000-1F21  4000  Z301    5900
da15p1     Irene  da15    ATA ST4000DM000-1F21  4000  S300    5900
da16p1     Irene  da16    ATA ST4000DM000-1F21  4000  Z301    5900
da17p1     Irene  da17    ATA ST4000DM000-1F21  4000  Z301    5900
da18p1     Irene  da18    ATA ST4000DM000-1F21  4000  Z301    5900
da19p1     Irene  da19    ATA ST4000DM000-1F21  4000  S300    5900
da20p1     Irene  da20    ATA ST4000DM000-1F21  4000  Z301    5900
da21p1     Irene  da21    ATA ST4000DM000-1F21  4000  W300    5900
da22p1     Irene  da22    ATA ST4000DM000-1F21  4000  Z301    5900
da23p1     Irene  da23    ATA ST4000DM000-1F21  4000  Z300    5900
da24p1     Irene  da24    ATA ST4000DM000-1F21  4000  Z301    5900
da25p1     Irene  da25    ATA ST4000DM000-1F21  4000  Z301    5900
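In case anyone wants to build the same kind of list, the mapping from the gptid labels shown in zpool status to the daX devices, models, and serial numbers can be pulled from standard FreeBSD tools; this is just a rough sketch, and the device name is an example:
Code:
# map gptid labels back to daXpY partitions
glabel status | grep gptid

# model, serial number, and rotation rate for a single disk
smartctl -i /dev/da26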
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Seagate Barracuda vs Seagate Desktop.

Any differences in specs? Amount of cache etc?

I’ve noticed performance differences (on the order of 10-20%) based on amount of cache between different revisions of the same drive before.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
The problem drives are the new green-and-black-label drives marked as Barracuda. The older drives marked Seagate Desktop are working great.
The cache is advertised as the same, but the rotational speed is different. I'm disappointed with this.

Sent from my SAMSUNG-SGH-I537 using Tapatalk
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I will admit, I took shortcuts. I didn't even do burn-in testing on two of the three problem drives, and I'm regretting it now. One of those new drives threw six bad sectors today.
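For anyone reading this later, the burn-in I skipped is nothing exotic; it is roughly the following. It is destructive, so only run it on drives that hold no data, and the device name is just an example:
Code:
# long SMART self-test, then review the results
smartctl -t long /dev/da28
smartctl -a /dev/da28

# destructive surface test: writes and verifies four patterns across
# the whole drive; expect it to take days on a 4 TB disk
badblocks -ws -b 4096 /dev/da28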

Sent from my SAMSUNG-SGH-I537 using Tapatalk
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Yeouch
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
ST4000DM004 are SMR drives, so likely to be slower than the other ones which are PMR.
I didn't know. That probably explains why I got such a great deal on the price.

They are in warranty until some time in 2020, so I guess I will keep them and move them to my wife's desktop PC. With the way they are dragging down the performance of the pool, I will be pulling them as soon as possible.
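For what it's worth, if you want to see the difference outside the pool before repurposing drives like these, a raw dd pass is enough. The read test is non-destructive, the write test absolutely is not; device names are examples, and note that a short sequential write can still look fine on a drive-managed SMR disk until its on-disk cache fills:
Code:
# non-destructive: sequential read of the first ~10 GiB
dd if=/dev/da26 of=/dev/null bs=1m count=10240

# DESTRUCTIVE: sequential write; only on a blank drive that is not in any pool
dd if=/dev/zero of=/dev/da26 bs=1m count=10240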

Sent from my SAMSUNG-SGH-I537 using Tapatalk
 
Last edited:

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
They would make good backup drives in a write once type scenario. Maybe move em to your secondary?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Full transparency: I got too eager and installed drives without doing enough prep work. I know better. I also didn't dig into the specs on the ST4000DM004-2CV1 drives to realize they were SMR. I had four of the six drives long enough to do full burn-in testing, but I was waiting until I had enough drives to replace an entire vdev, so when I got the last couple of drives I jumped the gun. I received those last two drives and, without testing them, installed them in my primary pool on my primary NAS. Now the Seagate Barracuda ST4000DM005-2DP1, the only drive I have of that model and one of the two that went in with no burn-in test, has bad sectors.
I installed the drives in the pool on Saturday; that was the fun part of my weekend. On Sunday I got an alert email that the error count had increased from zero to six. Then, this morning, I am seeing that the drive has 1904 Reallocated sectors. Total power-on hours for the drive is only 29, and the power cycle count is only 2. This drive came out of the factory wrapping for the first time on Saturday.
Here is an excerpt from the SMART data:
Code:
SMART overall-health self-assessment test result: PASSED

ID# ATTRIBUTE_NAME		  FLAG	 VALUE WORST THRESH TYPE	  UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate	 0x000f   069   064   006	Pre-fail  Always	   -	   7694056
  3 Spin_Up_Time			0x0003   099   099   000	Pre-fail  Always	   -	   0
  4 Start_Stop_Count		0x0032   100   100   020	Old_age   Always	   -	   99
  5 Reallocated_Sector_Ct   0x0033   096   096   010	Pre-fail  Always	   -	   1904
  7 Seek_Error_Rate		 0x000f   065   060   045	Pre-fail  Always	   -	   3358569
  9 Power_On_Hours		  0x0032   100   100   000	Old_age   Always	   -	   29 (180 88 0)
 10 Spin_Retry_Count		0x0013   100   100   097	Pre-fail  Always	   -	   0
 12 Power_Cycle_Count	   0x0032   100   100   020	Old_age   Always	   -	   2
183 Runtime_Bad_Block	   0x0032   100   100   000	Old_age   Always	   -	   0
184 End-to-End_Error		0x0032   100   100   099	Old_age   Always	   -	   0
187 Reported_Uncorrect	  0x0032   094   094   000	Old_age   Always	   -	   6
188 Command_Timeout		 0x0032   100   100   000	Old_age   Always	   -	   0 0 0
189 High_Fly_Writes		 0x003a   100   100   000	Old_age   Always	   -	   0
190 Airflow_Temperature_Cel 0x0022   067   065   040	Old_age   Always	   -	   33 (Min/Max 31/35)
191 G-Sense_Error_Rate	  0x0032   100   100   000	Old_age   Always	   -	   0
192 Power-Off_Retract_Count 0x0032   100   100   000	Old_age   Always	   -	   2
193 Load_Cycle_Count		0x0032   100   100   000	Old_age   Always	   -	   11
194 Temperature_Celsius	 0x0022   033   040   000	Old_age   Always	   -	   33 (0 25 0 0 0)
197 Current_Pending_Sector  0x0012   100   100   000	Old_age   Always	   -	   0
198 Offline_Uncorrectable   0x0010   100   100   000	Old_age   Offline	  -	   0
199 UDMA_CRC_Error_Count	0x003e   200   200   000	Old_age   Always	   -	   0
240 Head_Flying_Hours	   0x0000   100   253   000	Old_age   Offline	  -	   28h+52m+02.563s
241 Total_LBAs_Written	  0x0000   100   253   000	Old_age   Offline	  -	   2358985135
242 Total_LBAs_Read		 0x0000   100   253   000	Old_age   Offline	  -	   2349106100

ATA Error Count: 6 (device log contains only the most recent five errors)
   CR = Command Register [HEX]
   FR = Features Register [HEX]
   SC = Sector Count Register [HEX]
   SN = Sector Number Register [HEX]
   CL = Cylinder Low Register [HEX]
   CH = Cylinder High Register [HEX]
   DH = Device/Head Register [HEX]
   DC = Device Command Register [HEX]
   ER = Error register [HEX]
   ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 6 occurred at disk power-on lifetime: 10 hours (0 days + 10 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 00 00 ff ff ff 4f 00	  10:12:49.020  READ FPDMA QUEUED
  60 00 00 ff ff ff 4f 00	  10:12:49.020  READ FPDMA QUEUED
  60 00 00 ff ff ff 4f 00	  10:12:49.020  READ FPDMA QUEUED
  60 00 00 ff ff ff 4f 00	  10:12:49.020  READ FPDMA QUEUED
  60 00 00 ff ff ff 4f 00	  10:12:49.020  READ FPDMA QUEUED

Error 5 occurred at disk power-on lifetime: 10 hours (0 days + 10 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 00 00 ff ff ff 4f 00	  10:04:34.038  READ FPDMA QUEUED
  60 00 00 ff ff ff 4f 00	  10:04:34.038  READ FPDMA QUEUED
  60 00 00 ff ff ff 4f 00	  10:04:34.038  READ FPDMA QUEUED
  60 00 00 ff ff ff 4f 00	  10:04:34.038  READ FPDMA QUEUED
  60 00 00 ff ff ff 4f 00	  10:04:34.038  READ FPDMA QUEUED

Error 4 occurred at disk power-on lifetime: 10 hours (0 days + 10 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 00 00 ff ff ff 4f 00	  10:04:11.249  READ FPDMA QUEUED
  60 00 00 ff ff ff 4f 00	  10:04:11.009  READ FPDMA QUEUED
  60 00 00 ff ff ff 4f 00	  10:04:11.009  READ FPDMA QUEUED
  60 00 00 ff ff ff 4f 00	  10:04:10.477  READ FPDMA QUEUED
  60 00 00 ff ff ff 4f 00	  10:04:10.477  READ FPDMA QUEUED

Error 3 occurred at disk power-on lifetime: 10 hours (0 days + 10 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 00 00 ff ff ff 4f 00	  10:03:39.210  READ FPDMA QUEUED
  60 00 00 ff ff ff 4f 00	  10:03:39.210  READ FPDMA QUEUED
  60 00 00 ff ff ff 4f 00	  10:03:39.209  READ FPDMA QUEUED
  60 00 00 ff ff ff 4f 00	  10:03:39.209  READ FPDMA QUEUED
  60 00 00 ff ff ff 4f 00	  10:03:39.209  READ FPDMA QUEUED

Error 2 occurred at disk power-on lifetime: 10 hours (0 days + 10 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 00 00 ff ff ff 4f 00	  10:03:26.126  READ FPDMA QUEUED
  60 00 00 ff ff ff 4f 00	  10:03:26.126  READ FPDMA QUEUED
  60 00 00 ff ff ff 4f 00	  10:03:26.126  READ FPDMA QUEUED
  60 00 00 ff ff ff 4f 00	  10:03:26.126  READ FPDMA QUEUED
  60 00 00 ff ff ff 4f 00	  10:03:26.126  READ FPDMA QUEUED

Test_Description	Status				  Remaining  LifeTime(hours)  LBA_of_first_error
Short offline	   Completed without error	   00%		21		 -
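As a side note, when one of these alert emails comes in I don't need the full report to see how bad things are; pulling the few attributes that matter is enough. This is just how I'd do it, and the device name is an example:
Code:
# reallocated, pending, and uncorrectable sector counts at a glance
smartctl -A /dev/da28 | egrep 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable|Reported_Uncorrect'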
 
Joined
May 10, 2017
Messages
838
Then, this morning, I am seeing that the drive has 1904 Reallocated sectors.

That's a lot for such a new drive. A few reallocated sectors can be OK, but so many in so few hours is a really bad sign, and there are also a few reported uncorrectable errors. I guess it's still under warranty, so you should have no problem replacing it.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
They would make good backup drives in a write once type scenario. Maybe move em to your secondary?
My wife does video processing, which is part of the reason I built her a Xeon workstation, and she uses one set of drives as the source and another as the destination when she is rendering video. Right now she has a pair of WD Black 2 TB drives and a pair of HGST 2 TB drives, but they are between 3 and 5 years old, so they are probably due for replacement and, being older, their performance might be lower; these might be just as good and have greater capacity. I will have to look at the specs on both and decide. I don't want to keep them in the NAS, not even the backup, because they are so much slower. The way I read the output of iostat, they are almost half the speed of the "older model" Seagate Desktop (ST4000DM000-1F21) drives. Not quite half, but close. That is a lot of performance, and it has already made the data load between vdev-0 and vdev-1 imbalanced. That was part of the reason I blew the pool away and reloaded from backup: I wanted the pool balanced and as much fragmentation as possible removed.
 
Last edited:

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
That's a lot for such a new drive. A few reallocated sectors can be OK, but so many in so few hours is a really bad sign, and there are also a few reported uncorrectable errors. I guess it's still under warranty, so you should have no problem replacing it.
I have some spares. I just wasn't planning to use them. I will swap it out when I get home. I will contact the vendor today and see if I can send it back. I just got the thing so I would prefer that the seller replace it with a brand new drive instead of sending it to Seagate for a refurbished drive on exchange.
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
Joined
May 10, 2017
Messages
838
Another victim of Seagate's lousy labeling of SMR drives.

Yes, they are very inconsistent, and sometimes they appear to be trying to hide it. For example, check the datasheet for these laptop drives; the earlier version from February 2016 mentions SMR:

https://www.seagate.com/www-content/datasheets/pdfs/mobile-hddDS1861-1-1602-en_GB.pdf


upload_2018-9-24_15-6-45.png


This mention disappeared from the later datasheet from March 2016:

https://www.seagate.com/www-content/datasheets/pdfs/mobile-hddDS1861-2-1603-en_GB.pdf

On the other hand, they did add SMR to the later revisions of the manual:

https://www.seagate.com/www-content...ptop-fam/mobile-hdd/en-us/docs/100775165d.pdf

https://www.seagate.com/www-content...ptop-fam/mobile-hdd/en-us/docs/100775165f.pdf

upload_2018-9-24_15-12-17.png


So it's very inconsistent, and who knows what they are trying to do. They don't mention SMR at all in the ST4000DM004 family manual; only from the tech specs can we deduce that they are SMR, since 2 TB PMR platters are not possible yet.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
who knows what they are trying to do
I'm sure it's just a coincidence that they're obscuring a negative performance characteristic of their cheapest drives in a market that thrives on TB/$.

Sarcasm_Detector.jpg
 
Joined
Jan 18, 2017
Messages
525
I don't want to keep them in the NAS, not even the backup, because they are so much slower.

After replacing the drive that is acting up, I'd retest the speed of the vdev; both your old and new Seagates supposedly use the same recording technology, which Seagate is pointedly avoiding calling SMR...

Manuals
ST4000DM000, ST4000DM004
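If it helps, a rough way to retest sequential pool speed after the swap is a plain dd into a scratch dataset. This is only a sketch: compression has to be off or /dev/zero tells you nothing, and the read-back can be served from ARC if the file fits in RAM, so treat the numbers as ballpark (pool and dataset names are examples):
Code:
# scratch dataset with compression disabled
zfs create -o compression=off Emily/speedtest

# ~20 GiB sequential write, then read it back
dd if=/dev/zero of=/mnt/Emily/speedtest/test.bin bs=1m count=20480
dd if=/mnt/Emily/speedtest/test.bin of=/dev/null bs=1m

# clean up
zfs destroy Emily/speedtest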
 