One drive dock running slower than the other during burn-in tests; bend in the cable?


pious_greek
Dabbler | Joined: Nov 28, 2017 | Messages: 18
I'm about 32 hours into burn-in testing, and drives 4, 5, and 6 (sata3 - sata5) on the right are reading/writing about 15 percent slower than drives 1, 2, and 3 (sata0 - sata2) on the left. I'm running badblocks on all the drives concurrently with tmux, and the processes were all started in sequential order within a few minutes of each other.
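
For reference, the six runs were kicked off roughly like this (a rough sketch from memory; the session name and badblocks flags are approximations following the usual burn-in recipe, so treat them as illustrative):

Code:
# one tmux window per drive; -w = destructive write+verify pass, -s = show progress
# 4096-byte blocks to match the drives' physical sectors; device names per my layout
tmux new-session -d -s burnin
for d in ada0 ada1 ada2 ada3 ada4 ada5; do
  tmux new-window -t burnin -n "$d" "badblocks -b 4096 -ws /dev/$d"
done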

Is this a function of the hardware config, or is it an installation problem on my end? The hardware doesn't seem to be taxed; there's plenty of idle CPU and free RAM.

The bend in the SATA cable is a little tight, but I wouldn't call it forced or excessive. The only other consideration I suppose would be additional vibration from the PSU...

Code:
last pid: 43279;  load averages:  0.57,  0.52,  0.46														up 2+05:29:31  15:26:29
60 processes:  1 running, 59 sleeping																							 
CPU 0:  8.9% user,  0.0% nice,  2.0% system,  0.0% interrupt, 89.1% idle															
CPU 1:  0.0% user,  0.0% nice,  0.0% system,  1.0% interrupt, 99.0% idle															
CPU 2:  0.0% user,  0.0% nice,  4.0% system,  0.0% interrupt, 96.0% idle															
CPU 3: 19.8% user,  0.0% nice,  0.0% system,  0.0% interrupt, 80.2% idle															
CPU 4:  2.0% user,  0.0% nice,  3.0% system,  4.0% interrupt, 91.1% idle															
CPU 5:  1.0% user,  0.0% nice,  0.0% system,  0.0% interrupt, 99.0% idle															
CPU 6:  1.0% user,  0.0% nice,  2.0% system,  0.0% interrupt, 97.0% idle															
CPU 7:  0.0% user,  0.0% nice,  2.0% system,  0.0% interrupt, 98.0% idle															
Mem: 76M Active, 607M Inact, 876M Wired, 29G Free																				 
ARC: 226M Total, 19M MFU, 196M MRU, 16K Anon, 1801K Header, 9382K Other															
Swap:																															 
																																 
  PID USERNAME	THR PRI NICE   SIZE	RES STATE   C   TIME	WCPU COMMAND													 
 9107 root		  2  22	0   127M 32520K select  3   3:44  24.88% python3.6													
 3933 root		 15  20	0   505M   158M umtxn   0   2:18   5.20% uwsgi														
65586 root		  1  20	0 10620K  2840K physwr  4  73:54   2.38% badblocks													
65612 root		  1  20	0 10620K  2836K physwr  2  73:02   2.14% badblocks													
65420 root		  1  21	0 10620K  2836K physwr  5  79:38   2.11% badblocks													
65496 root		  1  20	0 10620K  2840K physwr  2  72:36   2.00% badblocks													
65470 root		  1  21	0 10620K  2836K physwr  0  79:16   2.00% badblocks													
65462 root		  1  21	0 10620K  2836K physwr  7  78:46   1.98% badblocks													
 3121 root		 12  20	0   241M 46968K nanslp  4   5:51   1.90% collectd													
43273 root		  1  20	0 24272K  3548K CPU4	4   0:00   0.20% top														 
 3071 root		 17  46	0 66976K 20128K uwait   6   4:57   0.17% consul														
 4135 www		   1  20	0 30980K  6280K kqread  5   0:00   0.15% nginx														
65347 root		  1  20	0 22120K  3748K select  3   1:13   0.06% tmux														
 2174 root		  2  20	0 24704K 12544K select  6   0:10   0.01% ntpd														
 3039 root		  1  20	0   407M 99140K kqread  4   0:11   0.00% uwsgi														
 3042 root		  1  52	0   293M 63792K select  6   1:28   0.00% python3.6													
 1884 root		  1 -52   r0  7600K  3572K nanslp  4   0:30   0.00% watchdogd													
 3075 root		 19  20	0 52592K 12004K kqread  7   0:15   0.00% consul-alerts												
  214 root		  6  20	0   406M 98580K kqread  4   0:10   0.00% python3.6													
 3962 root		  1  49	0   236M 57264K wait	0   0:02   0.00% python3.6													
 4001 root		 15  20	0 52320K 14932K uwait   2   0:01   0.00% consul														
 1756 root		  1  20	0 61876K  6720K kqread  4   0:01   0.00% syslog-ng	




Code:
Parts List:
Case: Fractal Design Node 804
PSU: BFG 550 (using it for testing only, will upgrade to a more efficient unit)
Motherboard: Supermicro X10SDV-4C+-TLN4F-O (Intel® Xeon® processor D-1518, Single socket FCBGA 1667; 4-Core, 8 Threads, 35W)
Memory: 2x 16GB Crucial RDIMM DDR4-2133 MT/s (PC4-2133) CL15 dual-ranked x4-based ECC Registered Server Memory CT2K16G4RFD4213
USB sticks: 2x SanDisk Cruzer Fit 16GB USB 2.0
HDDs: 6x Seagate IronWolf 8TB



[Attached photo: 20171213_193433b.jpg]
 

BigDave
FreeNAS Enthusiast | Joined: Oct 6, 2013 | Messages: 2,479
They make SATA data cables with angled plugs, SUCH AS THESE.
Please understand the link is only for visual reference and not a recommendation. ;)

As far as the speeds of badblocks testing go, many people have experienced what you describe and have had no issues (assuming no errors).

I have personally watched closely as the speeds between my drives change during long periods of testing and have never had a problem.
 

pious_greek
Dabbler | Joined: Nov 28, 2017 | Messages: 18
Thanks, that's reassuring, and the drives are 53 hours into testing with no errors.

Maybe it's unnecessary, but I went ahead and ordered right-angle connectors for the SATA data and power lines to isolate the HDDs from any direct PSU vibration and remove the cable bends. I'll do some testing with a different PSU as well, in case that run of SATA power is problematic. It just seems odd to me that the three drives that have SATA bends and share the same SATA voltage feed are all running badblocks slower than the other three.

Here's a shot of tmux running the six drives; /dev/ada0 is at the top of the screen and /dev/ada5 at the bottom.

When badblocks is done, I'll run some speed tests with dd or bonnie++ just to satisfy my curiosity, but I probably won't worry about it. Thanks again.
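
It'll probably be something simple like this per drive (a rough sketch; the device name and sizes are placeholders, and the write pass is destructive, so it only makes sense before the pool exists):

Code:
# sequential read: pull ~10 GB off the raw device and discard it, timing the run
dd if=/dev/ada0 of=/dev/null bs=1m count=10000
# sequential write: DESTRUCTIVE, burn-in only (overwrites whatever is on the disk)
dd if=/dev/zero of=/dev/ada0 bs=1m count=10000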

[Attached photo: 20171215_142756.jpg]
 

joeschmuck
Old Man / Moderator | Joined: May 28, 2011 | Messages: 10,994
My HGST drives were all over the place too. No worries.

The bend in a cable would not contribute to slower operation. Just ensure there is not too much pressure on the connector on the hard drive; too much pressure could break something. Your photo actually looks fine, the bend is minor.

So you haven't added any bracing for the hard drives in those cages to stabilize them? If you did, I don't see it in the photo. Yup, slightly off topic, but it's your machine so I feel safe asking.
 

pious_greek
Dabbler | Joined: Nov 28, 2017 | Messages: 18
So you haven't added any bracing for the hard drives in those cages to stabilize them? If you did, I don't see it in the photo. Yup, slightly off topic, but it's your machine so I feel safe asking.

No bracing yet; I'm waiting for Fractal to get back to me to see if they have an alternative cage option...
 