SAS disks

Octopuss

Patron
Joined
Jan 4, 2019
Messages
461
[Attached screenshot: 1642086359504.png]
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Is that all? It seems a bit limited and high-level. I'm also not sure why it's reporting multiple ports...
 

Octopuss

Patron
Joined
Jan 4, 2019
Messages
461
Well, the output was longer, but the beginning was identical to the previous screenshot, so I cropped just the extra info.
 

Octopuss

Patron
Joined
Jan 4, 2019
Messages
461
Do I have to power the server down in order to unplug a disk (not part of any pool) safely, or can I do that on the fly?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Depends on the server. You need a chassis with an appropriate backplane; it's not very safe to do this with bare cables. That said, unplugging a running disk shouldn't be too bad. Plugging in a disk is fairly likely to cause trouble with the PSU.
 

Octopuss

Patron
Joined
Jan 4, 2019
Messages
461
I didn't feel like shutting the entire server down (it's a little annoying when everything is virtualized), but I guess I better play it safe.
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
I didn't feel like shutting the entire server down (it's a little annoying when everything is virtualized), but I guess I better play it safe.

The problem with bare cables is that you have no mechanical alignment, no rails to keep everything straight as the insertion occurs. So you run the risk of bending pins and shorting PSU supply rails. It's just not worth the risk.
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
I am glad I found this thread, as I am in the same boat as @Octopuss: I was also looking at used SAS drives on eBay to use as a pool.

@Octopuss, which seller was willing to provide SMART info to you prior to purchase? I have asked 7 different SAS drive sellers and they all responded in the negative.

Lots of interesting info related to block size, etc., that I might have to go over as well, since the sellers I found sell a random mix of 512 and 520 block sizes. As long as the formatting can be changed to 512, they should be OK to run together, correct?

The sellers that I have been looking at, for 3TB sizes, have the HGST HUS723030ALS640 & Seagate ST3000NM0023 drives. Should either of those models be of concern?

I am not looking for speed like the OP. The reasons I was looking at used SAS drives were:
  1. I wanted to create a ZFS pool on Proxmox as a backup to my main TrueNAS pool.
  2. I was planning on a 6-drive RAIDZ2 pool and keeping 1 HDD as a cold spare -- so I would be buying a total of 7 drives (rough sketch of the pool layout below).
  3. At $15-$25 a pop, they are significantly cheaper than new SATA drives, which cost around $70 for a WD Red Plus 3TB and $105 for a Seagate IronWolf 3TB.
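A minimal sketch of the kind of pool I have in mind (the pool name and the by-id paths are placeholders, not my actual device names):
  zpool create backup_pool raidz2 \
      /dev/disk/by-id/scsi-DISK1 /dev/disk/by-id/scsi-DISK2 /dev/disk/by-id/scsi-DISK3 \
      /dev/disk/by-id/scsi-DISK4 /dev/disk/by-id/scsi-DISK5 /dev/disk/by-id/scsi-DISK6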
@Octopuss what's your latest report in terms of drive stability/reliability for the used SAS drives that you purchased?
 

Octopuss

Patron
Joined
Jan 4, 2019
Messages
461
I didn't get any SMART results from the seller, but there was a screenshot from HD Sentinel which I also use, so it was good enough for me.
https://www.ebay.com/usr/synergyindustrial is the one I bought the disks from. He was fairly willing to communicate too.
During the course of looking for suitable disks, I figured it was probably best to avoid anything older than five years or so, or stuff that had too many hours on record.

I haven't started using the disks yet because I ran into all kinds of complications, but at least I know for a fact that they are physically free of errors.
 

Alecmascot

Guru
Joined
Mar 18, 2014
Messages
1,177
The sellers that I have been looking at, for 3TB sizes, have the HGST HUS723030ALS640 & Seagate ST3000NM0023 drives. Should either of those models be of concern?
From anecdotal evidence, the HGST drives are more reliable than Seagate.
Also try for 7K4000-generation drives: HUS724XXXXX
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
From anecdotal evidence, the HGST drives are more reliable than Seagate.
Also try for 7K4000-generation drives: HUS724XXXXX

It doesn't have to be anecdotal... :smile:
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
I bought my first set of 5 drives for $13 a pop. Cheap enough not to fret too much if the HDDs fail faster than expected. Will see what condition they are in when they arrive. Getting ready to put them in my server and run badblocks/SMART etc.

I will test these out first and then buy another 2 -- so that I can create a RAIDZ2 pool with 6 disks and keep 1 spare -- assuming no drives fail during testing.
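For the record, this is roughly the burn-in I have in mind for each drive (a minimal sketch; /dev/sdb is a placeholder device name, and smartmontools needs to be installed):
  smartctl -i /dev/sdb          # confirm the drive is detected and check its basic info
  smartctl -t long /dev/sdb     # start the long self-test (it runs on the drive in the background)
  smartctl -a /dev/sdb          # review self-test results and error counters once it finishes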
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
After a while, temps fell to 36 38 38 42. Hah.
Certainly too hot for my taste. If you look at the Backblaze paper, anything over 30 C has a significant impact once the drives get 3-4 years old. Also, I remember recently reading in the fine print of a data sheet (Toshiba?) that the specified MTBF, failure rate, etc. were valid only below 30 C. I would encourage you to do some more research on this and form your own opinion.
 

Octopuss

Patron
Joined
Jan 4, 2019
Messages
461
I read enough about this to form an opinion a long time ago.
What you say is nonsense, from personal experience as well.
 

Octopuss

Patron
Joined
Jan 4, 2019
Messages
461
Also, cable management with molex = #FML.
[Attached photo: 1642849090404.png]


Can't be arsed to make it prettier though; even my OCD has its limits.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
@Octopuss , you can of course have an opinion that is different from mine. But accusing me of spreading nonsense without any argument to support that claim is at best unprofessional and impolite.
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
I have been running badblocks on my 5 drives and it has taken 24 hours to finish 77% of the very first test. It seems it will take about 32 hours for each pass. I have run badblocks on enough drives to know that there are 4 tests, each with a Writing phase and a Reading & Comparing phase. I know the phases are not all linear, but assuming it takes 32 hours per pass, it would take 256 hours = almost 11 days to finish. :(

Not to mention the 6 hours it took for the smartctl long test... and a 2nd round of smartctl long would add another 6 or so hours for the entire burn-in process to complete.

I wish it was faster.
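For reference, this is roughly the badblocks invocation I mean (a sketch only; /dev/sdc is a placeholder device name, and the -w write-mode test is destructive, so only run it on drives with no data on them):
  # 4 patterns (0xaa, 0x55, 0xff, 0x00), each written and then read back and compared
  badblocks -b 4096 -c 256 -wsv /dev/sdc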

EDIT: Forgot to mention I got the Seagate ST3000NM0023 drives -- all 5 manufactured in 2015, with between 45065 and 45073 hours each, so just a shade over 5 years of power-on hours. Start/stop cycles were between 100 and 109.
 
Last edited:

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
Bought another set of 2 HGST HUS724030ALS640 3TB drives to complete the 6x RAIDZ2 + 1 cold spare. These were manufactured in Dec 2015.

These HGST drives didn't show up in fdisk -l at all; fdisk couldn't even connect to them, though gdisk could. I looked up the smartctl info and sure enough they were 520-formatted, and the best thing -- both drives only had 19 hours of power-on time. I didn't know that fdisk couldn't connect to anything but 512-formatted drives.
In any case, I used sg3_utils to reformat them to 512. Now running badblocks on both, which will probably take about 72 hours per drive, since that's how long it took for my Seagate drives when I ran with a block size of 4096 and the -c flag set to 256. Running them both in tmux... so I should be able to set up the RAIDZ2 in about 3-4 days.
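For anyone who hits the same 520-byte issue, this is roughly what the check and reformat look like with sg3_utils (a sketch only; /dev/sg2 is a placeholder device name, and the format wipes the drive and can take hours to complete):
  sg_readcap --long /dev/sg2                 # shows the current logical block length
  sg_format --format --size=512 /dev/sg2     # low-level reformat to 512-byte sectors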
 