How do old IBM SAS drives stack up to new drives?


helloha

Contributor
Joined
Jul 6, 2014
Messages
109
I have to deploy some older Supermicro X8 servers and hook them up over 10GbE.

They will be used as scratch disks on the network for rendering, so we don't need much storage, but it has to be reasonably fast (300 MB/s minimum). I'm on a budget, so I can't put in SSDs, and consumer SSDs are not an option as they will get hammered quite a bit. The chassis allows for four disks.

I was wondering if I could use some old hard disks in RAIDZ1, such as the IBM 73.4 GB 15K SAS 3.5" (39R7348).
But since they are older, I was wondering how they compare to modern drives. If a drive fails it's no biggie to swap it out; I'm buying some spares.

Cheers!
Karel.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Interface speed looks like it's the same, and raw throughput will be pretty decent. But the 3.5" drives will probably have lower IOPS than a 2.5" drive of the same capacity.
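
Back-of-the-envelope, assuming those 73 GB 15K drives sustain something like 90-120 MB/s sequential (typical for that generation, but check the spec sheet): four drives striped is roughly 360-480 MB/s, and a 4-drive RAIDZ1 streams from three data disks, so roughly 270-360 MB/s. Either should be in range of the 300 MB/s target for sequential work.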

And as for RAIDZ1, you will only have the IOPS of a single drive. As a scratch disk, I'm not sure that will be acceptable. To improve IOPS you need more vdevs, so striped mirrors are a good choice.
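
A minimal sketch of the two layouts at creation time, assuming FreeBSD-style device names (da0-da3) and the pool name "scratch", both placeholders:

    # RAIDZ1: one vdev, so roughly the IOPS of a single drive
    zpool create scratch raidz1 da0 da1 da2 da3

    # Striped mirrors: two vdevs, so roughly twice the IOPS
    # (usable space of two drives instead of RAIDZ1's three)
    zpool create scratch mirror da0 da1 mirror da2 da3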
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
The drives should be as fast or faster than the typical SATA drives recommended 'round these parts. They are lower-density, so your seek times may be a bit longer (more head movement required).

Since these are for scratch storage, is redundancy even required? What happens if the worst happens - the server or array goes tango uniform? If the software is smart enough to start over (assuming the loss in rendering time isn't too significant), you might just do a stripe of 4 drives and accept the risk.

Otherwise, you should do a pair of striped mirrors for improved performance. You may need to jump up to larger drives.
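
If you go that route, the no-redundancy pool is just four single-disk vdevs; a sketch with the same placeholder names as above:

    # Plain stripe: four vdevs, maximum space and IOPS, zero redundancy
    zpool create scratch da0 da1 da2 da3

Lose any one drive and the whole pool is gone; that's the risk you'd be accepting.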
 

helloha

Contributor
Joined
Jul 6, 2014
Messages
109
Thanks for the reply. The reason I was thinking of RAIDZ1 is that these are older, used drives and I'd like to avoid data corruption.

Does ZFS use checksums on striped arrays? I know it can't fix errors because there's no parity, but could this somehow result in corrupted data?
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Does ZFS use checksums on striped arrays?
ZFS uses checksums on everything. In order to correct an error, it needs a mirror or RAIDZ(n). IOW, if you had a single disk, ZFS would detect the error but be unable to correct it (though there is an option to store multiple copies of each block on the same device).
I know it can't fix errors because there's no parity
It would check the checksum on the other mirror(s), and if those were valid, it would correct the erroneous data.
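
To illustrate with the placeholder pool name from above: even on a plain stripe you can tell ZFS to keep two copies of each block, and a scrub will verify every checksum on the pool:

    # Keep two copies of each block (applies only to data written after this is set);
    # protects against bad sectors, not a whole-disk failure, and doubles space used
    zfs set copies=2 scratch

    # Read back and verify everything in the pool; errors show up in status
    zpool scrub scratch
    zpool status -v scratch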
 