New Multi-Actuator Hard Drives from Seagate

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I recently got to look at some information in a presentation from Seagate about the new model drive they are releasing.

Here is a link: https://www.seagate.com/innovation/multi-actuator-hard-drives/

They say that each drive basically shows up to the operating system as two drives that are able to function fully independently, which nearly doubles the data rate to the drive. They said some of their industry partners have already been testing them. Anyone seen them in the wild?

They said they will send me 10 sample units for testing but they are supposed to be available on the market now.

Thoughts?
 

Attachments

  • faqseagateexos2x141604614622274.pdf
    277.6 KB
  • techpaperseagatemach21604614742879.pdf
    977.4 KB
Joined
Jul 2, 2019
Messages
648
Interesting. I think these could really expose any bottlenecks on the PCIe bus. I bet they won't be cheap.
 

Alecmascot

Guru
Joined
Mar 18, 2014
Messages
1,177
Just like an IBM 3380, then.
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
The devil is going to be in testing the interference between them. Small reads & writes to the same LBAs. Trip up the locking in the drive firmware, try to get the drive to deliver stale data, deadlock on a lock, etc... For that matter... There may be actual physical interference. You might be able to find a harmonic that causes the other actuator to throw a seek error... Or worse. :smile:

And of course these come out just as I depart storage and return to cloud... (sigh)
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
And of course these come out just as I depart storage and return to cloud... (sigh)
The folks they say are doing most of the testing are involved in cloud services. The two actuators must be addressed independently. Right now there are no hardware RAID controllers that they recommend and software like ZFS needs to put each actuator in a different vdev. There are issues to be resolved, but the performance graphs they showed looked really good.
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
The folks they say are doing most of the testing are involved in cloud services. The two actuators must be addressed independently. Right now there are no hardware RAID controllers that they recommend and software like ZFS needs to put each actuator in a different vdev. There are issues to be resolved, but the performance graphs they showed looked really good.

If they present as two 7TB disks that are entirely or even sort of unrelated to each other... That's going to be its own special kind of hell. :oops:
 

AlexGG

Contributor
Joined
Dec 13, 2018
Messages
171
If they present as two 7TB disks that are entirely or even sort of unrelated to each other... That's going to be its own special kind of hell.

Why would that be? Laying out vdevs may become a bit more fun than usual, but otherwise, I fail to see additional complications.

With three normal drives, A B C, build one RAIDZ1 vdev A+B+C. With three twin drives A1/A2, B1/B2, C1/C2, build two RAIDZ1 vdevs: A1+B1+C1 and A2+B2+C2. Should one drive fail, you get two identically degraded vdevs. As the heads are independent, rebuild time should not be affected.
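
A rough sketch of what that could look like from the command line, with made-up device names (assuming drive A shows up as da0/da1, drive B as da2/da3, and drive C as da4/da5):

# one pool, two 3-wide RAIDZ1 vdevs: one per set of "halves"
zpool create tank raidz1 da0 da2 da4 raidz1 da1 da3 da5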
 
Joined
Jul 2, 2019
Messages
648
Laying out vdevs may become a bit more fun than usual
I would think that the vdev layout would need to ensure that the two LUNs end up in different vdevs/datasets. I suspect that the controller board could be a single point of failure...
 

AlexGG

Contributor
Joined
Dec 13, 2018
Messages
171
I would think that the vdev layout would need to ensure that the two LUNs end up in different vdevs/datasets. I suspect that the controller board could be a single point of failure...

Yes, exactly that. Spindle motor (generally anything preventing spin-up, including heads stuck to the platter) and helium leak are also single points. So one should assume the two halves always fail simultaneously and plan accordingly.
 
Joined
Jul 2, 2019
Messages
648
Maybe more interesting than fun (although that depends on what you're into :eek:)
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
If they present as two 7TB disks that are entirely or even sort of unrelated to each other...
That is what they said in the presentation, two 7TB drives... The two actuators (inside a single drive) each present to the OS as a separate LUN; it only works with SAS right now, and they recommend a SAS3 controller for the bandwidth.
So one should assume the two halves always fail simultaneously and plan accordingly.
They did point out that each LUN of the drive needs to be in a separate failure domain so that it doesn't present as two drive failures in one array group (vdev in ZFS), but they don't suggest using it with hardware RAID right now. They don't have any controllers that are validated with it yet.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Got to tell you that I don't completely understand the claims being made about a dual-actuator drive. The claim that more IOPS can be achieved is beyond me; if I understand it correctly, it's basically two 7TB hard drives in one unit on a SAS connection. They share the spindle motor, the case, and the same real estate, but otherwise nada. That, to me, is no different from having two separate hard drives. I can see some benefit in physical space reduction and also in electrical use, but stating that IOPS is doubled sounds to me like a politician spinning his/her agenda.

I can see that two drives could achieve double the IOPS, but only when you are talking about the overall pool/vdev. I think the docs/video said that, unless there is funding for it, the firmware will not combine the two sets of heads into a single presented unit, which I think would be the goal, but it's all about the bottom dollar. Right now it's up to the system you install this into to make that happen.

And as previously stated, if you had a RAIDZ1 and one of these drives fails, your data is toast. With a RAIDZ2 you are living on the edge. All factors to take into consideration.

@Chris Moore Thanks for bringing this topic to our attention; it's fascinating what is and can be accomplished. To be honest, I was thinking the head actuators were on opposite sides of the spindle, with a duplicate set of heads having access to the same data as the normal set, so throughput and IOPS could be doubled while the drive was treated as a single LUN. Well, that is what I envisioned before I read the docs and saw the videos.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
That, to me, is no different from having two separate hard drives.
The difference is that you effectively fit two hard drives in the space that one hard drive used, and it only takes a little more power than one regular drive. So you could double the "drive" count (double the IO) without needing to double the space, the power, and the heat.
In the presentation and the subsequent phone calls I have had with them, they have talked about cloud service providers using these drives to make their services more responsive and more cost-effective.
And as previously stated, if you had a RAIDZ1 and one of these drives fails, your data is toast. With a RAIDZ2 you are living on the edge. All factors to take into consideration.
I agree, it appears to function similarly to having two drives, but you would want to map each LUN into a different failure domain, so that will be interesting.
 

AlexGG

Contributor
Joined
Dec 13, 2018
Messages
171
And as previously stated, if you had a RAIDZ1 and one of these drives fails, your data is toast. With a RAIDZ2 you are living on the edge. All factors to take into consideration.

You may want to think about this drive as having two halves inside, say lower and upper. Both halves work independently but fail simultaneously. So, if you have three drives, you don't make one 6-wide RAIDZ1 out of them. You make a pool of two 3-wide RAIDZ1s, one made of upper halves and the other of lower halves. This way, when a drive fails, you get two RAIDZ1s each missing one drive. Thus you get better IOPS but retain the same fault tolerance.
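
For illustration, keeping the made-up device names from my earlier example (drive B's two LUNs being da2 and da3), losing that one physical drive should leave the pool looking roughly like this:

zpool status tank
  pool: tank
 state: DEGRADED
config:

        NAME        STATE     READ WRITE CKSUM
        tank        DEGRADED     0     0     0
          raidz1-0  DEGRADED     0     0     0
            da0     ONLINE       0     0     0
            da2     UNAVAIL      0     0     0
            da4     ONLINE       0     0     0
          raidz1-1  DEGRADED     0     0     0
            da1     ONLINE       0     0     0
            da3     UNAVAIL      0     0     0
            da5     ONLINE       0     0     0

Both vdevs are degraded, but neither is missing more than one member, so the pool stays available.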
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
They said they will send me 10 sample units for testing but they are supposed to be available on the market now.
Update: They decided that they would not be sending me any drives for testing because they don't think I (the organization I work for) will buy enough drives in the future to make it worthwhile to them. We have only bought (just in the branch I am in) about 100 drives a year for the past seven years. That is nothing in comparison to the big data centers that Google and Facebook, etc. are running.
I understand. Not happy, but I understand.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
You may want to think about this drive as having two halves inside, say lower and upper. Both halves work independently but fail simultaneously. So, if you have three drives, you don't make one 6-wide RAIDZ1 out of them. You make a pool of two 3-wide RAIDZ1s, one made of upper halves and the other of lower halves. This way, when a drive fails, you get two RAIDZ1s each missing one drive. Thus you get better IOPS but retain the same fault tolerance.
I don't use SAS, so I have what might sound like a stupid question. How would a person manage the hard drive halves (the LUNs) to ensure that the drives are assigned properly? I'm not sure how drive da0, da1, da2, etc. relates to LUN 0, LUN 1, LUN 2, etc., and I didn't get the feeling that it relates directly. My question is simply how easy it would be to ensure you get the configuration you expect.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
My question is simply how easy it would be to ensure you get the configuration you expect.
Unless the GUI for FreeNAS were modified, I expect the configuration would need to be done from the command line. That isn't usually very difficult, but it is not for the novice. These are data-center drives, so most users will never see them. I have called around looking myself and have not found a vendor that is stocking them yet, or even one that has them listed for pre-order.

I have not created a pool based on LUNs before, but you can see the LUN number by using camcontrol devlist, and the output looks similar to this:
[screenshot: camcontrol devlist output]
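
For reference, the format is roughly like the lines below (illustrative only; the model string and bus/target numbers are made up). A dual-actuator drive should show up as two entries on the same target with lun 0 and lun 1, each getting its own daN device:

<SEAGATE EXOS2X14 0001>            at scbus0 target 9 lun 0 (da4,pass4)
<SEAGATE EXOS2X14 0001>            at scbus0 target 9 lun 1 (da5,pass5)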


Apparently Microsoft had some sample units before December 2019. Look at this:
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
That isn't usually very difficult, but it is not for the novice. These are data-center drives, so most users will never see them.
LOL, you are a funny man. I have a premonition... As soon as a home user can buy a set of these drives, we will see complaints about how difficult it is to configure TrueNAS to support them. And since TrueNAS is considered a business server, someone will make the argument that it needs to be easier to configure. Let's be honest: many home users feel that even though FreeNAS/TrueNAS is a free product, they are owed a product that is easy to use and that requires no technical skills to make it work. I myself think a disclaimer should be added stating, in effect, that technical knowledge/skill is required to configure and manage this application and hardware.

So the writing is on the wall: once these drives can be had by home users, it will become problematic. I'm a "glass half empty" person today.
 

AlexGG

Contributor
Joined
Dec 13, 2018
Messages
171
As soon as a home user can buy a set of these drives, we will see complaints about how difficult it is to configure TrueNAS to support them.

But will we? I'm afraid that people will just treat the dual-head drives the same as regular drives. With no regard for fault domains whatsoever.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
But will we? I'm afraid that people will just treat the dual-head drives the same as regular drives. With no regard for fault domains whatsoever.
Exactly. It will become a problem when drives fail.
 