
Seagate 8TB Archive Drive in FreeNAS?

Status
Not open for further replies.

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
I have no idea what the "Power-On Hours 8760 (24×7)" is supposed to indicate
This is the number of power-on hours per year the drive is designed for: 24 hours × 365 days. With consumer drives it's usually 2920 hours/year (365 days × 8 hours/day).
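The arithmetic behind the two ratings, for reference:

```python
# Rated power-on hours per year, derived from the assumed duty cycle:
# "24x7" drives are rated for round-the-clock operation, while consumer
# drives are typically rated assuming roughly 8 hours of use per day.
poh_24x7 = 24 * 365      # 8760 hours/year
poh_consumer = 8 * 365   # 2920 hours/year
print(poh_24x7, poh_consumer)  # 8760 2920
```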
 

Arwen

Neophyte Sage
Joined
May 17, 2014
Messages
1,163
@Arwen has been using one or more of these drives (I don't recall how many) since March, and I've seen no complaints from him about their performance or reliability. Maybe the drives you bought came from a bad batch. And what about the temperature of your drives?
...
Arwen is a she...

My Seagate 8TB SMR drive seems fine. I've done 5 x 1.2TB backups to it. The first was
over-written because the FreeNAS pool version was higher than that of ZFS on Linux, but the
others are cumulative (since I have the space). Also, the scrubs (at least one
before or after each backup) run fine.

The 8TB drive sits in a dual-drive, hot-swap eSATA enclosure with a fan.

I do have some hot-swap problems getting FreeNAS to recognize this eSATA drive,
but my procedure is getting better. The trick seems to be getting the drive spun up
before the eSATA cable is connected to the FreeNAS box.

For my use, this 8TB SMR drive appears fine. It's not fast: I get about 30 MB/s write
speed using "rsync" full copies, and reads up to 150 MB/s.
 

Z300M

Neophyte Sage
Joined
Sep 9, 2011
Messages
865
Arwen is a she...
Sorry, your highness. It's those baggy, loose-fitting clothes you were wearing. :)
 

Arwen

Neophyte Sage
Joined
May 17, 2014
Messages
1,163
It seems to me that @Arwen is using her SMR drive quite differently from the way @Mathias Johansson was using his SMR drives, so different outcomes should not be a big surprise.
Yes. In some respects, I am using the drive as it was designed: "Archive".

It's possible that the Synology device does things differently with regard to drive readiness.

For example, some people have no trouble using WD Green drives with quick auto-parking
turned on, perhaps because their drives are always doing something; even trivial reads keep
them from parking. But others find that WD Green drives occasionally drop out of a RAID
set. Finally, some find the WD Greens unusable in a RAID set unless they change the
auto-parking timer.

My WAG would be that the Seagate SMR drives (regardless of size) currently have too long
an unavailable window for the Synology NAS. The disk(s) are probably flushing their fast disk
cache to SMR space, and during that time the drive may not accept new requests (read or write).
It appears to be okay as a single-disk ZFS pool (using FreeNAS).

NOTE: I think the Seagate 8TB fast disk cache is 20GB.
 

9C1 Newbee

Senior Member
Joined
Oct 9, 2012
Messages
483
Sorry, your highness. It's those baggy, loose-fitting clothes you were wearing. :)
She has a very attractive internet connection too.
 

Mathias Johansson

Junior Member
Joined
Oct 14, 2013
Messages
21
@Arwen @Z300M Yesterday I rebuilt the whole setup in another chassis with all components exchanged: new motherboard, PSU, chassis, etc.

I refuse to believe that the disks are this crappy. It could be some other external factor that eats the disks.

I will update you guys and girl.

//Mathias
 

Arwen

Neophyte Sage
Joined
May 17, 2014
Messages
1,163
Mathias Johansson,

If you are starting from scratch with your SMR drives, you may want to simply write zeros to every block
of the individual disks (before RAIDing them). Then run long SMART tests. If everything is good, RAID
them and run some straightforward data tests. And if using ZFS, scrub it!

For me, running backups IS the test. Also, a scrub is run on this SMR disk before or after each backup. If
the SMR drive experiences problems, then it's still likely my original data is intact. (The source is 4 x 4TB
drives in a RAID-Z2 pool.)

If this Seagate 8TB SMR is still working fine in 6 months, and if regular 8TB disks are not cheap by then,
I will likely buy a second Seagate 8TB SMR (or >=10TB if they are available).
 

diskdiddler

Dedicated Sage
Joined
Jul 9, 2014
Messages
2,108
Seagate does now make 8TB "normal" (non-SMR) drives - but they are NOT cheap, around $500 a pop; that's $3k to deck out my FreeNAS machine.
Ouch.
EDIT:
Oh and they are gross 7200RPM disks.
 

shnurov

Member
Joined
Jul 22, 2015
Messages
70
I'm using one of these in my offsite NAS and so far I haven't had issues with it.

Given that I need to replicate two data sets, 6TB and 8TB, I am planning on getting a second drive, as the price difference between the 6TB and 8TB archive drives is minimal. I do recall that when I was simply testing them, the performance went from 140MB/s to 10-15MB/s for no apparent reason.
 

Arwen

Neophyte Sage
Joined
May 17, 2014
Messages
1,163
...
I do recall that when I was simply testing them, the performance went from 140MB/s to 10-15MB/s for no apparent reason.
Actually there is logic, it's just not explained in most of the documentation.

What I think is happening is that the SMR drives have a non-shingled space (I think it's 20GB for the Seagate
8TB model) for write cache. When it's full, it has to be flushed to shingled space. A full-file (no incremental)
1.4TB backup to my Seagate 8TB SMR averages 30 megabytes per second.
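Some rough numbers on that theory (both figures are my guesses from this thread, not published specs):

```python
# How long a full persistent cache could take to drain to shingled space,
# assuming ~20 GB of non-shingled cache and ~30 MB/s sustained shingled
# write speed. While draining, the drive may respond slowly or not at all,
# which would explain sudden drops from ~140 MB/s to 10-15 MB/s.
cache_bytes = 20e9
drain_bytes_per_s = 30e6
minutes = cache_bytes / drain_bytes_per_s / 60
print(round(minutes, 1))  # ~11.1 minutes of degraded (or stalled) service
```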
 

shnurov

Member
Joined
Jul 22, 2015
Messages
70
Thanks for the explanation - it answers a lot of the questions I was looking into - even though I'm OK with the performance I'm getting now.
I am still in the process of setting up my replication, but then I would be limited by the upload at my studio - which is unfortunately 10MBit and waaaay below the average 30MB/s you're referring to!
 

Doug183

Member
Joined
Sep 18, 2012
Messages
31
Thanks for posting, because creating a long-term archive solution is essential to my business - video archiving. Currently I am using LTO-6 and Bru backup software, and well, it has its pros and cons.

Pros:
- LTO is a proven, reliable on-the-shelf archive solution.
- It's cost-effective at about 70 TB using Bru or Archiware. (I use Bru.)

Cons:
- It's slow and time-consuming. A 27 TB backup and verify takes ~8 days at ~170 MB/s copy speed (~4 days for the backup and then ~4 days for the verify pass). Do that for a second offsite copy and now it's 16 days.
- The archive software is, at best, passable. It doesn't handle hardware flakiness well, and there is no easy way to verify it copied all your files. (I would just love a simple rsync directory compare.)

So running 5 x 8TB Seagate archive drives in RaidZ1, or 6 in RaidZ2 (32TB), would be a great solution, but I am concerned with their longevity and bit rot if they sit on the shelf. I am not so worried about resilver time, but I also don't want to get myself into a hellish situation that I didn't anticipate. So here are a few questions.

1) If I set up either a 5x8TB RaidZ1 or a 6x8TB RaidZ2, what copy speeds will I see as I copy large video files? (Note: this is a copy-once procedure.)
1a) As the drives fill up, will the speed drop off badly or will it remain tolerable? (Losing ~15% is fine, as opposed to dropping from 300MB/s to 30MB/s.)

2) How long will these drives be good sitting on a shelf?
2a) Will bit-rot set in, since I won't be scrubbing on a regular basis?
2b) If I do run a scrub once a year (or however often is recommended), can I get myself into a bad speed situation?

3) If a drive does go bad, I don't really need to spend time resilvering. I just need the ZFS pool to limp along until the restore is done. Then I can destroy the zpool and back up again.
3a) However, if I do have to resilver, what kind of time are we talking about given my setup?

4) Is there anything I am missing in regards to reliability?
4a) And in terms of speed? (I really want to avoid 30MB/s copies, restores, or resilvers. That turns 4 days of data moving into nearly 21 days: 170MB/s LTO vs. 30MB/s is a ~5x multiplier.)
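To sanity-check that multiplier, a quick single-pass estimate (raw sequential math only; real runs add verify passes and robot/handling overhead, which is why my actual LTO jobs take longer):

```python
def transfer_days(terabytes, mb_per_s):
    """Days for one pass over `terabytes` TB at a sustained `mb_per_s` MB/s."""
    return terabytes * 1e12 / (mb_per_s * 1e6) / 86400

# One 27 TB pass at healthy vs. degraded SMR speeds:
print(round(transfer_days(27, 170), 1))  # ~1.8 days at ~170 MB/s
print(round(transfer_days(27, 30), 1))   # ~10.4 days at a degraded 30 MB/s
```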


My ZFS setup:
- 30 x 4TB drives divided up into 10x4TB RaidZ2 vdevs. Each zpool is broken into two datasets: one of 27TB, with the ~5TB left over in the second. (These sizes were chosen to allow the LTO robot to run unattended - its capacity is 11 slots of 2.5TB tapes, ~27.5TB.)
- Xeon(R) CPU E3-1230 V2 @ 3.30GHz
- Supermicro X9SCL/X9SCM
- 32GB of ECC memory
- 10GbE Myricom card (2 SFP+ ports) direct-connected to the computers.
- Using AFP to connect host computers; ~350MB/s copy speeds from mini-SAS RAID5 to ZFS shares (or SMB2 on Windows 7, but with much slower transfer speeds).
 

AVB

Member
Joined
Apr 29, 2012
Messages
143
Absolutely. I have a 5TB SMR Seagate drive and it took FOREVER to put data on it. It's an upgrade of a backup drive in Win 10, and it was at best half the speed of the drives it replaced. Once I had data on it there didn't seem to be any problems with read speeds, though. Based on my experience I wouldn't use any of the SMR drives for RAID of any type.

Those drives are SMR. Without special filesystem support (absent in ZFS) they may be quite slow on random write/rewrite patterns. I had no chance to test them in practice, but from theoretical knowledge I would avoid them, if possible.
 

Arwen

Neophyte Sage
Joined
May 17, 2014
Messages
1,163
Doug183,

A bit too many questions, and most are beyond my knowledge. But here it goes:

A 6-disk RAID-Z2 with 8TB SMR disks should be good for data reliability,
even if you store them on a shelf and just run a scrub once a year. My opinion
is that a 5-disk RAID-Z1 would not be a good idea when dealing with drives over
1TB. Plus, I'd store the disks either in static bags and padded containers,
or in enclosed drive sleds and padded containers.

For speed, if you start the copies as soon as the data is on your main pool, you
have the backup always in progress, and the overhead of the backup should be
minimal. The write speed of 4 data disks (6-disk RAID-Z2) should be better than
my single-disk 30MB/s write speed. I don't know if you would get
120MB/s (I'm a bit sleepy now, so my math skills are impaired :).
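For what it's worth, the naive streaming ceiling works out like this (assumptions: my single-drive ~30MB/s figure and perfect striping; parity work, seeks, and cache-flush stalls will pull it down):

```python
# Naive streaming-write ceiling for a 6-disk RAID-Z2: writes stripe
# across the 4 data disks, each assumed to sustain ~30 MB/s to SMR
# space (the single-drive figure from earlier in this thread).
data_disks = 6 - 2          # RAID-Z2 spends 2 disks' worth on parity
per_disk_mb_s = 30
print(data_disks * per_disk_mb_s)  # 120 MB/s, best case
```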

As for 2 copies, there is nothing stopping you from using LTO-6 tapes for one
copy and a 6-disk RAID-Z2 as the other. There's always something to be said for
media diversity.

Last, are the shingled drives suitable for your use? Most people here simply do
not have enough experience with them, and certainly not with anything near your
intended use. Mine is cold storage / backups, so the drive is really only spinning
during backups and scrubs. Beyond that, I just don't have the experience.
 

Yatti420

Neophyte Sage
Joined
Aug 12, 2012
Messages
1,436
BOOM, disk number 4 crashed...

Checked the disk: 16k bad sectors.

Tomorrow I will send them back; these disks are just bloody insane...

Now it will be 6 TB RED disks!
Just curious what hardware are you running?
 

rs225

Neophyte Sage
Joined
Jun 28, 2014
Messages
878
A couple questions, if anybody knows:

Do the Archive drives meet ACID-style durability guarantees the way a normal drive does? I would suspect not, because the rewrite cycle opens a window for destroying old data.

Do the Archive drives have optimizations for particular file systems, or do they recognize frequently overwritten areas and move them to non-shingled areas? For example, the ZFS uberblocks would perform better if the front and end sections of the drive were stored in non-shingled space.

I hope this isn't a duplicate post. Has anyone else noticed frequent SSL certificate errors on the site for the past week or two?
 

Arwen

Neophyte Sage
Joined
May 17, 2014
Messages
1,163
Actually, thinking about the Archive / SMR type drives a bit more, the non-shingled space should be SSD,
meaning these would be ideal drives to make hybrids. Something like 16GB of SSD cache that shrinks as bad
blocks are found (since it's just cache, not sized data space).

With SSD cache we could get a bit of fault tolerance in the drive without too much slowdown. Backups
of the track(s) to be over-written would be copied to SSD, to help prevent data loss. In fact, the SSD would
mostly be write-only (just like a ZFS SLOG), since the RAM cache copy would be used to re-create the
over-written track(s) (except in a recovery state).

Add in TRIM / DISCARD support, so the drive can optimize the free space, and these Archive / SMR drives
start to become quite usable.
 

Rainwulf

Member
Joined
Jul 12, 2015
Messages
67
So how are these 8TB Seagate Archive disks shaking out? Some of the original posters have had these now going on 6+ months.

I'm kicking around picking up 3 or 4 of these 8TB drives for a dedicated backup server. The drives would be set up as either RaidZ1 or RaidZ2.
3 x 8TB RaidZ1 = 14.6TB (est. $720)
4 x 8TB RaidZ2 = 14.6TB (est. $960)
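A quick check of where that 14.6TB figure comes from (ZFS metadata overhead will shave a bit more off in practice):

```python
# Both layouts land on the same number because usable space is
# (disks - parity) x 8 decimal TB, which most tools report in binary TiB.
def usable_tib(disks, parity, tb_per_disk=8):
    return (disks - parity) * tb_per_disk * 1e12 / 2**40

print(round(usable_tib(3, 1), 1))  # 3x8TB RaidZ1 -> 14.6
print(round(usable_tib(4, 2), 1))  # 4x8TB RaidZ2 -> 14.6
```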

Is the lack of TLER a show-stopper for using these with ZFS/RaidZ? I would be writing to these drives maybe a couple times per month when long-term backups run. I'm guessing that running RaidZ2 would lessen any negative effects of not having TLER as I could always just pull the drive that is hanging everything up and still have some redundancy, right?

The other alternative was to pick up two of the USB 3.0 8TB externally cased Archive drives and use those independently. However, being USB external drives, it would probably be difficult to run ZFS on them, and even then they would not correct errors when scrubbed (just detect them).

Thoughts?

My gigantic array is nearly 2 years old now.
The current setup is still a Xeon E3-1220 with 16GB of RAM.

16 x 8TB drives in a single(!) RaidZ2 vdev; 92.0TB all up.

2 SSDs in a mirrored boot config, with all logs and the system dataset.
1 SSD as L2ARC.

No real problems so far, apart from some TCP/IP tuning in net.inet.tcp.*

The machine is nearly half full; 99 percent of the time the activity is reading, and the SMR drives are quite happily purring away, making their weird noises and doing their background activity.

Very happy with my investment: no drives failed yet, no SMART errors, and scrubs on a 6-week schedule.
 