FreeNAS extremely slow


Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Every block is stored across every disk in the vdev, in effect.

Thus, to read a block, every disk needs to be read, and if there is a slow disk it will slow everything down.
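
To make that concrete, here's a minimal sketch (a model of the behavior, not ZFS code; the disk speeds are made-up figures) of why a full-stripe read completes only when the slowest disk has delivered its piece:

```python
# Minimal sketch (not ZFS code): why one slow disk gates full-stripe reads.
# Disk speeds below are illustrative figures, not measurements.

def stripe_read_ms(chunk_kib: float, disk_speeds_mb_s: list) -> float:
    """Each disk returns its chunk in parallel; the block is usable only
    once the slowest disk has delivered its piece."""
    return max(chunk_kib / 1024 / s * 1000 for s in disk_speeds_mb_s)

healthy = [150.0] * 9             # nine disks at ~150 MB/s
degraded = [150.0] * 8 + [30.0]   # same vdev, one disk limping at 30 MB/s

print(stripe_read_ms(128 / 9, healthy))   # ~0.09 ms per 128 KiB record
print(stripe_read_ms(128 / 9, degraded))  # ~0.46 ms: 5x slower, set by one disk
```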
 

MR. T.

Explorer
Joined
Jan 17, 2016
Messages
59
Every block is stored across every disk in the vdev, in effect.

Thus, to read a block, every disk needs to be read, and if there is a slow disk it will slow everything down.

I had no idea.
Is it just me who thinks that is insanity?

I would assume that a block is broken down and divided into sectors that get spread around according to some algorithm (but mainly according to free space on a specific drive).
To reconstruct a file I would expect all sectors to be read and the file reconstructed. Assuming my case of 9 data disks + 2 parity, a potential 1/9 of the sectors would come from a slow disk. I would expect the retrieval of those 1/9 of the sectors to be slow, but the rest should be extremely fast, as it should have the speed of 9 disks combined.

The only reason I can see for it being slow across the board is if it retrieves the sectors serially instead of in parallel, but that would be a bit stupid if the system actually knows it will need the rest of the sectors and can easily look ahead.

I only ask this because I am using "slow" drives to start with (Seagate Archive series), but even the lowest estimates for those disks give a read speed of 30 MB/s, or about 240 Mbps. That is an extremely low estimate and the disks are capable of much, much more.
But even that should almost saturate a gigabit connection, and 9 times that should saturate almost any connection I throw at it.

Perhaps the problem is the access mode? Perhaps SMB only requests one block at a time, preventing FreeNAS from looking ahead and using the disks to their fullest?

I'm happy that I don't suffer from bitrot anymore and I have a degree of protection in case of disks dying, and I would not go back to a consumer-grade NAS, but it seems to me that FreeNAS is definitely not as good as it should be, probably held back by its satellite technologies (ZFS, NFS, SMB, etc.).
I don't see that being easy or even possible to correct, but that doesn't change the fact that it is a shame. I really am liking this software.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
While I might agree, in this small limited view, that yes, it seems weird. However, if you learn more about what's going on behind the scenes of the filesystem, it puts the challenge in perspective. Take 20 minutes to watch Matt Ahrens, the lead developer of ZFS, address some performance challenges with fragmented pools:
https://www.youtube.com/watch?v=AOidjSS7Hsg&index=1&list=PLeF8ZihVdpFfoEV67dBSrKfA8ifpUr6qC#

Last edited by a moderator:

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I only ask this because I am using "slow" drives to start with (Seagate Archive series), but even the lowest estimates for those disks give a read speed of 30 MB/s, or about 240 Mbps. That is an extremely low estimate and the disks are capable of much, much more.
That'd be very nice if the drive didn't have to seek around to find what it needs. If that were the case, you would be happier with tape drives.
but the rest should be extremely fast, as it should have the speed of 9 disks combined.
A lot of good that does you if you're stuck waiting for the last piece that's missing. The slowest drive dictates what kind of throughput you can expect.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I had no idea.
Is it just me who thinks that is insanity?

I would assume that a block is broken down and divided into sectors that get spread around according to some algorithm (but mainly according to free space on a specific drive).
To reconstruct a file I would expect all sectors to be read and the file reconstructed. Assuming my case of 9 data disks + 2 parity, a potential 1/9 of the sectors would come from a slow disk. I would expect the retrieval of those 1/9 of the sectors to be slow, but the rest should be extremely fast, as it should have the speed of 9 disks combined.

The only reason I can see for it being slow across the board is if it retrieves the sectors serially instead of in parallel, but that would be a bit stupid if the system actually knows it will need the rest of the sectors and can easily look ahead.

I only ask this because I am using "slow" drives to start with (Seagate Archive series), but even the lowest estimates for those disks give a read speed of 30 MB/s, or about 240 Mbps. That is an extremely low estimate and the disks are capable of much, much more.
But even that should almost saturate a gigabit connection, and 9 times that should saturate almost any connection I throw at it.

Perhaps the problem is the access mode? Perhaps SMB only requests one block at a time, preventing FreeNAS from looking ahead and using the disks to their fullest?

I'm happy that I don't suffer from bitrot anymore and I have a degree of protection in case of disks dying, and I would not go back to a consumer-grade NAS, but it seems to me that FreeNAS is definitely not as good as it should be, probably held back by its satellite technologies (ZFS, NFS, SMB, etc.).
I don't see that being easy or even possible to correct, but that doesn't change the fact that it is a shame. I really am liking this software.

I'm referring to a ZFS block, which is broken up into 'sector'-like pieces consisting of data and parity, which are then written to disk.
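
As a rough illustration of that split (this is not ZFS's actual allocator; real RAIDZ rotates parity across the disks and pads to physical sector sizes), assuming an 11-wide RAIDZ2:

```python
# Rough sketch, not ZFS's actual allocator: one record split across a
# RAIDZ2 vdev so that every disk holds a piece of every record.

def split_record(record_kib: int, disks: int, parity: int = 2) -> dict:
    data_disks = disks - parity
    chunk = record_kib / data_disks                    # data share per disk
    layout = {f"disk{i}": f"{chunk:.1f} KiB data" for i in range(data_disks)}
    for i in range(parity):                            # parity shares
        layout[f"disk{data_disks + i}"] = f"{chunk:.1f} KiB parity"
    return layout

# A 128 KiB record on an 11-disk RAIDZ2 (9 data + 2 parity):
for disk, piece in split_record(128, 11).items():
    print(disk, "->", piece)
```

Since every disk holds part of the record, a read of that record touches every disk, which is why the slowest one matters.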
 

MR. T.

Explorer
Joined
Jan 17, 2016
Messages
59
I'm referring to a ZFS block, which is broken up into 'sector'-like pieces consisting of data and parity, which are then written to disk.


So some sanity is maintained. :)

A lot of good that does you if you're stuck waiting for the last piece that's missing. The slowest drive dictates what kind of throughput you can expect.

In my previous example I would expect reading from 9 drives to be about 9 times faster than reading the entire thing from the slowest disk.

I do understand that the slowest disk's speed will be a limiting factor, but I also expect the combined work of multiple drives to be better than if the slowest drive were doing all the work alone.
Even if the entire transfer is waiting on the last "sector" from the slow disk, it should still be a lot faster overall than if it had to read the entire thing from that disk.

Right now I am transferring at about 25% of my gigabit connection. I assume that is because I am transferring from a crappy NAS into FreeNAS.
But I do expect that, once FreeNAS is populated, the transfer speeds will max out my gigabit connection, or even both gigabit links (trunking) if multiple transfers are happening (very likely).
...At least that is why I built this, and it matches my experience of FreeNAS before all my disks decided to go to that landfill in the sky.

I have three more Seagate 8 TB disks to be delivered soon... let's hope these are as good as the other 6.


While I might agree, in this small limited view, that yes, it seems weird. However, if you learn more about what's going on behind the scenes of the filesystem, it puts the challenge in perspective. Take 20 minutes to watch Matt Ahrens, the lead developer of ZFS, address some performance challenges with fragmented pools:
https://www.youtube.com/watch?v=AOidjSS7Hsg&index=1&list=PLeF8ZihVdpFfoEV67dBSrKfA8ifpUr6qC#

depasseg, I haven't watched this video yet but I will do it today or tomorrow, as I am extremely interested in the subject. Thanks.
 
Last edited by a moderator:

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Hello Mr. T.
This was quite the read, I must say. I'm thankful that your and other contributors' insights have been shared.
I'd like to contribute a bit of a 'list' in terms of how to approach the scenario as it stands today while looking ahead.
I feel like a lot of advice has been put on the table, but some missing links can be filled in while adding emphasis to others.

Let's start with the hardware.
I've a nagging feeling that your drives have been put through quite the harsh life, in terms of both power and, potentially, cooling.
I'd immediately (that is, mid-crisis) make significant efforts to improve cooling. Drives will never be put to the test as hard as during resilvers. Now is when you want to have a proper cooling system sorted.

Step 1: ensure good ventilation around the box and clear all air filters (but don't rub a vacuum cleaner against the chassis with components mounted......!)
Step 2: add or replace fans to significantly increase cooling. Potential noise issues can be handled later (via scripts) and are fixable. At the moment, cooling is the priority, until all the RAIDZ2 recovery and salvage operations are complete and the system is considered stable, set up properly, and the data maintained.

The other part of my ambient-related concerns is power. That multiple old drives have failed on you (compared to the norm) may suggest a faulty PSU, which may itself have been damaged by bad-quality power from the grid. This is where a UPS comes into play. Get one. If not a real, proper one (i.e., one that is actually sufficient to power down your rig in the event of a power failure), get ANY UPS to at least smooth out disturbances from the grid, particularly since you've recently (IIRC) invested in a new PSU. If this were my situation, I'd consider it a high-priority task.

Now, on to setting up the box properly. It is evident there is some rudimentary stuff that needs to be set up before leaving the box to sit again.
I'll go through the list of stuff I find most important; some of it you can already tick off your list.

1. Set up your email, to receive warnings. (You had this running, right?)
2. SMART and scrub scheduling. Follow this guide blindly (make sure all drives/pools are included). It is tried and tested: https://forums.freenas.org/index.php?threads/scrub-and-smart-testing-schedules.20108/
3. Set up a list (not stored on the FreeNAS box) which matches serial numbers to the proper HDD. There is a script to do this, but it can be done manually, albeit with more copy/paste work: look for "Display drives identification infos (Device, GPTID, Serial)". The important part is not to get blinded by the device name ("da1"), since it may change; focus on the GPTID and the serial. I match these with a handwritten sticker on each drive containing a number. The list will be used any time you need to find which physical device should be the center of attention. This saves a lot of hassle. (See the sketch after this list.)
4. Configure the send-config email script. It is LOVELY to know a recent version of the config is available in your mail.
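
A minimal sketch of the idea behind point 3, assuming a host with smartmontools installed (the device names are placeholders; on FreeBSD the GPTIDs can be matched up with `glabel status`):

```python
# Hedged sketch: map device names to drive serial numbers via smartctl.
# Device names below are placeholders; adjust to your system.
import subprocess

def drive_serial(dev: str) -> str:
    """Pull the serial number out of `smartctl -i` output."""
    out = subprocess.run(["smartctl", "-i", f"/dev/{dev}"],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if line.startswith("Serial Number:"):
            return line.split(":", 1)[1].strip()
    return "unknown"

for dev in ["ada0", "ada1", "ada2"]:
    print(f"{dev} -> {drive_serial(dev)}")
```

Print the output, keep it off the NAS, and match it against the stickers on the drives.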

Now I'd look into the drives that are scattered around in several pools. If there aren't any particularly strong reasons why you want them separated (obviously there's no redundancy at all), I'd at least consider the following. Overall: convert them into a 'dirt box' RAIDZ1 containing perhaps a duplicate of the most important files, plus the least important ones (or the ones with the highest turnover rate, generating the most wear on drives). Now you see, the overall goal of this maneuver is not only to get some utility out of the drives, but to let you practice a proper burn-in and drive validation. Do the whole shenanigan as prescribed in the how-to guides:
https://forums.freenas.org/index.php?threads/building-burn-in-and-testing-your-freenas-system.17750/
https://forums.freenas.org/index.php?threads/how-to-hard-drive-burn-in-testing.21451/

This can be done while other operations are ongoing. Obviously, the first step is to empty the drives of data. As a side note, you can have multiple datasets on the same pool, so any argument for having 4 pools (with a dataset each) is out the door.

The last piece (not necessarily last in the order of execution) would be to reread the basics. At some point, you know, you've skimmed through it but not followed through.
Yes, the last step that I wish you'd take is to read through the newbie materials again: Cyberjock's guide, the ZFS primer (links in my sig), and the documentation. I can guarantee that even if some parts don't appeal much to your interests, a whole new understanding of the system's ins and outs will unfold. Some fragments stick each time; most of the information is lost and needs revisiting to stay fresh. These fragments are the building blocks of knowledge.

Cheers, Dice
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
4. Configure the send-config email script. It is LOVELY to know a recent version of the config is available in your mail.
Frankly, that's the one email alert I'd like to get rid of. Alerts for dying drives, degraded pools, etc. are much more useful, by orders of magnitude.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I only ask this because I am using "slow" drives to start with (Seagate Archive series), but even the lowest estimates for those disks give a read speed of 30 MB/s, or about 240 Mbps. That is an extremely low estimate and the disks are capable of much, much more.

The "record-scratch" sound that made in my head could have been heard from across town.

Those are SMR or "shingled" drives. The bits written to the platters literally overlap like shingles on a roof. Write speeds and latency in general are extremely poor with these drives, never mind the random writes you end up with after some amount of pool fragmentation.

SMR drives should be thought of as self-contained LTO tape with a SATA interface, not as general-purpose HDDs. If you really need that much data density, return the Seagates and pick up a conventional drive (probably spinning in helium)
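
A conceptual sketch of why rewrites hurt on SMR (not firmware-accurate; the band size is a made-up figure): rewriting one track forces a rewrite of everything shingled on top of it within its band.

```python
# Conceptual sketch, not firmware-accurate: SMR rewrite amplification.
BAND_TRACKS = 20                   # tracks per shingled band (made-up figure)

def tracks_rewritten(track_in_band: int) -> int:
    """Rewriting track k forces rewriting tracks k..end of its band,
    because later tracks partially overlap it like roof shingles."""
    return BAND_TRACKS - track_in_band

print(tracks_rewritten(0))    # 20: touching the first track rewrites the band
print(tracks_rewritten(19))   # 1: only the band's last track is cheap
```

Sequential writes just append at the end of a band, which is why these drives behave tolerably as tape-like cold storage and poorly under random rewrites.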
 

MR. T.

Explorer
Joined
Jan 17, 2016
Messages
59
Hi Dice,

Thank you for your post. I shall take it on board and do as advised.

The PSU was brand new, but I do not know if it was faulty. I do have a new one, so the problem is averted.
I do have a UPS but I have not had the time to set it up. I do have a surge protector, though (better than nothing).

Cooling looks OK. The ambient temperature never goes above 20 degrees and the box has plenty of fans. I don't think cooling is a problem at this point.
Putting a hand on the disks, they seem quite cool and none is even warm (I have checked that plenty of times over the duration of this crisis).
Noise is not a problem. I have a server right beside the NAS and THAT one is noisy :)
If heat becomes problematic I'll probably have to shut the entire thing down till I figure out some way to solve it, as the room the NAS and servers are in doesn't have any great way to circulate air. The fans help to a point, but after a while the entire room starts to heat up and the fans are just pushing hot air around. I have a swamp cooler, but I don't want to put lots of humidity near my electronics.
Also... I don't want to spend thousands on it. If I have to choose between the server and NAS and paying my mortgage... I will choose the mortgage.
But I will consider a proper air-conditioning unit, or look into relocating the kit somewhere cooler, although its noise restricts where I can set it up and still be able to sleep.

The disks had a very long and painful life, over 10 years for some of them. They were supposed to be a stopgap solution till I got new disks, but the new ones are the ones that failed.

The plan for the NAS is to have 2 volumes: one large (11 disks on RAIDZ2) and a small one of about 4 or 5 disks (on RAIDZ1).
The small volume is for frequently accessed data; the large one is for cold storage and will hardly ever be accessed.
I am using 8 TB Seagate disks for the large volume, but unfortunately I have not had, until now, the financial capacity to buy 11 of those in one go.

So... the plan was: take a bunch of 500 GB disks that I had lying around, buy one 8 TB disk, and buy whatever was missing to make up 11 disks as 2 TB Toshibas (4 disks).
Get the volume up and running, then slowly buy the rest of the 8 TB disks and replace the others one at a time (about 1 a month).
I had bought 3 extra 8 TB disks and was replacing the older 500 GB disks when this disaster struck.

That's how I ended up with a bunch of single-disk volumes... I put the replaced disks back in to try to get enough empty space to take all the data off the failing volume (I was pretty desperate at this point).

I have the entire pool resilvered now and it's up and running correctly.
I have removed the Toshibas from the NAS and it is now happily working.

I have 3 8 TB disks arriving today or tomorrow to replace the smaller disks, and that should leave me 2 disks short of having the entire pool at the correct size.

My next steps are:
-Replace the stopgap PSU with the new one I got for it.
-Get the UPS up and running.
-Move the data from the single-disk volumes into the big one and remove those disks.
-Fiddle with the swap to get a single dedicated disk for it.

As soon as I get the hardware into its final form I'll move on to the software and follow your post step by step.
At this point there is no reason to try to remedy any drive. The 2 TB ones that failed are brand new and under warranty; the 500 GB ones (which are in fact over 10 years old but didn't fail) are to be thrown out anyway.
I might set up a single-disk volume to use as a target for torrents... as the greatest killer of disks I know is torrenting.
Let it kill a disk and replace it with another one of the 500 GB ones.

Seeing all the pain I suffered at the hands of FreeNAS, one would expect me to be worried or displeased with the software, but I am in fact extremely happy. The disks died and I still managed to get all the data back. Even when the unthinkable happened (3 disks dying and two starting to churn out lots of SMART messages in a very short amount of time), I still managed to salvage ALL the data.

At this point there are only three things I can point to as FreeNAS negatives: the need for ridiculous amounts of RAM (due to ZFS, but still);
the lack of some built-in performance testing/reporting to know when some disk is slow and hurting overall performance;
and no way to set a disk to be used solely for swap, instead of swap using space across all the disks (I really do not like this).

I know these are very minor grievances, but some dev might come across this, decide it's a good idea, and implement it.

Thank you all for your help on this
 

MR. T.

Explorer
Joined
Jan 17, 2016
Messages
59
SMR drives should be thought of as self-contained LTO tape with a SATA interface

Exactly the purpose I am using them for. :)

SMR drives should be thought of as self-contained LTO tape with a SATA interface, not as general-purpose HDDs. If you really need that much data density, return the Seagates and pick up a conventional drive (probably spinning in helium).


I need the density, but they are to be used the way they're intended: long-term cold storage.
The performance is bad, but I have managed to saturate the gigabit connection... and that's all they need to do. These disks will be used a few times a month at most.
The reason to use FreeNAS with these is the bitrot protection.

What I can tell you about these disks from personal experience is: they are great!
Yes, they are shingled to get more data in the same space, but they are less than a fourth of the price of the next general-purpose HDD and a lot cheaper than any helium-filled one.
And so far they have been EXTREMELY reliable.


The general-purpose HDDs are on the small volume that I expect to have "normal" usage.

But thank you for pointing out the technology's drawbacks. I could have bought those drives thinking they had different capabilities than they do.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Overall, from what I gather from your last posts, you're pretty well set. Better than I anticipated. Good on you!
I'd agree on the point regarding FreeNAS vs. mortgages. There is definitely a limit to the costs one can sink into a NAS.

I might set up a single-disk volume to use as a target for torrents... as the greatest killer of disks I know is torrenting.
Let it kill a disk and replace it with another one of the 500 GB ones.
After testing the old drives through the burn-in procedure, and looking to see which drives spew additional errors and which seem fine (including the solnet array test): if it were up to me, I'd look into still having torrents on RAIDZ1 or mirrors of the old drives, rather than having them as single units. Well, that's me.

Good luck now.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Two more things (to let my mind release all tension by getting it all out there for you):
The disks are in their majority old 500 GB ones that I bought over the years; I have about 7 2 TB HDDs and 5 8 TB.

The RAM is non-ECC, but the motherboard supports ECC RAM and that is going to be the next upgrade as soon as I get this to work correctly and get the rest of the disks (6 more 8 TB disks to go).
- Are you considering getting ECC RAM?
- What brands are the listed HDDs?
 

MR. T.

Explorer
Joined
Jan 17, 2016
Messages
59
- Are you considering getting ECC RAM?
- What brands are the listed HDDs?

I just bought 32 GB of ECC RAM for the server... it was expensive but not terrible. It's in my plan to get ECC RAM for the NAS after I finish buying all the needed disks.

I have 9 8 TB Seagates (4 in the NAS, 2 in a commercial NAS, and 3 to be delivered today or tomorrow),
4 2 TB Toshibas... I think they all died,
2 3 TB Seagates,
2 1 TB Toshibas... one died, the other has a bunch of bad sectors and will be replaced as soon as I get a disk in,
and a bunch of 500 GB Seagates and Hitachis.

I also have 4 Western Digital 500 GB disks, 2 dead and 2 in another commercial NAS... but these are quite old... I didn't put them in FreeNAS because it would not boot with them in. Very strange.

I have spent £250 on a motherboard/CPU, £200 on 2 PSUs, £100 on the LSI HBA, £100 on power/SATA/SAS cables, and £100 on crappy SATA port multipliers that all died.
That is on top of £200 on a few 2 TB disks and a few thousand on 8 TB disks.
And probably more on other stuff I can't remember.
It's been expensive so far... but if I get a NAS with 60-70 TB of space on a single volume... it will be quite worth it :)
 

toadman

Guru
Joined
Jun 4, 2013
Messages
619
I'm pretty sure this is not FreeNAS's fault, but some (in my opinion) bad design choice by the developers of ZFS.
In my head it would be much better if the data block were read from the disk, the checksum verified, and, if it checks out, the block just returned. That would only require using the disk where the data is stored, and the slowest disk would probably not be touched.

The checksum helps with silent data corruption, but not redundancy. How would the redundancy work if the data were all on one disk? I'm not sure how else they would have designed it.

(One can use mirrors if more performance is needed while retaining redundancy.)
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
thnx.
Have you considered not getting more of those 8 TB shingled drives, but rather mixing it up with other brands for the remaining ones?
In the event that these already-questioned SMR drives cause you a lot of trouble ahead, you'd be a happier camper with a variety of drives rather than having all your eggs in the same basket (even though the basket has held up fine thus far).

edit:
I wasn't quite sure I was clear on the idea of burning in the older drives; obviously this also applies to the 'new drives' not yet committed to the pool!


I have spent £250 on a motherboard/CPU, £200 on 2 PSUs, £100 on the LSI HBA, £100 on power/SATA/SAS cables, and £100 on crappy SATA port multipliers that all died.
That is on top of £200 on a few 2 TB disks and a few thousand on 8 TB disks.
And probably more on other stuff I can't remember.
It's been expensive so far... but if I get a NAS with 60-70 TB of space on a single volume... it will be quite worth it
I can definitely relate. :cool:
Mostly in terms of money spent on items that no longer serve duty in the server.
In my case this includes materials and additional fans, about £200 worth, for what started out as a "cheap DIY box"... yeah, right. :oops:
And a lot worse: a £1300 Supermicro box that was supposed to become an AiO box replacing the X11 main system. I added RAM and replaced the PSU... and it still requires noise-control work to be acceptable. Did I mention it consumes over 4x the power of the current main box it is supposed to upgrade? ...That "investment" is still sitting in a corner collecting dust, awaiting a revival attempt. :mad:
Even after reviving the box, I'd have about £300 + £200 in redundant HBAs and PSUs that cannot be used anywhere.
I won't even add up the total number. I'd lash out. :mad:

The non-perks of becoming a FreeNAS enthusiast :rolleyes:

cheers,
 
Last edited:

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Frankly, that's the one email alert I'd like to get rid of. Alerts for dying drives, degraded pools, etc. are much more useful, by orders of magnitude.
Of course. This is a neat extra. I'd had enough when I reached some 100 backed-up configs and started to forget which ones were major and which were not. Now they arrive once a week, flooding my mailbox.
 

MR. T.

Explorer
Joined
Jan 17, 2016
Messages
59
The checksum helps with silent data corruption, but not redundancy. How would the redundancy work if the data were all on one disk? I'm not sure how else they would have designed it.

(One can use mirrors if more performance is needed while retaining redundancy.)

I mentioned that based on the idea that the volume would just be as fast as the single slowest disk.
That wouldn't make any sense.

It would have to either break the data down and use all disks in parallel, with the slow disk as the bottleneck, or write all the data onto a single disk.

Example (for a volume with 9 data disks + 2 parity):
Scenario 1:
A 9 MB chunk of data is broken into 9 blocks and written or read in parallel, one per data disk (the slow disk being the bottleneck, though it would never set the pace alone).
In this case the read/write speed would be the time the slowest disk takes to read/write the 1/9 of the data assigned to it (I believe this is how ZFS is implemented).

Scenario 2:
Data is written onto a single disk: all 9 MB would be written to one disk and the parity data would be written to the two parity disks.
In this case the read/write speed would be dictated by the speed of the disk where the data is stored. If that happens to be the slow disk, it would set the pace for the entire volume, but only when reading/writing to that disk.

In both cases the slow disk would slow down the volume, but the volume's speed would never be as slow as the slowest drive alone.

The checksum guarantees that the data is valid, but the redundancy is provided by the parity. On RAIDZ2 there are 2 parity blocks that allow reconstruction of the original data even with 2 pieces of it missing.
I'm not sure about the ZFS implementation, but I assume the parity is not all placed on a single disk (if it were, the slow disk would make the entire volume as slow as itself); instead, the parity is spread across all the disks.

TL;DR: The idea that a volume is only as fast as its slowest disk is ridiculous unless a MAJOR bad choice was made during implementation. The slow disk would impact the volume's speed and be the bottleneck, but never set the pace of the volume.
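
A back-of-the-envelope sketch of the two scenarios (illustrative speeds, purely sequential transfers, parity cost ignored):

```python
# Back-of-the-envelope comparison of the two scenarios above.
# Speeds are illustrative; assumes sequential I/O and no parity cost.

DATA_MB = 9.0
FAST, SLOW = 150.0, 30.0                 # MB/s: eight fast disks, one slow

# Scenario 1: 1 MB striped to each of 9 data disks in parallel;
# the transfer finishes when the slowest disk finishes its share.
striped_s = max((DATA_MB / 9) / s for s in [FAST] * 8 + [SLOW])

# Scenario 2, worst case: all 9 MB land on the slow disk.
single_s = DATA_MB / SLOW

print(f"striped: {striped_s:.3f} s")     # ~0.033 s, gated by the slow disk
print(f"single:  {single_s:.3f} s")      # ~0.300 s, 9x worse
```

On these numbers the stripe is paced by the slow disk yet still about 9x faster than reading everything from that one disk, which is exactly the distinction being argued here.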
 

MR. T.

Explorer
Joined
Jan 17, 2016
Messages
59
thnx.
Have you considered not getting more of those 8 TB shingled drives, but rather mixing it up with other brands for the remaining ones?
In the event that these already-questioned SMR drives cause you a lot of trouble ahead, you'd be a happier camper with a variety of drives rather than having all your eggs in the same basket (even though the basket has held up fine thus far).

edit:
I wasn't quite sure I was clear on the idea of burning in the older drives; obviously this also applies to the 'new drives' not yet committed to the pool!

I understood that the burn-in was for new drives as well. I was quite surprised by the idea of burning in old drives to see if they are up to scratch. That is a good idea and it hadn't occurred to me.

Those shingled drives are quite cheap at £201... the second cheapest are the WD Purples at £280.
That is a big difference when buying 11 drives (an extra grand, which would buy 4 more drives).

The reason for going with RAIDZ2 was exactly to reduce the risk of having a lot of the same drive. I would find it quite unlucky to lose more than two drives at the same time.

Is it that risky to have all these SMR drives?
...Now I'm getting nervous about this.

So far these drives have been great, though. I had 2 in a consumer-grade NAS and when I pulled them out they were extremely hot. They must have been cooking every day for 2 years before I noticed how hot they got. (This is what pushed me to build my own NAS, where I can guarantee the disks won't overheat.)

In an earlier post you mentioned there is no reason to have multiple volumes... I slept on this one, and I can think of a reason to have many small volumes instead of a big one: later expansion.
As you know, FreeNAS doesn't support adding or removing disks from a volume (increasing or reducing the number of disks). If I were to build a 20-disk volume with 1 TB disks, when it came time to upgrade I would have to swap all 20 disks before seeing the extra available space. That would make it quite expensive.
By having smaller 4-8 disk pools, increasing the size of the pools can be done gradually.
I assume this is not a problem for a big enterprise user, but for a "normal guy", buying 20 disks in one go can be prohibitive.

Another reason for having multiple volumes would be power consumption. If you spin down disks when not in use, having data segregated into volumes could prevent excessive spin-ups/spin-downs. Data that hardly ever gets used stays in one volume, data that is frequently used stays in another, and half the disks can spin down.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I understood that the burn-in was for new drives as well. I was quite surprised by the idea of burning in old drives to see if they are up to scratch. That is a good idea and it hadn't occurred to me.

When I repurposed a set of old drives... I ran the burn-in procedure over the drives... You should too ;)

Those shingled drives are quite cheap at £201... the second cheapest are the WD Purples at £280.
That is a big difference when buying 11 drives (an extra grand, which would buy 4 more drives).

The reason for going with RAIDZ2 was exactly to reduce the risk of having a lot of the same drive. I would find it quite unlucky to lose more than two drives at the same time.

Is it that risky to have all these SMR drives?
...Now I'm getting nervous about this.

The only risk with shingled drives is to performance. And if you're okay with that, then great.

So far these drives have been great, though. I had 2 in a consumer-grade NAS and when I pulled them out they were extremely hot. They must have been cooking every day for 2 years before I noticed how hot they got. (This is what pushed me to build my own NAS, where I can guarantee the disks won't overheat.)

You can check HD temps with SMART tools. There are scripts to report the temps and mail you, etc...
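
A hedged sketch of such a script, assuming smartmontools is installed (the device names and the 40 C threshold are placeholders; the mailing part is left out):

```python
# Sketch: poll drive temperatures via smartctl and flag hot drives.
import subprocess

def drive_temp_c(dev: str):
    """Parse the Temperature_Celsius attribute from `smartctl -A` output."""
    out = subprocess.run(["smartctl", "-A", f"/dev/{dev}"],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Temperature_Celsius" in line:
            return int(line.split()[9])    # RAW_VALUE column
    return None

for dev in ["ada0", "ada1"]:               # adjust to your devices
    t = drive_temp_c(dev)
    if t is not None and t > 40:           # placeholder warning threshold
        print(f"WARNING: {dev} is at {t} C")
```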

In an earlier post you mentioned there is no reason to have multiple volumes... I slept on this one, and I can think of a reason to have many small volumes instead of a big one: later expansion.

You can have a single pool made out of multiple vdevs, and you can then expand each vdev individually.

My personal example is a 24-bay NAS with 8-wide RAIDZ2 vdevs. I can grow by adding more vdevs to fill the chassis... and then I can expand by replacing the 8 drives of the smallest vdev at a time. The same could be achieved with 4 vdevs of 6 drives in a single pool, etc.
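
To put numbers on that expansion path, a quick sketch (it ignores ZFS overhead, slop space, and padding; the disk sizes are illustrative):

```python
# Quick sketch of growing a pool vdev-by-vdev (ZFS overhead ignored).

def raidz2_usable_tb(vdevs) -> float:
    """Each RAIDZ2 vdev contributes (disks - 2) x its smallest disk."""
    return sum((len(v) - 2) * min(v) for v in vdevs)

pool = [[8.0] * 8, [2.0] * 8]       # one 8x8TB vdev plus one 8x2TB vdev
print(raidz2_usable_tb(pool))       # 60.0 TB of data space

pool[1] = [8.0] * 8                 # replace only the small vdev's drives
print(raidz2_usable_tb(pool))       # 96.0 TB: the pool grew without touching
                                    # the other vdev
```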

As you know, FreeNAS doesn't support adding or removing disks from a volume (increasing or reducing the number of disks). If I were to build a 20-disk volume with 1 TB disks, when it came time to upgrade I would have to swap all 20 disks before seeing the extra available space. That would make it quite expensive.
By having smaller 4-8 disk pools, increasing the size of the pools can be done gradually.

The same can be done in a single pool by having multiple 6-8 disk vdevs in the pool.

I assume this is not a problem for a big enterprise user, but for a "normal guy", buying 20 disks in one go can be prohibitive.

It would not be a sane configuration to have a 20-disk-wide vdev.

Another reason for having multiple volumes would be power consumption. If you spin down disks when not in use, having data segregated into volumes could prevent excessive spin-ups/spin-downs. Data that hardly ever gets used stays in one volume, data that is frequently used stays in another, and half the disks can spin down.

Well, that is possible. Pretty poor return on complexity investment, in my opinion.
 