FreeNAS extremely slow


Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Is it that risky to have all these SMR drives?
...now I'm getting nervous about this.
I've not researched the matter thoroughly, so don't take my cautions as law in this regard.
A few searches yield a couple of threads that might be worth reading:
https://forums.freenas.org/index.php?search/3548170/&q=SMR&o=date
https://forums.freenas.org/index.php?search/3548172/&q=shingle&o=date
https://forums.freenas.org/index.php?search/3548173/&q=seagate+archive&o=date
https://forums.freenas.org/index.php?search/3548174/&q=ST8000AS0002&o=date

I am also prospecting 8TB drives, slowly.
I remember reading somewhere a while back that these were not ideal for ZFS use. I.e., that was the conclusion I took away from the reading at the time. A lot of that speculation could have been confirmed or dispelled since then.

Some food for thought, on at least getting ANY other drive than the SMR's - if that would be appealing.
Other alternatives cheaper than WD Reds include the Toshiba X300 HDWF180EZSTA (3000SEK), the Seagate ST8000VN0002 (256MB cache, NAS rated) (3000SEK), the Seagate Desktop *shrug* ST8000DM002 (3200SEK), and the Seagate Archive drives (2500SEK).
At 20% cheaper than the Seagate Archive drive comes Intenso's OEM USB3 external drive, which has recently been documented to contain the Seagate Desktop drive above (2000SEK).
In my market, these drives are all cheaper than the WD Reds at 3300SEK.

I.e., the idea of getting different drives stems from a philosophy whose extreme version would argue you should never buy more than one drive at a time, each of a different kind. Then there are less purist versions arguing to get 'a drive once in a while' (kind of like you are doing) to avoid batch-related issues (since drives operating together see very similar wear).
Since SMR is somewhat uncharted territory over the long term, it keeps me cautious.

In an earlier post you mentioned there is no reason to have multiple volumes... I slept on this one, and I can think of a reason to have many small volumes instead of a big one: later expansion.
As you know, FreeNAS doesn't support adding or removing disks from a volume (increasing or reducing the number of disks). If I were to build a 20-disk volume with 1TB disks, when it came time to upgrade I would have to swap all 20 disks before seeing the extra available space. That would make it quite expensive.
By having smaller 4-8 disk pools, increasing the size of the pools can be done gradually.
I assume this is not a problem for a big enterprise user, but for a "normal guy", buying 20 disks in one go can be prohibitive.
I sense you're confusing the terminology of "vdev" and "pool". *waves frantically with the newbie guide by cyberjock*
A pool consists of at least one vdev, but can contain any number of vdevs. A set of 11 drives configured in RAIDZ2 is referred to as a vdev.
Later expansion can be achieved by sizing the vdev, or ZFS stripe width, to fit your needs (see the readings in my signature for an inspirational blog post on the topic).
For example, you could choose a 7-drive-wide RAIDZ2 as the single vdev of your big pool, then expand by adding another vdev of 7 drives... also in RAIDZ2. Boom.
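A minimal sketch of that from the console (device names are placeholders; extending a volume in the FreeNAS GUI's Volume Manager does the equivalent):

# create the pool with one 7-wide RAIDZ2 vdev
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6
# later, grow the pool by striping in a second 7-wide RAIDZ2 vdev
zpool add tank raidz2 da7 da8 da9 da10 da11 da12 da13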

I slept on the matter too, and can offer a couple of other reasons, more or less valid, to consider multiple pools (there might be others):
- When you need both high IOPS for VMs (mirrors; low space-utilization efficiency) AND storage where speed matters less (RAIDZ3 for larger arrays). This motivates two pools.
- When you have a significant "download/temp/scratch" chunk of data with such high turnover that there are arguments for avoiding further fragmentation of the main pool, and for minimizing wear on the main pool's drives. This setup could be anything from RAIDZ1 to mirrors. I've tried both.
~ This last point is somewhat of a dodgy argument, and I'm not confident to what extent it applies. The scenario is a significant performance discrepancy between two vdevs joined in a single pool. The vdevs will be "striped" for new incoming data, so performance should still increase. The advice that the slowest drive determines the speed of a set of drives applies within a vdev, but not across multiple vdevs in the same pool, as far as I am informed; I'm not 100% confident on this. In the same vein, and not necessarily a problem depending on circumstances, there are drives of different sizes: if a 7-drive RAIDZ2 of 500GB drives is joined in a pool by a 7-drive RAIDZ2 of 8TB drives, the difference in size will not allow for perfect striping, so the anticipated theoretical doubling of performance from adding a second vdev is lost. The same applies when adding a 2nd vdev to a pool that already contains a lot of data. Due to CoW, the data will be somewhat rebalanced once edited, but naturally there is no "defragmentation" or "balancing" built into ZFS for users to actively pursue. This motivates doing upgrades before pools are filled to the brim.
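If you're curious how evenly data is landing across vdevs, there's a quick check from the console (assuming a pool named tank):

# show capacity and allocation per vdev
zpool list -v tank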

(Also, if you'd stop at ~7 drives in RAIDZ2 for the 8TBs in the first vdev, you could add the second vdev later on, using different drives, rather than detonating your budget to get all the way up to a stripe width of 11...)

Cheers, Dice
 

MR. T.

Explorer
Joined
Jan 17, 2016
Messages
59
When I repurposed a set of old drives... I ran the burn-in procedure over the drives... You should too ;)

I didn't know that burning in older disks was a "thing". I am definitely going to do this.

You can check HD temps with the SMART tools. There are scripts that report the temps, mail you, etc...

I was referring to a consumer-grade NAS. The reason I moved to FreeNAS was the control and scale.
In that crappy NAS there is no warning of anything... not even that the disks are extremely hot.
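(For reference, I gather the basic console check is something like this, assuming the disk is ada0:)

# drive temperature and overall SMART health
smartctl -a /dev/ada0 | grep -i temperature
smartctl -H /dev/ada0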

You can have a single pool made out of multiple vdevs. You can then expand each vdev individually.

My personal example is a 24-bay NAS with 8-way RAIDZ2 vdevs. I can grow by adding more vdevs until the chassis is full... and then I can expand by replacing the 8 drives of the smallest vdev, one vdev at a time. The same could be achieved with 4 vdevs of 6 drives in a single pool, etc.
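(Roughly, that replace-to-grow step looks like this from the console; pool and device names are placeholders:)

# let a vdev grow once every member has been replaced
zpool set autoexpand=on tank
# swap each small drive for a bigger one, one at a time
zpool replace tank da0 da24
zpool status tank   # wait for the resilver to finish before the next swap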

I'll look into this... having multiple vdevs seems to add complexity, but FreeNAS might make it work easily.

Well, that is possible. Pretty poor return on complexity investment, in my opinion.
In my specific use case, having the 11 disks of the cold storage spin down might come back as a few watts of energy saved.

I need to save as much energy as possible to feed the over 1000W consumed by the server. Or start buying stock in my power company and change my name to BP, Shell, Mobil.
:)
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I need to save as much energy as possible to feed the over 1000W consumed by the server.
If your NAS is using 1kW idling, you really have a problem and it's not hard drives.
 

MR. T.

Explorer
Joined
Jan 17, 2016
Messages
59
At 20% cheaper than the Seagate Archive drive comes Intenso's OEM USB3 external drive, which has recently been documented to contain the Seagate Desktop drive above (2000SEK).


To start off... is this an 8TB? Where can I find one? I googled it but could only find a 1TB one :(

About the rest: mind blown!

I see that I didn't have the basics of pool - volume - vdev well settled. It had never occurred to me to have more than a 1-to-1 relationship between them.

I am less than thrilled about the large amount of space "wasted" on all kinds of stuff by FreeNAS... 2GB per disk for swap (which I already removed), and a big discrepancy between the volume and the vdev/pool wasting about 50GB in my small pool, not even counting the parity space "loss".

In my small volume I have 4 disks of 500GB in raidz1... 2GB per disk for swap, a "loss" of 50GB in the volume-dataset difference, and a full disk for parity is a lot of space gone.
Instead of 2TB I end up with something like 1.2 or 1.3TB.
Not to mention that it needs to stay under 80% full to keep maximum performance.

While I don't have the big pool yet, I am struggling for space, so every little bit matters.

Wouldn't having multiple vdevs just increase the number of disks "lost" to parity?

For example, you could choose a 7-drive-wide RAIDZ2 as the single vdev of your big pool, then expand by adding another vdev of 7 drives... also in RAIDZ2. Boom.
So... in this case wouldn't I end up with 14 data disks and 4 parity disks, instead of 16 disks of useful data and 2 parity?
 

MR. T.

Explorer
Joined
Jan 17, 2016
Messages
59
If your NAS is using 1kW idling, you really have a problem and it's not hard drives.

NOOOooooo... that would be insane... I think...

It's the server sitting beside the NAS that is the cause of all my energy consumption woes.
An old HP server with 4 CPUs... probably from when people thought that more clock speed was worth turning on an extra nuclear power plant or two.

The NAS is running an Intel Avoton... TDP of 12W, I believe.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
The Intenso was apparently on a short-term deal and was bumped in price just a few days ago; it is now priced at around the same as the Archive drive. That puts them out the window in my book (since all sorts of warranty are obviously voided when disassembling an external drive and repurposing it in FreeNAS).

You really need to get off your arse and read the introduction documents. Clearly there is a lot of confusion that may prohibit a solid understanding of the grounds on which the community's advice rests. This is pretty crucial now that you're about to get into a huge pool and must be aware of the risks. At this point your statements indicate you are not aware. What stripe width are you considering? I'm getting nervous you're way off here.

You also need to abandon the mindset of FreeNAS "eating space that should belong to you".
It leads to no good. I promise you that it will go away as soon as you're no longer struggling to find a place to put your data - i.e., no longer worrying about which pool to put it on, having swallowed the economic hit of getting enough drives to give you the needed space.
Quitting looking at the TB number on the sticker on top of the drives is key. Forget about "NTFS referencing the storage"; you know by now it does not apply at all. Obviously, price per TB of usable space is horrible compared to an NTFS box with scattered drives. That's the price paid for an exceptional system. Bidule0hm's calculator (see my sig) should guide all "expectations" regarding storage space.
Once that is achieved, you'll end up in a situation where you can be fundamentally happy about the security of your data.

The correct mindset is to think of FreeNAS as a mastermind that needs to be fed cookies. It eats more than it gives back. Give it enough cookies (resources) until it performs to your expectations. Profit from unprecedented data security.

It just takes a while to accept that point.
 

MR. T.

Explorer
Joined
Jan 17, 2016
Messages
59

I was reading through a bunch of these... and even cyberjock, who dislikes Seagate, doesn't find them terrible.
It seems that people think they are not "fast enough"... I would agree, if it weren't for the fact that these are Seagate Archive disks... which I am going to use to archive stuff.
In this situation speed is less important and space is king.

According to this:
http://www.zdnet.com/article/google...drives-like-this-even-if-they-lose-more-data/

Google also thinks we need more space and should break away from the "standard" HDD size.
Larger disks have fewer IOPS per TB, and that is a problem for some people and some uses, but there is no performance drop from simply stacking more platters and making disks chunkier.

It's also worth noting that speed is not paramount in all use cases... Amazon's Glacier service is backed by warehouses full of magnetic tapes and robotic arms that physically fetch tapes and put them on the reader/writer.
Not only are magnetic tapes slow, the fact that you can only have one tape running at a time is also slow... and they seem to be making money with that.


Then there are less purist versions arguing to get 'a drive once in a while' (kind of like you are doing) to avoid batch-related issues (since drives operating together see very similar wear).

I had never thought of this... I just bought disks sparingly because I'm not made of money. I never thought about batch issues.
Bah... something new to worry about.
Especially because I just bought 3 disks in one go. :(

I'll just burn them in and hope for the best.
 

MR. T.

Explorer
Joined
Jan 17, 2016
Messages
59
I promise you that it will go away as soon as you're no longer struggling to find a place to put your data.

I know... that's why I can't wait to get the pool to max size. With about 50TB of space I won't be struggling so much.


What stripe width are you considering? I'm getting nervous you're way off here.

I went for a very large stripe width: 11 disks (9 of which are for data).
That was because of the above mentality, and because when I started doing this I had absolutely no money.
In hindsight, with my current financial capacity, I might have gone with a 9 disk pool (7 data).

It won't be a huge problem, as the big pool is to store stuff that will hardly ever be used.
For everyday use I am happier with 4 disk raidz1 vdevs or 6 disk raidz2 vdevs (depending on how important the data in there is).
 

toadman

Guru
Joined
Jun 4, 2013
Messages
619
I mentioned that based on the idea that the volume would just be as fast as the single slowest disk.
That wouldn't make any sense.

It would have to either break the data down and use all disks in parallel, with the slow disk as the bottleneck, or write all the data to a single disk.

Example (for a volume with 9 data disks + 2 parity):
Scenario 1:
A 9MB chunk of data is broken into 9 blocks and written or read in parallel, one block per data disk (the slow disk being the bottleneck, but never able to set the pace alone).
In this case the read/write speed would be the time the slowest disk takes to read/write the 1/9 of the data assigned to it (I believe this is how ZFS is implemented).

Scenario 2:
Data is written to a single disk: all 9MB would be written to one disk, and the parity data to the two parity disks.
In this case the read/write speed would be dictated by the speed of the disk where the data is stored. If it happens to be the slow disk, it would set the pace for the entire volume, but only when reading/writing that disk.

In both cases the slow disk would slow down the volume, but the volume speed would never be as slow as the slowest drive.

You need to think about IOPS and bandwidth separately. In a 9-disk RAIDZ the bandwidth may potentially be 8x (N-1) that of a single disk, clearly. But the IOPS will be the same as a single disk: the system has to wait for the slow disk to complete the current I/O before starting the next one. This is why one slow disk kills vdev performance.

And while the bandwidth advantage helps, the actual write/read time is usually dwarfed by the seek time of the disk(s). Unfortunately, in practice, the cases where performance matters most (mail servers, file servers, iSCSI storage for virtualization, database servers) also happen to be the cases that care a lot about IOPS performance, not so much bandwidth performance.

There is no way around this that I can see; I'm not sure how else they would have designed it. But this is why people use mirrors when they want more performance. With mirrors, on writes you are no better off - still limited to a single disk's IOPS per vdev - but on reads you get Nx the IOPS, and potentially the bandwidth, which helps on random reads.
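As a rough worked example (assuming generic 7200rpm disks at ~100 IOPS and ~150MB/s each - numbers not from this thread): a 9-disk RAIDZ1 vdev gives roughly 8 x 150 = 1200MB/s of sequential bandwidth but still only ~100 random IOPS, while a pool of four 2-way mirror vdevs gives about 4 x 100 = 400 write IOPS and up to 8 x 100 = 800 read IOPS, at half the space efficiency.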
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
I know... that's why I can't wait to get the pool to max size. With about 50TB of space I won't be struggling so much.
...you mean, once your chosen vdev width with acceptable trade-offs is completed? ;)
I went for a very large stripe width: 11 disks (9 of which are for data).
That was because of the above mentality, and because when I started doing this I had absolutely no money.
In hindsight, with my current financial capacity, I might have gone with a 9 disk pool (7 data).
So, to hammer in the proper use of terminology: this is what is implied when saying "11-drive RAIDZ2".

Are you dead set on this number?
It is quite a lot outside the norm on the forums. For reasons.

For everyday use I am happier with 4 disk raidz1 vdevs or 6 disk raidz2 vdevs (depending on how important the data in there is).


I'm fairly tempted to give you another important piece of the puzzle, in terms of how to handle future upgrades.
It would become clear to you if you put a little effort into the reading. Fairly tempted.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Worth pointing out that TB != TiB.

Drives are labeled in TB, and FreeNAS reports space/capacity in TiB.

Also, 2GB of swap is 0.025% of an 8TB drive and is irrelevant.

If you want to use 11 8TB drives, it sounds like RAIDZ3 is the right approach. Circa 64TB, or add one more drive for 72TB.
 

MR. T.

Explorer
Joined
Jan 17, 2016
Messages
59
...you mean, once your chosen vdev width with acceptable trade-offs is completed? ;)
If you want to use 11 8TB drives, it sounds like RAIDZ3 is the right approach. Circa 64TB, or add one more drive for 72TB.

Thanks for your input. This pool is for long-term storage; being stupidly wide is of no concern.
I went through the forums and read why a too-wide vdev is not a good idea before settling on this number.
The biggest concern is a drive failing during a very long resilver, but these drives won't see much action, if any. The odds of it going wrong are very low.
And it still beats the current situation of loose drives scattered about.

The "small" volume is for everyday use and is quite small and practical. That is the one likely to give me trouble, as I expect it to be used every day.


Unfortunately, in practice, the cases where performance matters most (mail servers, file servers, iSCSI storage for virtualization, database servers) also happen to be the cases that care a lot about IOPS performance, not so much bandwidth performance.

When I started this thing I was just looking for a large bunch of disks for long-term storage... with bitrot protection and raidz1 as an added bonus.
I ended up going with raidz2 just because of the risk of something going wrong on such a wide vdev.

Since I had a few disks lying around, I built the small volume for everyday use.

I never thought of using it as anything else... till now.
After I replace the disks of the big pool... I will end up with a bunch of empty disks... and I do have a bunch of disks on a VMware server... an iSCSI target for it would be great.
I'll have to think about it and read up on how other people designed theirs.
I don't have DBs or anything IO intensive, so... this might be a good option, but I'll have to enable port trunking first (I'm scared of doing it because I have no faith in my switch and at the moment don't have the money to buy a new one).

...and I don't know if I have enough free SATA ports for anything else.


I'm fairly tempted to give you another important piece of the puzzle, in terms of how to handle future upgrades.
It would become clear to you if you put a little effort into the reading. Fairly tempted.

Please do. I might as well learn a bit more while i'm at it.
 

MR. T.

Explorer
Joined
Jan 17, 2016
Messages
59
Ok... over the weekend I sat down and started to follow all the advice given to me in this thread.
I have set up the SMART tests that weren't running and made the scrubs run biweekly instead of monthly.

I then looked into the burn-in tests... and that's where I got stuck.
The people who talk about all this assume users know what a dd command is.
I can work out what the commands do... but I have no idea how long to run the read and write commands for.
The tutorial by jgreco doesn't go into that, other than saying "it can take up to a month".

That is almost like ISPs advertising internet... up to 50Mbps... so you can't complain if you only get 500kbps.
:)
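(From what I can work out, a single burn-in pass is something like this - assuming the disk under test is ada0 and holds no data you care about:)

# write pass over the whole disk, then a read pass back
dd if=/dev/zero of=/dev/ada0 bs=1m
dd if=/dev/ada0 of=/dev/null bs=1m
# each dd exits when it reaches the end of the disk, so one pass
# takes as long as the disk's full capacity at sequential speed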

I have also noted that the size difference between the volume and the dataset is substantial (volume 10TB, dataset 8TB).
I assume this is because FreeNAS needs to store metadata about the stuff being stored, but 2TB in a 10TB volume is 20%. That feels like too much, especially because it's hardly using any of that extra space.
Is there a way to force FreeNAS to use that space a bit more intelligently?

I'm sorry about all these noobie questions, but when I have to start using the console is when I start to worry.
I used DOS back in the day, but I am a Windows guy down to the bone. I simply dislike having to know all the commands.
My memory isn't what it used to be, and memorising things is something I will avoid if I can. Old age is not kind to the brain.
(yes, that is my excuse to prevent a flame war with all the purists who think the console is the only way of doing things)
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
The volume is reported in TiB and is the raw storage, ie data+parity. The dataset is just the data capacity, ie less parity.

Your disks are sold in TB, not TiB.

ie 8 x 1TB disks = 8TB = 7.27TiB raw. In RAIDZ2 that gives 6TB of data capacity, which is 5.45TiB, less a bit for overhead.

At the tera scale, you 'lose' just over 9% converting from TB to TiB. Truth is, of course, you don't lose anything... you never had it to start with.

What parity level are you running? Is that the other 10% you're missing? I use 25% parity, ie 8-way RAIDZ2.
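(As a rough check for an 11-wide RAIDZ2 of 8TB drives: 9 data disks x 8TB = 72TB, and 72x10^12 bytes / 2^40 bytes per TiB ≈ 65.5TiB of data capacity, before ZFS overhead.)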
 

MR. T.

Explorer
Joined
Jan 17, 2016
Messages
59
Hi.

What parity level are you running? Is that the other 10% you're missing? I use 25% parity, ie 8-way RAIDZ2.

I am running raidz2 on 11 disks (9 data, 2 parity).
The difference between just data and data+parity would account for that... I think.
I don't have access to the system right now, so I can't be 100% sure.

What I know for sure is that on the small volume (4 disks of 500GB each, raidz1) I left a torrent running till it ran out of space on the dataset, and there is still plenty left on the volume.

While I'm at it... I'll ask something tangential:
I was advised to have just 1 pool with multiple vdevs instead of multiple pools.
I need to read up on the terminology, as the FreeNAS UI only refers to volumes and datasets, but I will assume for the time being that volume=vdev... and that somehow I can merge them into a resource pool.
The question is:
How should I manage disks of different sizes? I have 8x 8TB disks (it's obvious what to do with those), 1x 3TB, 3x 2TB, 1x 1TB, and about 12x 500GB.

At the moment I have a volume with 4x 500GB disks in raidz1. All the other 500GB disks are scattered, as they were in the volume that contains the 8TB disks (I have slowly been replacing them). I am considering adding them to the existing volume to make it two sets of 4 disks in raidz1.

But the 2TB and 3TB disks... I'm not sure what to do with them. It seems to me that I just don't have enough of them to do anything useful, and I should just remove them and perhaps use them in a consumer NAS.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Hi.



I am running raidz2 on 11 disks (9 data, 2 parity).
The difference between just data and data+parity would account for that... I think.
I don't have access to the system right now, so I can't be 100% sure.

So, 18% parity.

What I know for sure is that on the small volume (4 disks of 500GB each, raidz1) I left a torrent running till it ran out of space on the dataset, and there is still plenty left on the volume.

While I'm at it... I'll ask something tangential:
I was advised to have just 1 pool with multiple vdevs instead of multiple pools.
I need to read up on the terminology, as the FreeNAS UI only refers to volumes and datasets, but I will assume for the time being that volume=vdev... and that somehow I can merge them into a resource pool.

I dislike how the FreeNAS GUI uses different terminology to what is used in ZFS.

You can have zero or more pools. A pool is what FreeNAS calls a volume.

Each pool is made out of one or more vdevs.

Each vdev is made out of one or more partitions (ie drives).

It is vdevs that have redundancy; eg an 8-wide RAIDZ2 vdev has double-disk redundancy.

In the FreeNAS GUI you add a new vdev to a pool (ie volume) by extending it, rather than creating a new pool from the same set of disks.
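(Roughly, as a sketch with made-up names, the hierarchy looks like this:)

pool "tank" (what the FreeNAS GUI calls a volume)
  vdev 1: raidz2 - da0 da1 da2 da3 da4 da5 da6 da7
  vdev 2: raidz2 - da8 da9 da10 da11 da12 da13 da14 da15
(each da* is a drive, or strictly a partition on a drive)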

The question is:
How should I manage disks of different sizes? I have 8x 8TB disks (it's obvious what to do with those), 1x 3TB, 3x 2TB, 1x 1TB, and about 12x 500GB.

If you want to, you can play games with partitions.

For example... say you had a 3TB drive and a 2TB and a 1TB.

You can partition the 3TB drive into a 2TB and 1TB partition... and then make a 2TB mirror and a 1TB mirror.
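(A sketch of that split, assuming the 3TB disk shows up as da0 and is blank, with da1 the 2TB drive and da2 the 1TB drive - all names made up:)

gpart create -s gpt da0              # new GPT partition table on the 3TB disk
gpart add -t freebsd-zfs -s 2T da0   # 2TB partition -> da0p1
gpart add -t freebsd-zfs da0         # remaining ~1TB -> da0p2
zpool create twotb mirror da0p1 da1  # 2TB mirror with the real 2TB drive
zpool create onetb mirror da0p2 da2  # 1TB mirror with the real 1TB drive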

Maybe not so useful.

But perhaps it would make sense to treat your 3 2TB drives as 6 1TB partitions, and combine them with your 1TB drive for a 7-wide RAIDZ2. You could then lose any 2 1TB partitions... or even one whole 2TB drive.

Maybe it would be a good idea to partition your larger drives into 1TB partitions.

At the moment I have a volume with 4x 500GB disks in raidz1. All the other 500GB disks are scattered, as they were in the volume that contains the 8TB disks (I have slowly been replacing them). I am considering adding them to the existing volume to make it two sets of 4 disks in raidz1.

But the 2TB and 3TB disks... I'm not sure what to do with them. It seems to me that I just don't have enough of them to do anything useful, and I should just remove them and perhaps use them in a consumer NAS.

Yes, probably using them for something else is the best thing. But if you did want to use them, the safest way to maximize them would be to partition the 3TB into a 2TB and a 1TB. Then you'd have 4 2TB partitions... and 2 1TB partitions.

You could take it further and split the 1TB partitions into 500GB partitions... and combine those with your 500GB partitions/drives, being careful to make sure that if a whole drive fails it won't take your vdev with it.

BUT this partition game, with multiple partitions on one drive, in the same pool is NOT recommended!

There are serious performance implications, for a start.

If you did make a dodgy vdev up out of partitions, it would probably be wise not to add it to your primary pool... once you add a vdev, you can't remove it.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
BUT this partition game, with multiple partitions on one drive, in the same pool is NOT recommended!
This should be put in bold.

edit: and in red + bigger size.
BUT this partition game, with multiple partitions on one drive, in the same pool is NOT recommended!
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419

BUT this partition game, with multiple partitions on one drive, in the same pool is NOT recommended!

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Well done Stux :)
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
I dislike how the FreeNAS GUI uses different terminology to what is used in ZFS.

You can have zero or more pools. A pool is what FreeNAS calls a volume.

Each pool is made out of one or more vdevs.

Each vdev is made out of one or more partitions (ie drives).

It is vdevs that have redundancy; eg an 8-wide RAIDZ2 vdev has double-disk redundancy.

In the FreeNAS GUI you add a new vdev to a pool (ie volume) by extending it, rather than creating a new pool from the same set of disks.

Maybe this will help too: https://forums.freenas.org/index.php?threads/comprehensive-diagram-of-the-zfs-structure.38865/ ;)
 