The Truth about striped vdevs?

Status
Not open for further replies.
Joined
Nov 11, 2014
Messages
1,174
I read the manuals and the guides from the forum, and just when I thought I understood vdevs, pools, etc., I read Constantin's blog, which put me back into a state of confusion:

"The more vdevs you stripe together, the faster your pool becomes in terms of aggregate bandwidth and aggregate IOPS, for both reads and writes.
Notice the caveat involved in the little word "aggregate": Your single little app waiting for its single IO to finish won't see a shorter wait time if your pool has many vdevs, because it'll get assigned only one of them."


Is this true? Is vdev striping exactly like RAID0, or is it a different kind of striping? If I READ one large file from a pool of 3 mirrors striped across 3 vdevs, does that mean I will not be able to read that single file 3 times faster, because it will not come from all 3 vdevs? Can somebody demystify "striped vdevs" in a real-world situation?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
It is NOT exactly like a RAID0. Each vdev has its own transactions, so what is said is correct.

I'm sorry, but I won't go into the real-world situation because it's pretty complex. Suffice it to say, putting data on vdevs is very dynamic and not clear-cut like a RAID0.
 
Joined
Nov 11, 2014
Messages
1,174
So to simplify it: will a single large file read 3 times faster from a pool with 3 vdevs (3 mirrors)?


P.S. Is there a place I can learn more about this? I want to make sure I understand which pool layout will best serve my needs before moving a lot of data to it.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
So to simplify it: will a single large file read 3 times faster from a pool with 3 vdevs (3 mirrors)?


P.S. Is there a place I can learn more about this? I want to make sure I understand which pool layout will best serve my needs before moving a lot of data to it.

To learn more, read the ZFS code. That's literally the best place.

As for large files, ZFS preferentially chooses where to write data based on how full a vdev is, how busy a vdev is at the moment that data needs to be written, etc. It's not a static thing where you can determine where data is going to be written. It's determined at the moment you have to write data.
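To picture how dynamic that placement is, here is a toy Python sketch; this is my own illustration, not ZFS's actual allocator, and the "free"/"busy" weighting is a made-up stand-in for the fullness and busyness factors described above. Each incoming block independently picks a vdev, whereas RAID0 would always put block i on disk i mod n:

```python
import random

# Toy model (NOT ZFS's real allocator): each vdev is weighted by how
# empty and how idle it is, and every block independently picks a vdev.
# Contrast with RAID0, where block i deterministically lands on disk i % n.
vdevs = [
    {"name": "mirror-0", "free_frac": 0.9, "busy_frac": 0.1},
    {"name": "mirror-1", "free_frac": 0.5, "busy_frac": 0.3},
    {"name": "mirror-2", "free_frac": 0.2, "busy_frac": 0.8},
]

def pick_vdev(vdevs):
    # Hypothetical weighting: prefer emptier, idler vdevs.
    weights = [v["free_frac"] * (1.0 - v["busy_frac"]) for v in vdevs]
    return random.choices(vdevs, weights=weights)[0]["name"]

random.seed(0)
placement = [pick_vdev(vdevs) for _ in range(1000)]
for v in vdevs:
    # The emptiest, idlest vdev soaks up most of the new writes.
    print(v["name"], placement.count(v["name"]))
```

The point of the toy: where a given block lands depends on the pool's state at the moment of the write, so two identical copies of a file can end up laid out quite differently.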

Keep in mind that if you are running Gb LAN, you are virtually guaranteed to be bottlenecked in throughput at the Gb LAN card. A single hard drive can nearly saturate Gb LAN, so having 3 vdevs means the pool *might* be able to read it 3x faster than your Gb LAN (or maybe even faster than that), but it still has to push the data out the "tiny" Gb LAN card.
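For perspective, the arithmetic behind that bottleneck, with rounded, illustrative numbers (the 150 MB/s drive figure is a rough assumption, not a measurement):

```python
# Back-of-the-envelope throughput, illustrative numbers only.
GBE_MBPS = 1000 / 8   # 1 Gb/s link is at most ~125 MB/s of payload
HDD_MBPS = 150        # rough sequential read rate of one modern HDD
N_VDEVS = 3

pool_read = N_VDEVS * HDD_MBPS        # best-case aggregate sequential read
client_sees = min(pool_read, GBE_MBPS)  # the NIC caps what the client gets

print(f"pool can stream ~{pool_read} MB/s, client sees ~{client_sees:.0f} MB/s")
```

Even in the best case, the pool's extra bandwidth is invisible to a single Gb LAN client; it only pays off locally or over a faster link.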

So keep things in perspective. Push the I believe button, and let life go on.
 
Joined
Nov 11, 2014
Messages
1,174
I am testing the speed by copying files from one pool to another with the shell command "cp -iprv /mnt/pool1/data /mnt/pool2/data". A Gb LAN can very easily become a 10Gb LAN when I add a card to it, which will happen sooner or later, but a slow pool will remain a slow pool. I am trying to get ready for 10Gb LAN, or to move files locally from one pool to another, which is still a fast option.
I will probably make one pool with 8 drives in RAIDZ2, similar to what you did with 10.
Can you think of any reason not to use 5900 RPM drives instead of 7200 RPM for movies, music, and backup storage?
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
For a high-density NAS I think going with 5400 RPM drives always makes sense. Less heat and less power draw. The performance difference won't be noticeable for the most part.
 
Joined
Nov 11, 2014
Messages
1,174
For a high-density NAS I think going with 5400 RPM drives always makes sense. Less heat and less power draw. The performance difference won't be noticeable for the most part.

That is my reason. The HGST CoolSpin is actually 5900 RPM, but that's the idea: lower power consumption, runs 6-7 degrees Celsius cooler, and lasts longer. Unless, of course, I am missing something, which is why I ask you guys for opinions.

Otherwise HGST NAS is the next choice.
 
L

Guest
What happens inside of ZFS is that a transaction group is opened and all the I/Os going to a pool are accumulated into its queue. If you have one little file going to the drives, it can sometimes feel like waiting for a bus to come. If you have a lot of I/Os to move to disk, they are combined, and ZFS decides how to send all of them to all the vdevs.

Most filesystems behave serially: they send one block to one disk location and then work on the next one. Most do this because they don't know the underlying disk structure and don't have file knowledge, so they need to ensure that requests to disk don't happen out of order. ZFS does transaction processing, so it has both file and disk knowledge and can flush many I/Os to disk at the same time.

The best way to learn about this is to Google and read anything by Adam Leventhal or Matt Ahrens. There are also some excellent older YouTube videos by George Wilson (he has a fantastic one explaining vdevs), who is my favorite explainer of all things ZFS. There are also a lot of older docs from Jeff Bonwick and Bill Moore.
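That transaction-group batching can be sketched in a toy Python model. This is my own illustration, not ZFS internals: the class name, timings, and the round-robin spread at sync time are all simplifications (the real allocator chooses vdevs dynamically, as discussed earlier in the thread). The point it shows is that writes accumulate in an open group and then hit all vdevs in one flush:

```python
from collections import defaultdict

# Toy sketch of a transaction group (txg): writes buffer in an open txg,
# then one sync flushes the whole batch across the vdevs, instead of
# issuing each write serially the moment it arrives.
class Txg:
    def __init__(self):
        self.pending = []

    def write(self, block_id):
        self.pending.append(block_id)   # buffered in RAM, not yet on disk

    def sync(self, vdevs):
        # One flush distributes the accumulated batch across vdevs.
        # Round-robin here is only for illustration; real placement is dynamic.
        out = defaultdict(list)
        for i, block in enumerate(self.pending):
            out[vdevs[i % len(vdevs)]].append(block)
        self.pending = []
        return dict(out)

txg = Txg()
for b in range(9):
    txg.write(b)                        # nine small writes pile up
flushed = txg.sync(["mirror-0", "mirror-1", "mirror-2"])
print(flushed)                          # all nine blocks leave in one batch
```

This is why a lone small write can feel like "waiting for a bus": it sits in the open group until the next sync, while a heavy stream of writes gets batched and spread wide.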
 
Joined
Nov 11, 2014
Messages
1,174
What happens inside of ZFS is that a transaction group is opened and all the I/Os going to a pool are accumulated into its queue. If you have one little file going to the drives, it can sometimes feel like waiting for a bus to come. If you have a lot of I/Os to move to disk, they are combined, and ZFS decides how to send all of them to all the vdevs.

Most filesystems behave serially: they send one block to one disk location and then work on the next one. Most do this because they don't know the underlying disk structure and don't have file knowledge, so they need to ensure that requests to disk don't happen out of order. ZFS does transaction processing, so it has both file and disk knowledge and can flush many I/Os to disk at the same time.

The best way to learn about this is to Google and read anything by Adam Leventhal or Matt Ahrens. There are also some excellent older YouTube videos by George Wilson (he has a fantastic one explaining vdevs), who is my favorite explainer of all things ZFS. There are also a lot of older docs from Jeff Bonwick and Bill Moore.

Thank you very much Linda , I'll search for it.
 