How to increase Disk Performance

All,
Looking for assistance on what the performance bottleneck is in this configuration,
and/or
what can be purchased/changed to increase performance.

Why:
We are trying to increase the disk read/write speed. We would like to get to 600 MB/s.


CPU - Intel Xeon E5-2609 v2 Ivy Bridge-EP 2.5 GHz 10MB L3 Cache LGA 2011 80W BX80635E52609V2 Server Processor
Mem - 16 GB (2 x 8) - 240-Pin DDR3 SDRAM ECC Unbuffered DDR3L 1600 (PC3L-12800)
MotherBoard - ASRock EP2C602-4L/D16 SSI EEB Server Motherboard Dual LGA 2011 Intel C602
Disks - WD Red 4TB NAS Hard Disk Drive - 5400 RPM Class, SATA 6 Gb/s, 64 MB Cache, 3.5-Inch (WD40EFRX)
Num of Disks - 6
Raid - raidz2



# Performance Testing
NOTE: I disabled compression before testing.
NOTE: The dd tests were performed locally on the FreeNAS server.


########
# TEST 1
dd if=/dev/zero of=/mnt/LV0/Dataset0/WorshipArts/anthony_smb/test1.dat bs=2048k count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 80.456880 secs (260655397 bytes/sec)

dd of=/dev/zero if=/mnt/LV0/Dataset0/WorshipArts/anthony_smb/test1.dat bs=2048k count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 60.022868 secs (349392171 bytes/sec)


########
# TEST 2
dd if=/dev/zero of=/mnt/LV0/Dataset0/WorshipArts/anthony_smb/test2.dat bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 48.514957 secs (216134582 bytes/sec)

dd of=/dev/zero if=/mnt/LV0/Dataset0/WorshipArts/anthony_smb/test2.dat bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 32.440217 secs (323233347 bytes/sec)
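A note on the read tests above: they direct the data to /dev/zero, which happens to work on FreeBSD, but the conventional throwaway target is /dev/null. A minimal read re-test sketch, reusing the 20 GB TEST 1 file since it is larger than the 16 GB of RAM and so cannot be served entirely from the ARC:

dd if=/mnt/LV0/Dataset0/WorshipArts/anthony_smb/test1.dat of=/dev/null bs=2048k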
 

Chris Moore

Num of Disks - 6
Raid - raidz2
Note: I typed all this without looking at your performance testing.

You are going to need more disks. This is not exact, because there is overhead in the data and a bunch of other factors, but the bandwidth is limited by the speed of an individual disk multiplied by the number of non-parity disks. In your case, with RAID-Z2 and 6 disks, you have 4 non-parity disks. This is a very, very rough number; I almost don't want to put it out there because someone will pick it apart. If you have drives that transfer 130 MB/s and you have 4 of them, you would think that should give you about 520 MB/s of transfer to disk, but that would be wrong because of the overhead I mentioned earlier.
To get the speed you seek, you need an additional vdev, like the one you have now, added to the pool. That would give you 12 drives total in 2 vdevs, with 6 drives in each vdev: 4 drives of parity (2 for each vdev) and 8 drives of data. This is rough because there is also checksum data, which means you don't get the full speed or capacity of the 8 data drives. This setup should net you about 700 MB/s for sequential access, probably closer to 650 MB/s, but it is an estimate based on my observations.
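As a rough worked example of that estimate (the ~130 MB/s per-drive figure is an assumption for illustration, not a measurement of these exact disks):

# 6 disks, 1 RAID-Z2 vdev:    4 data disks x ~130 MB/s ≈ 520 MB/s theoretical
#                             (the dd tests above landed around 260-350 MB/s after overhead)
# 12 disks, 2 RAID-Z2 vdevs:  8 data disks x ~130 MB/s ≈ 1040 MB/s theoretical, ~650-700 MB/s realistic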
The whole discussion changes if you want random IO.
 

Chris Moore

Why:
We are trying to increase the disk read/write speed. We would like to get to 600 MB/s.
Theoretically, if you set up your same six disks as a pool of mirrors, you could get about 520 MB/s, and if you added two more disks (another mirror set) to the pool, it would get you to about 650 MB/s with only 8 drives invested. That would be a whole different pool layout, and it would also give you much better random IO.
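For reference, a pool of mirrors along those lines would look something like the sketch below. The pool name and da0-da7 device names are placeholders, and on FreeNAS you would normally build this through the GUI rather than running zpool by hand.

# three 2-way mirror vdevs (6 disks total)
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5
# later, add a fourth mirror vdev (2 more disks) to grow capacity and throughput
zpool add tank mirror da6 da7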
What is this for?
 

Chris Moore

Disks - WD Red 4TB NAS Hard Disk Drive - 5400 RPM Class, SATA 6 Gb/s, 64 MB Cache, 3.5-Inch (WD40EFRX)
Manufacturer spec for this drive is 150 MB/s, but that is probably an average; I'm not sure. I looked up some test results on www.storagereview.com, and the drive actually delivers between a high of 196 MB/s and a low of 111 MB/s, so your mileage may vary.

It is normal to need more drives when you need more speed. I have a server here at work that is running 60 drives in a big storage pool.
 
Anthony
Chris, thanks for all of your information. The end users are looking to do some photo/video editing from the NAS directly. We know we would need to move to 10 Gb on the network connections, but we want to make sure the disks can get there first. I will review the data you posted earlier today. Thanks.
 

Chris Moore

Chris, thanks for all of your information. The end users are looking to do some photo/video editing from the NAS directly. We know we would need to move to 10 Gb on the network connections, but we want to make sure the disks can get there first. I will review the data you posted earlier today. Thanks.
Here is a link to some information on how to do the math. In practice, it rarely matches the math exactly.
http://wintelguy.com/2013/20130406_disk_perf.html
If they want to be able to "fill the pipe" on a 10 Gb network, they will probably need between 16 and 24 drives, depending on the exact performance characteristics of the drives being used. It is always better to plan for more performance than required, to allow for system overhead and latency.
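As a rough back-of-the-envelope check on that drive count (the ~100 MB/s per-data-drive figure is an assumption, not a measurement):

# 10 GbE line rate ≈ 1250 MB/s before protocol overhead
# at ~100 MB/s of streaming throughput per data drive: roughly 12-13 data drives
# add RAID-Z2 parity (2 per vdev) plus headroom for overhead and latency: roughly 16-24 drives total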
 
Anthony
So adding more memory at this time would not increase the disk speed; it appears we should look at adding more disks instead.

Chris,
Thanks for the information. You gave me lots of data to read through tonight.

Thanks
Anthony
 

Chris Moore

So adding more memory at this time would not increase the disk speed; it appears we should look at adding more disks instead.

Chris,
Thanks for the information. You gave me lots of data to read through tonight.

Thanks
Anthony
The mechanical interface of the disk is the slow point. The more IO you need, especially if you need random IO, the more disks you need.
The organization where I work has a system that uses about 208 spinning disks, in addition to a number of SSDs (I didn't count them), in order to satisfy its requirements. The individual disks are not very high capacity because they didn't need to be, and the quantity of disks is based on the IO requirements. They could have used about 24 high-capacity disks to get as much storage, but it wouldn't have given them the IO, because the number of disks matters.

P.S. Adding RAM can improve performance for certain types of work, because it grows the ZFS read cache (the ARC), but for video editing with large files that mostly stream past the cache, I think it will be less useful.
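If you want to see how much the read cache is actually helping, FreeNAS/FreeBSD exposes the ARC counters through sysctl. A quick, rough look at hits versus misses and the current cache size might look like this (sysctl names are from FreeBSD's ZFS and may vary between versions):

sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses kstat.zfs.misc.arcstats.size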
 

Waco

The disks will almost always be the bottleneck in small configs. Benchmark in various scenarios (streaming read/write, random read/write at various block sizes) and extrapolate from there. For streaming performance you can generally assume 100 MB/s per data drive in somewhat normal workloads. Brand new 10+ TB drives can peak into the mid 200 MB/s range, but fall off to the mid-to-low 100 MB/s range at the end of the LBA range.
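A sketch of what those scenarios might look like with fio, assuming it has been installed as a package (it is not part of stock FreeNAS) and that /mnt/LV0/Dataset0 is a writable dataset on the pool being tested:

# streaming write with 1M blocks
fio --name=seqwrite --rw=write --bs=1m --size=10g --directory=/mnt/LV0/Dataset0
# 4k random reads for 60 seconds against the same test area
fio --name=randread --rw=randread --bs=4k --size=10g --directory=/mnt/LV0/Dataset0 --runtime=60 --time_based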
 
For some real-world testing and information, I generally look to https://calomel.org/zfs_raid_speed_capacity.html. Their testing is done with 7200 RPM drives, so your results will be slower. It can also depend on how much data you already have on the pool; an empty pool will be faster than one that is mostly full.

Your config may vary some from their testing due to multiple factors, but if a comparable configuration is listed, it should give you something very close.
 

Chris Moore

An empty pool will be faster than one that is mostly full.
That is true of hard drives in general and is part of the reason for the guidance to keep the pool less than 80% full.
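An easy way to keep an eye on that is zpool list, which shows how full the pool is and, on recent versions, how fragmented the free space is. The pool name LV0 below is inferred from the dataset paths in the dd tests above:

zpool list -o name,size,allocated,free,capacity,fragmentation LV0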
 

Waco

That is true of hard drives in general and is part of the reason for the guidance to keep the pool less than 80% full.
Keeping a fixed percentage of free space is good for small pools, but if they're large, keeping a few terabytes free is far more economical.
 

Chris Moore

Keeping a fixed percentage of free space is good for small pools, but if they're large, keeping a few terabytes free is far more economical.
ZFS performance tanks when the pool hits about 90% of capacity; the file system's block allocator switches to a slower, more space-conscious strategy as the pool fills up. The reason for the advice is to keep people from hitting the 90% capacity line.
Some of the advice you are giving out does not appear to be valid for FreeNAS and is not consistent with the best-practices guidance that is offered to keep people from having problems with their systems.
 

Waco

ZFS performance tanks when the pool hits about 90% of capacity; the file system's block allocator switches to a slower, more space-conscious strategy as the pool fills up. The reason for the advice is to keep people from hitting the 90% capacity line.
Some of the advice you are giving out does not appear to be valid for FreeNAS and is not consistent with the best-practices guidance that is offered to keep people from having problems with their systems.
ZFS performance drops when nearly full due to fragmentation of free space (which is due to workload more than anything else). A fixed percentage as a general rule is fine, but there are many, many exceptions.

For very large pools throwing away that much space makes very little sense IMO.

Because of the way ZFS allocates blocks, all sectors spanning the drives will get used in an active pool over time anyway.
 

Chris Moore

For very large pools throwing away that much space makes very little sense IMO.
Many of the people asking questions on this forum are not experienced with storage, so the advice given should be geared toward keeping them from having problems.
 

Waco

Many of the people asking questions on this forum are not experienced with storage, so the advice given should be geared toward keeping them from having problems.

Sure, which is why I qualified my statement about free space. Someone inexperienced with storage probably shouldn't be building 50+ TB pools, either, but these days that's super easy with large drives.
 

Chris Moore

shouldn't be building 50+ TB pools, either, but these days that's super easy with large drives
We have people doing just that, using 10 TB drives and putting 6 or 8 of them in a RAID-Z2 pool and not understanding why they are not able to maximize their 10 Gb network connection.
 

Waco

We have people doing just that, using 10 TB drives and putting 6 or 8 of them in a RAID-Z2 pool and not understanding why they are not able to maximize their 10 Gb network connection.

It can be done; it just requires very specific workloads and a smidge of tuning. :)
 