Fill a 6-mirror stripe with /dev/zero

Status
Not open for further replies.

JoeAtWork

Contributor
Joined
Aug 20, 2018
Messages
165
Hi All,

With compression disabled, I am filling the pool to 77% to see how fast a scrub will run with no fragmentation. I figured I would see each disk start at the fast end and get slower until the dd was finished: 6 or so hours to fill it and 2 hours for the scrub. The pool is 12 drives, 3.0 TB SAS, 7200 RPM, 3.5". What I saw instead was a saw-tooth pattern, and I am not sure why:

[Attached image 1545060359502.png: per-disk throughput graph showing the saw-tooth pattern]


This is how I filled the pool:
for i in $(awk 'BEGIN {for(i=0;i<1300;i++) print i}'); do dd if=/dev/zero of=ddfile$i.dat bs=2048k count=5100; done
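For reference, a minimal equivalent sketch of the fill loop using seq, assuming the dataset is detvolaa/noComp (as the shell prompt further down suggests) and that compression really is off; with compression on, the zeros from /dev/zero would compress away and barely consume any space:

# sketch only: same file sizes and count as the original command
zfs set compression=off detvolaa/noComp
for i in $(seq 0 1299); do
    dd if=/dev/zero of=ddfile${i}.dat bs=2048k count=5100
done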

This is the volume in question:

Filesystem Size Used Avail Capacity Mounted on
detvolaa 3.2T 92K 3.2T 0% /mnt/detvolaa

NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
detvolaa 16.3T 12.6T 3.66T - - 0% 77% 1.00x ONLINE /mnt


zpool status -v detvolaa
pool: detvolaa
state: ONLINE
scan: resilvered 14.7G in 0 days 00:02:31 with 0 errors on Mon Dec 17 07:04:04 2018
config:

NAME STATE READ WRITE CKSUM
detvolaa ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
gptid/6c34431d-00c1-11e9-8345-246e968a4118 ONLINE 0 0 0
gptid/6d6b2014-00c1-11e9-8345-246e968a4118 ONLINE 0 0 0
mirror-1 ONLINE 0 0 0
gptid/95d6aa2d-00c1-11e9-8345-246e968a4118 ONLINE 0 0 0
gptid/97c50e2b-00c1-11e9-8345-246e968a4118 ONLINE 0 0 0
mirror-2 ONLINE 0 0 0
gptid/b0e198cd-00c1-11e9-8345-246e968a4118 ONLINE 0 0 0
gptid/b60c38f7-00c1-11e9-8345-246e968a4118 ONLINE 0 0 0
mirror-3 ONLINE 0 0 0
gptid/d051b1c1-00c1-11e9-8345-246e968a4118 ONLINE 0 0 0
gptid/d1dcaec9-00c1-11e9-8345-246e968a4118 ONLINE 0 0 0
mirror-4 ONLINE 0 0 0
gptid/29df61ea-00c2-11e9-8345-246e968a4118 ONLINE 0 0 0
gptid/2b8baab9-00c2-11e9-8345-246e968a4118 ONLINE 0 0 0
mirror-5 ONLINE 0 0 0
gptid/3c57e2f7-00c2-11e9-8345-246e968a4118 ONLINE 0 0 0
gptid/3e0c9d1f-00c2-11e9-8345-246e968a4118 ONLINE 0 0 0

errors: No known data errors
root@freenas[/mnt/detvolaa/noComp]#
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
With compression disabled, I am filling the pool to 77% to see how fast a scrub will run with no fragmentation. I figured I would see each disk start at the fast end and get slower until the dd was finished.
Exactly what command were you running to fill your pool with data, or were you trying to do one drive at a time?

Never mind, I see now.
This is how I filled the pool:
for i in $(awk 'BEGIN {for(i=0;i<1300;i++) print i}'); do dd if=/dev/zero of=ddfile$i.dat bs=2048k count=5100; done
Still, I am confused about what you are trying to accomplish, and I wonder why you are doing it.
 

JoeAtWork

Contributor
Joined
Aug 20, 2018
Messages
165
Chris,

I wanted to see how long it took to fill the zpool and how the disks fared. I did not expect a saw-tooth; I expected a ramp from 148 megabytes/second per disk down to something slower. I was just trying to fill it and then see how long a scrub would take. I assume fragmented scrubs take longer? I am just preparing for a time when scrubs take so long they cannot fit into a maintenance window, i.e. one drive in the pool is having issues but has not 100% puked yet. LOL

I should have my cables soon so I can see whether the SAS drives' dual ports make any difference. I think I would need a 12-drive set of SSDs to actually see the dual ports put to use. I am not sure BSD uses the second SAS port. I wish I had faster or newer SAS drives to test with.

Thanks,
Joe
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I was just trying to fill it and then see how long a scrub would take. I assume fragmented scrubs take longer?
A scrub of the pool walks the "tree" of data nodes, checking the data against the checksums. So part of the speed of a scrub is determined by disk access speed, part by the processing capability of the system, and part by how much data there is to be analyzed. The speed will vary to some degree with fragmentation, but the real determining factors are the amount of data and the pool configuration. I don't think that filling a pool with zeros (/dev/zero) is going to provide a very accurate estimate of how long a scrub takes.

I have several systems that I manage at work. One of them has a pool made up of 62 4 TB drives, with a reported capacity of 232 TB and 160 TB allocated. That pool takes about seven days to complete a scrub. Another pool is made up of 60 6 TB drives, with a reported capacity of 326 TB and 160 TB allocated. It takes about five days to complete a scrub. It is actually the same data, because one system is a backup of the other; the difference in scrub time comes down to the 6 TB disks being faster than the 4 TB disks.
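If the goal is a more realistic estimate, one option (a sketch, not something I have timed here) is to fill with incompressible data from /dev/random instead of /dev/zero, so the result does not depend on the compression setting; the random source will limit throughput, so the fill takes longer:

for i in $(seq 0 1299); do
    dd if=/dev/random of=randfile${i}.dat bs=2048k count=5100   # randfile name is just a placeholder
done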

There are a couple of possible explanations for the saw-tooth behavior. The filling and flushing of the write cache is the most likely.
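One way to see whether that is what is happening (just a suggestion) is to watch per-vdev throughput at one-second intervals while the fill is running; bursts of writes followed by quiet periods would point at cached writes being flushed in batches:

zpool iostat -v detvolaa 1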

What are you trying to find out with this exercise?
 

JoeAtWork

Contributor
Joined
Aug 20, 2018
Messages
165
I am trying to do some sanity testing so that when this is in production and scrubs go from finishing daily to taking 3 weeks, I will know it is abnormal. I am going to recommend some kind of reporting via an RRD chart so we can see the history of scrub durations.
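As a starting point, here is a rough, hypothetical sketch of a script that could run after each scheduled scrub and append the reported duration to a CSV file, which an RRD/graphing tool could then consume; the pool name, log path, and the assumed format of the "scan:" line are all mine:

#!/bin/sh
# Hypothetical helper: record the duration of the last completed scrub.
# Assumes the scan line looks like:
#   scan: scrub repaired 0 in 0 days 05:54:49 with 0 errors on ...
POOL="detvolaa"
LOG="/mnt/${POOL}/scrub_history.csv"

SCAN=$(zpool status "$POOL" | grep 'scan:')
case "$SCAN" in
  *"scrub repaired"*)
    # keep only the "D days HH:MM:SS" part between " in " and " with "
    DURATION=$(echo "$SCAN" | sed -e 's/.* in //' -e 's/ with .*//')
    echo "$(date '+%Y-%m-%d %H:%M:%S'),${DURATION}" >> "$LOG"
    ;;
  *)
    echo "No completed scrub found for ${POOL}" >&2
    ;;
esac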
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I am trying to do some sanity testing so that when this is in production and scrubs go from finishing daily to taking 3 weeks, I will know it is abnormal. I am going to recommend some kind of reporting via an RRD chart so we can see the history of scrub durations.
I think it is a standard feature, but I can't recall; I may have set it up a long time ago and forgotten. My system at home sends me an email when a scrub starts and when it finishes. It has one pool that finishes in about four hours and another that finishes in about nine hours. Small data, fast scrub. It would be nice to have better reporting of scrub info through the GUI. It would also be nice to be able to pause the scrub during the work week and have it pick back up where it left off when the next weekend comes, or run only in the evenings, after hours.
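For what it is worth, newer OpenZFS releases do support pausing a scrub; I am not sure whether the FreeNAS version here exposes it, so treat this as a sketch:

zpool scrub -p detvolaa    # pause the running scrub (needs scrub-pause support)
zpool scrub detvolaa       # re-issuing the scrub later resumes where it paused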
 

JoeAtWork

Contributor
Joined
Aug 20, 2018
Messages
165
Yep, I get the emails as well, but a graph would be better: at a quick glance we could see that the previous day's scrub took x minutes and that today, with only a small amount of data added, it took x minutes longer. I hate this spinning rust. LOL
 

RegularJoe

Patron
Joined
Aug 19, 2013
Messages
330
12.7 TB of data from /dev/random, and a scrub takes me about 6 hours with no fragmentation.

root@freenas[~]# zpool status -v detvolaa
pool: detvolaa
state: ONLINE
scan: scrub repaired 0 in 0 days 05:54:49 with 0 errors on Tue Dec 18 11:12:15 2018
config:

NAME STATE READ WRITE CKSUM
detvolaa ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
gptid/6c34431d-00c1-11e9-8345-246e968a4118 ONLINE 0 0 0
gptid/6d6b2014-00c1-11e9-8345-246e968a4118 ONLINE 0 0 0
mirror-1 ONLINE 0 0 0
gptid/95d6aa2d-00c1-11e9-8345-246e968a4118 ONLINE 0 0 0
gptid/97c50e2b-00c1-11e9-8345-246e968a4118 ONLINE 0 0 0
mirror-2 ONLINE 0 0 0
gptid/b0e198cd-00c1-11e9-8345-246e968a4118 ONLINE 0 0 0
gptid/b60c38f7-00c1-11e9-8345-246e968a4118 ONLINE 0 0 0
mirror-3 ONLINE 0 0 0
gptid/d051b1c1-00c1-11e9-8345-246e968a4118 ONLINE 0 0 0
gptid/d1dcaec9-00c1-11e9-8345-246e968a4118 ONLINE 0 0 0
mirror-4 ONLINE 0 0 0
gptid/29df61ea-00c2-11e9-8345-246e968a4118 ONLINE 0 0 0
gptid/2b8baab9-00c2-11e9-8345-246e968a4118 ONLINE 0 0 0
mirror-5 ONLINE 0 0 0
gptid/3c57e2f7-00c2-11e9-8345-246e968a4118 ONLINE 0 0 0
gptid/3e0c9d1f-00c2-11e9-8345-246e968a4118 ONLINE 0 0 0

errors: No known data errors
 