GrumpyBear
Contributor
Joined: Jan 28, 2015
Messages: 141
I'm reasonably confident of my hardware now and have some live data on it. Now I'm starting to investigate typical (atypical?) tasks, one of which is dealing with disk failures. While I *think* I understand the process (detach/wipe/replace), I want to try it out now rather than figure it out for the first time with a degraded array in production.
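Roughly, the sequence I have in mind looks like this; this is a sketch only, assuming a pool named tank, with placeholder gptid labels standing in for the real disk identifiers:

# simulate the failure by offlining one member disk
# ("tank" and the gptid labels below are placeholders)
zpool offline tank gptid/aaaa-old-disk
# after wiping/swapping the disk, rebuild onto the replacement
zpool replace tank gptid/aaaa-old-disk gptid/bbbb-new-disk
# watch resilver progress and the estimated completion time
zpool status tank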
I have an 8 x 3TB disk RAIDZ3 configuration and want to simulate a disk failure. While practicing the steps for replacing a disk in a degraded array, I'm interested in how long the resilvering process takes in my specific scenario. The catch is that while I only have 2.75TB on the array currently, that number will go up over time, so I'm interested in testing the resilver speed with more content on the array and would welcome suggestions on how to simulate this.
Googling found these results, which some members have pointed out offer only an idea of the best-case performance for the specific build.
What I propose to do is use dd with if=/dev/urandom rather than /dev/zero (zeros would just compress away if compression is enabled) to create additional files that load my array up to about 50% utilization (6TB), and then note how long resilvering takes while I replace a single disk.
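Each file would be a single dd invocation, something like the following; the path and file name are made up for illustration:

# ~17.5GB of incompressible data for one simulated BluRay backup
# (FreeBSD dd takes bs=1m; GNU dd would want bs=1M)
dd if=/dev/urandom of=/mnt/tank/testdata/bd001.mkv bs=1m count=17920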
The majority of files in my use case would be 15-25MB "RAW" image files; 1.5-2GB H264 video files (DVD backups with 5.1 pass-through); and 15-20GB video files (H264/5 BluRay backups). Note that the RAW image files are internally compressed and the H264 files are already encoded, so their contents would look essentially random from a compression standpoint.
The majority of the files by count would be RAW image files (not compressible), each with an XML sidecar file (extremely compressible) and a finished client JPEG (HQ, so very compressible), and these I suspect would create a random I/O mix. The video files would create a sequential I/O mix: a much smaller number of files, but at significantly larger sizes.
Assuming 10000 additional RAW image files (and the same number of sidecars and JPEGs):
10000 x (25MB + 3MB + 2KB) = 280,020MB ≈ 273.5GB
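Generating the image triples could be a simple loop; the paths, names, and sizes below are assumptions:

# 10000 triples: 25MB "RAW", 3MB JPEG, 2KB XML sidecar
# note: /dev/urandom makes even the sidecars incompressible;
# /dev/zero would model the compressible files more faithfully
i=1
while [ $i -le 10000 ]; do
    dd if=/dev/urandom of=/mnt/tank/testdata/img$i.raw bs=1m count=25 2>/dev/null
    dd if=/dev/urandom of=/mnt/tank/testdata/img$i.jpg bs=1m count=3 2>/dev/null
    dd if=/dev/urandom of=/mnt/tank/testdata/img$i.xml bs=1k count=2 2>/dev/null
    i=$((i + 1))
done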
So we need about 3TB of video (6TB target - 2.75TB existing - ~0.27TB of images). I've been transitioning from DVD to BluRay, so I would estimate the future mix to be about 10:1 in favor of BluRay, with no anticipated need for 4K (my eyes aren't that discerning, and my display is neither capable nor big enough).
60 x 1.75GB video files = 105GB
170 x 17.5GB video files = 2975GB
Total: 3080GB
As dd is single-threaded, and /dev/urandom is also single-threaded and CPU-bound, I propose to run six dd jobs in parallel to generate the simulated video files, a seventh for the image files, and leave one thread free for the system (see the sketch below).
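A minimal sketch of the parallel generation in sh, with the same made-up paths; each background job handles every sixth BluRay-sized file:

# six parallel generators for the 170 simulated BluRay files
t=1
while [ $t -le 6 ]; do
    (
        i=$t
        while [ $i -le 170 ]; do
            dd if=/dev/urandom of=/mnt/tank/testdata/bd$i.mkv bs=1m count=17920 2>/dev/null
            i=$((i + 6))
        done
    ) &
    t=$((t + 1))
done
# the image-file loop would run as a seventh background job;
# wait for everything to finish before pulling a disk
wait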
Suggestions? Improvements?