leenux_tux
Patron
- Joined
- Sep 3, 2011
- Messages
- 238
A topic for consideration/discussion... for the home/small business user...
I am in the middle of a server rebuild at the moment, sourcing new hardware, cases, a rack mount cabinet, more hard drives etc. I have also bought another IBM M1015 controller, so now I have two to play with. However, a thought occurred to me whilst going through this exercise... Will hard drives get too big for a ZFS-type filesystem?
I currently have a mixture of 1TB and 2TB hard drives in my two pools. I have purchased a bunch more 2TB drives to extend one of my pools (currently 4x2TB, raidz1... a bad choice, but this procedure is the fix).
Looking at the choice of hard drives out there now, 3TB, 4TB, 6TB, and I'm sure I read 8TB is on the horizon, how much of a strain on hard drives is a ZFS rebuild? Replacing a 2TB drive in a raidz1 took over 5 hours per disk for me when I moved one of my pools from 4x1TB drives to 4x2TB. If you had (for example) an 80% full raidz2 containing 4x6TB drives and had to replace a drive, how long would it take, and how much strain would it put on the other drives?
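For a rough back-of-envelope answer to my own question, you can scale my observed 2TB/5-hour figure up to bigger drives. This is only a lower-bound sketch: it assumes the resilver sustains the same effective throughput, whereas real resilvers walk pool metadata and are usually slower on fuller, more fragmented pools.

```python
def resilver_hours(capacity_tb, fill_fraction, rate_mb_s):
    """Rough lower bound: data to rebuild divided by sustained throughput."""
    data_mb = capacity_tb * 1e6 * fill_fraction  # TB -> MB (decimal units)
    return data_mb / rate_mb_s / 3600            # seconds -> hours

# My observed case: a 2TB drive resilvered in ~5 hours,
# which implies an effective rate of roughly 111 MB/s.
observed_rate = 2 * 1e6 / (5 * 3600)

# Same effective rate for a 6TB drive in an 80% full pool:
print(round(resilver_hours(6, 0.8, observed_rate), 1))  # ~12 hours
```

So even under optimistic assumptions you are looking at half a day of every remaining drive being hammered, which is exactly why the strain question matters as capacities grow.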
I am wondering where the "sweet spot" is. Will SSDs become so cheap that we use multiples of that type of drive instead of small numbers of huge-capacity drives?
I hope this makes sense and provokes some healthy debate.
Leenux_tux