Chewie71
Cadet
Joined: Sep 26, 2012
Messages: 9
I have one of the BackBlaze pods, default hardware as sold by Protocase.
http://www.protocase.com/products/index.php?e=Backblaze
Hardware Specs are on this page.
http://blog.backblaze.com/2011/07/20/petabytes-on-a-budget-v2-0revealing-more-secrets/
But basically...
8GB RAM
2 x Syba PCI Express SATA II 4-Port RAID Controller Card SY-PEX40008
9 x CFI-B53PM 5 Port Backplane (SiI3726)
45 x 3TB WD Red NAS hard drives
If I remember correctly, eight of the backplanes connect to the PCIe SATA cards and one connects to the motherboard.
I've installed FreeNAS 8.2.0-p1 onto a thumb drive. I've read the forums quite a bit about the most performant way to assemble the ZFS pool, and the conventional wisdom seems to be to create one pool and populate it with a bunch of 6-disk raidz2 vdevs. I've partly done this, and my setup is below.
ZFS_POOL
- raidz2
- - ada0p2
- - ada1p2
- - ada2p2
- - ada3p2
- - ada4p2
- - ada5p2
- raidz2
- - ada6p2.nop
- - ada7p2.nop
- - ada8p2.nop
- - ada9p2.nop
- - ada10p2.nop
- - ada11p2.nop
- cache
- - ada45p1
- spares
- - ada44p2
- - ada43p2
- logs
- - ada42p2
- - ada41p2
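For reference, the commands to build that layout would have been roughly the following (reconstructed after the fact, so partition numbers and device names may be slightly off from the listing above; the .nop devices in the second vdev came from the gnop 4K-sector trick):

# first 6-disk raidz2 vdev creates the pool
zpool create ZFS_POOL raidz2 ada0p2 ada1p2 ada2p2 ada3p2 ada4p2 ada5p2
# wrap the next six partitions in 4K gnop providers, then add them as a second raidz2 vdev
gnop create -S 4096 ada6p2 ada7p2 ada8p2 ada9p2 ada10p2 ada11p2
zpool add ZFS_POOL raidz2 ada6p2.nop ada7p2.nop ada8p2.nop ada9p2.nop ada10p2.nop ada11p2.nop
# cache, hot spares, and log devices
zpool add ZFS_POOL cache ada45p1
zpool add ZFS_POOL spare ada44p2 ada43p2
zpool add ZFS_POOL log ada42p2 ada41p2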
Every time I add another 6-disk raidz2 vdev to the pool, it's going to eat up two drives' worth of capacity for parity. So out of the 135TB raw it looks like I may only get a little over half of that as usable space, which seems like an awful waste to me. Maybe I'm designing this wrong? I'm open to suggestions.
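To show my math (rough numbers, treating the 3TB drives as exactly 3TB raw):

45 drives x 3TB = 135TB raw
1 cache + 2 spares + 2 logs = 5 drives off the top, leaving 40 for data vdevs
6 vdevs x 6 drives = 36 drives in raidz2, with 4 drives left over
6 vdevs x 4 data drives x 3TB = 72TB usable, or about 53% of the 135TB raw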
The other question I have, which I've seen asked about but not really answered, is the siisch timeout problem. I switched to FreeNAS from the default Debian install because the port multiplier issue was giving me fits in Linux; I also wanted a better way to manage the storage, and ZFS support was another reason to switch. But in some early testing, when I had mistakenly created a pool with a single vdev spanning ALL the drives, I started to see the siisch timeout problems in FreeNAS too. Maybe with my ZFS redesign those will go away, but I'm not too hopeful.
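In case the details matter, the messages I'm talking about are the siis(4) channel timeouts that show up in the kernel log. For anyone wanting to check for the same thing, the standard FreeBSD tools should be enough, something like:

# look for siis channel timeout messages
dmesg | grep -i siisch
grep -i siisch /var/log/messages
# confirm all the drives behind the port multipliers are still attached
camcontrol devlist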
I'm holding off on doing anything further, though, until I get a better understanding of the ZFS design and whether I should make any changes.
Thanks,
Matt