jamiejunk
I'm looking to bounce what we're doing off some knowledgeable people. I figured this would be the place to look.
You can see a graphical representation of what we're doing and our specs here.
https://docs.google.com/file/d/0B3_C5r_n0NxLMUp6c1RrejNKMVk
We've been running some kind of ZFS SAN since 2008, and performance has almost always been an issue for us. We use the storage for typical workloads: NFS shares for home directories and NFS shares for VMware shared storage.
We don't have huge data in the grand scheme of things, but we have thousands of staff and students with lots of little preference and home-directory files. We never come close to maxing out our gigabit Ethernet, but with so many little files the lack of disk IO has been killing the SAN's performance. I ran an ls -R | wc -l on the home directories last night and we're at about 19 million files.
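(For reference, find is a slightly more accurate way to get that count, since ls -R output also includes directory names and blank lines. A minimal sketch, assuming the homes live somewhere like /mnt/tank/homes, which is a placeholder path, not our real one:

# Count only regular files; /mnt/tank/homes is a made-up example path
find /mnt/tank/homes -type f | wc -l
)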
Also, any bad luck that could come our way with storage has. In 2009 we had some sort of power event. At the time we had a SAN which backed up to another SAN, and both of those drive shelves had SAS drives in them. When the power event happened it killed over 30 SAS drives across both disk shelves.
Luckily we only lost one week's worth of work, because we had a third backup to a SATA shelf.
We recently invested in an APC Symmetra, so hopefully we won't see that issue again. But because of our past issues we are very data-paranoid. Also, with literally millions of small files, backing up has been a nightmare. If we needed to restore an entire pool from a backup it could take weeks to copy all the data; not because it's a lot of data size-wise, but because there are so many small files that the file copy speed is at a crawl.
So our plan is to move some existing hardware around and purchase some new hardware.
The new hardware is:
Head Unit:
Supermicro SYS-2027R-72RFTP
128 GB of RAM
Single 2.40 GHz Xeon E5-2609 4-core processor
STEC SAS ZeusRAM for ZIL
Hopefully a STEC SAS ZeusIOPS for L2ARC
Both connected to the LSI 2208 on the motherboard
JBOD:
Supermicro 216BAC-R920LPB
24 x 512 GB Samsung 840 Pro SSDs, connected to the
LSI 9207-4i4e PCI-E SAS2 controller in the head unit
The 512 GB Samsung 840 Pros seem to be sold out everywhere right now, though. Since I'm in a time crunch to get this done, we may be looking at other SSDs.
Running FreeNAS, installed on a 4 GB Apacer SATA DOM SLC SSD
Set up as RAIDZ2, 4 devices per vdev.
Total of 6 vdevs.
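In other words, the pool would be built something like this sketch (da0 through da23 are placeholder device names, and "hotstor" is just the pool name we're planning on). With 2 data + 2 parity drives per vdev, that works out to roughly 12 x 512 GB, or about 6 TB usable before overhead:

# Layout sketch only; device names are assumptions, not our actual cabling
zpool create hotstor \
    raidz2 da0  da1  da2  da3 \
    raidz2 da4  da5  da6  da7 \
    raidz2 da8  da9  da10 da11 \
    raidz2 da12 da13 da14 da15 \
    raidz2 da16 da17 da18 da19 \
    raidz2 da20 da21 da22 da23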
This "hotstor" SAN will do a zfs send to the "Warmstor" san every 5 minutes.
Also it will back up to the offsite "coldstor" nightly.
So worst case, if the hotstor SAN goes down or loses its pool, we can remap the servers to use the warmstor and be up and running in a few minutes. If we lose both pools we'll have the offsite backup.
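The replication itself would be plain incremental zfs send/receive, along the lines of this sketch (the dataset and snapshot names are made up for illustration, and it assumes the previous snapshot already exists on both sides):

# Take a new recursive snapshot and send only the delta since the last one.
# "hotstor/data", "warmstor/data", and the snapshot names are placeholders.
NOW="auto-$(date +%Y%m%d-%H%M)"
zfs snapshot -r hotstor/data@"$NOW"
zfs send -R -i hotstor/data@prev hotstor/data@"$NOW" | \
    ssh warmstor zfs receive -F warmstor/data

Since zfs send streams blocks instead of walking the filesystem file by file, it sidesteps the millions-of-small-files problem that makes our file-level copies crawl.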