I tend to be cautious when purchasing hardware, so I intend to largely paint-by-numbers according to Cyberjock's hardware guidelines. However, as my hardware experience is ENTIRELY on consumer grade components, I'm hoping that someone here would be kind enough to confirm that they think this stuff will all fit together nicely.
This build will be a 10x6TB Seagate SATA single VDev RaidZ2 build intended predominantly to store and stream large files (mostly video and music) to a 4 person family. I will also store some photo, tax, business, and miscellaneous document information on there. This will be a read-often environment with up to 3 concurrent users, and writes will be performed in batches once or twice a week by a single concurrent user. I'll probably also disable atime in order to further decrease writes.
I will probably experiment with a couple jails and see if I can stream something using the Plex or MediaBrowser plugins, but this is of a secondary concern. Down the road, assuming I feel I have the overhead, I might do some additional experimentation with cameras and such, but I don't want to get hung up on the idea of that.
I intend to use the X10SL7-F motherboard so that I can utilize all of the drives without the headache of using another card. The integrated LSI controller will, of course, be flashed to IT mode.
I'll use a Xeon E3-1231 v3, as I'm positive it supports ECC RAM (curse that confusing i3 mess). I'll be using 32GB of the stuff, and I'll just get one of the Supermicro-approved models for my board.
I'd like to use a Supermicro 2U/3U/4U case like the 2U Supermicro CSE-826A-R800LPB SuperChassis (12-bay SAS/SATA, Mini iPass), but I'm not sure whether, as with consumer cases, any microATX board will fit in any microATX case. If any board does fit, then I'll just get whatever Supermicro 2U/3U/4U microATX case is inexpensive at the time and has redundant power supplies of 700 watts or greater. Given Supermicro's reputation, I'm guessing those power supplies should be fine.
I'll be purchasing a UPS once the build is finished and the burn-in process is complete. I'll just choose a decently priced one from this list.
Finally, I'll just be booting off a pair of SanDisk jump drives. I believe the Cruzer is well recommended around here, and they're plentiful at local Best Buys/Targets/Wal-Marts.
Anyone have any stupid mistakes to point out or otherwise constructive comments?
Case Study
**********
Hardware Platform
Motherboard - Supermicro X10SL7-F-O - 238.99
CPU - Intel Xeon E3-1231 v3 - 242.99
RAM - 32 GB (4 x 8 GB) Samsung DDR3-1600 8GB/1Gx72 ECC - 279.96
Boot Drive - Kingston SSDNow V300 60GB 2.5 inch SATA3 Solid State Drive - 45.95
10 x 6 TB Seagate SATA STBD6000100 Hard Drives - 619.90
2U Supermicro CSE-826TQ-R800LPB SuperChassis 12bays - 299.00
UPS - CyberPower CP1500PFCLCD - 205.99
8 pin and 24 pin extension cables - 6.78
SuperMicro chassis drive bay screws - 6.99
Build Total: $1946.55
Process up to this point (Mistakes and All)
***************************************
Assembled Hardware
Mental note for server hardware noobs: I initially cursed Supermicro for their awkward chassis design decisions, then realized that almost everything I thought was going to be awkward to assemble slid out easily or otherwise allowed me much easier access than I thought it would at first glance. Fans, panels, etc. all slide out. The only real complaints I had with my chassis were the short 8 pin/24 pin cords and the relatively small space available to route 11 SATA cords.
Second server hardware noob note: I saw some vague references on this forum to people not having hard drive screws when building their systems, which I didn't really understand because the HDs come with screws. Those screws will not work on the Supermicro chassis. You need Supermicro drive bay screws, which sit flush with the edge of the drive trays. Part # = MCP-410-00005-0N
Powered on and made sure all moving parts were moving, there were no BIOS beep warnings, etc.
Configured BIOS to keep server powered off in case of power loss. I have a UPS, so if this ever occurs it's likely to be one of those nasty situations that could result in some rolling brownouts for a while.
Entered LSI Configuration Utility and confirmed that the firmware was out of date and in IR mode
Flashed to IT mode using these instructions: https://forums.servethehome.com/ind...g-the-lsi2308-on-an-x9srh-7f-to-it-mode.1734/
Re-entered LSI Configuration Utility to confirm version 16 and IT mode
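For reference, the flashing procedure in the linked guide boils down to something like the sketch below. This is not a substitute for the guide: the firmware filename here is a placeholder (use whatever IT-mode image Supermicro provides for the onboard SAS2308), and the commands are echoed by default so nothing is flashed by accident.

```shell
#!/bin/sh
# Sketch of an IT-mode flash using LSI's standard sas2flash tool.
# The firmware filename passed in is a PLACEHOLDER, not a real file.
flash_it() {
  run="${RUN:-echo}"            # echoes commands by default; set RUN="" to execute
  $run sas2flash -listall       # note the adapter and current firmware first
  $run sas2flash -o -e 6        # erase the flash (do not reboot before reflashing)
  $run sas2flash -o -f "$1"     # write the IT-mode firmware image
}

flash_it "2308IT.ROM"           # placeholder filename for illustration only
```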
Ran Memtest, but noticed that my CPU was getting warm. Attempted to cool it down by removing chassis panel - do NOT do this, your drives will get too warm as it will pull air in through the panel access and not over the drives.
Went into BIOS, set fan mode to "Full" and reattached chassis panel
Installed FreeNAS
Verified that the OS can see the bare drives. The back panel is indeed "just a circuit board" that passes the SATA connections through to the motherboard.
Verified that a momentary press of the power button didn't immediately shut down the server
Reserved a static IP address via DHCP reservation in my router for IPMI connectivity
Ran short SMART Tests
Ran long SMART Tests
Ran conveyance tests
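The same self-tests can be kicked off from the shell with smartctl. The loop below is a sketch: the device names (da0–da9) are what FreeBSD assigned on my system and may differ on yours, and the commands are echoed by default so nothing runs by accident.

```shell
#!/bin/sh
# Start a SMART self-test of the given type on every data drive.
# Device names da0..da9 are an assumption; adjust for your system.
run_selftest() {
  test_type="$1"                # short | long | conveyance
  run="${RUN:-echo}"            # echoes commands by default; set RUN="" to execute
  for disk in da0 da1 da2 da3 da4 da5 da6 da7 da8 da9; do
    $run smartctl -t "$test_type" "/dev/$disk"
  done
}

run_selftest short
# Later: run_selftest long ; run_selftest conveyance
# Inspect results with: smartctl -a /dev/da0
```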
Executed badblocks with a 4096-byte block size (-b 4096) using tmux through the GUI shell, but could only start 4 of the 10 drives because I received a "Couldn't create panel" error when trying to open the next pane
tmux closed and I couldn't get back into the shell
Configured SSH, connected to the machine through PuTTY, and attached to my tmux session
Reconfigured the tmux layout using Ctrl+B, Space (cycle layouts) and was able to start badblocks on my remaining drives
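In hindsight, the whole badblocks-in-tmux dance can be scripted: one tmux window per drive inside a detached session, so the runs survive a dropped SSH connection. This is a sketch (device names da0–da9 are assumptions; -w is a destructive write test that erases the drives), with commands echoed by default.

```shell
#!/bin/sh
# Run a destructive badblocks pass (-w) with 4096-byte blocks on each
# drive, one tmux window per drive, inside a detached session "burnin".
# WARNING: -w destroys all data on the drives. Device names assumed.
burnin_all() {
  run="${RUN:-echo}"            # echoes commands by default; set RUN="" to execute
  $run tmux new-session -d -s burnin
  for disk in da0 da1 da2 da3 da4 da5 da6 da7 da8 da9; do
    $run tmux new-window -t burnin -n "$disk" \
        "badblocks -ws -b 4096 /dev/$disk"
  done
}

burnin_all
# Reattach any time with: tmux attach -t burnin
```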
Set up two-factor authentication on my Gmail account
Integrated Gmail alerting into FreeNAS
Set up a 14-day recurring scrub schedule for my boot drive
Set up the SMART service to check drive temps every hour, send me an informational email if one is over 40 °C, and send me a critical email if one is over 45 °C
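If you ever want to spot-check a temperature by hand rather than wait for the email, smartctl reports it as SMART attribute 194 (Temperature_Celsius). A small parsing helper, with the device name in the usage comment being an assumption:

```shell
#!/bin/sh
# Extract the current drive temperature (the raw value of SMART
# attribute 194, Temperature_Celsius) from `smartctl -A` output
# supplied on stdin.
temp_of() {
  awk '$1 == 194 && $2 == "Temperature_Celsius" { print $10 }'
}

# Live usage (da0 is an assumed device name):
#   smartctl -A /dev/da0 | temp_of
```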
Set up UPS integration. Initially hit the issue described here: https://forums.freenas.org/index.php?threads/data-for-ups-is-stale.20898/page-3 . It seems I'll have to restart FreeNAS after hard-drive burn-in to test.
Restarted FreeNAS
Reran long SMART tests
Set up SMART test schedule (every 2 days for short tests, twice a month for long tests)
Created 10 x 6TB RaidZ2 volume
Created a dataset with atime off and set up as a Windows Share Type. Compression and Dedupe are off
Created a CIFS share with guest access
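For the record, the pool and dataset steps above look roughly like this underneath the GUI. The pool name "tank", the dataset name, and the device names are my illustrative choices; on FreeNAS you should create the pool through the GUI so the middleware knows about it. Commands are echoed by default.

```shell
#!/bin/sh
# Sketch of the ZFS layout underneath the GUI steps. Pool name "tank",
# dataset name "media", and device names are assumptions; in practice,
# create pools via the FreeNAS GUI.
mkpool() {
  run="${RUN:-echo}"            # echoes commands by default; set RUN="" to execute
  $run zpool create tank raidz2 \
      da0 da1 da2 da3 da4 da5 da6 da7 da8 da9
  # Dataset tuned for the media workload: atime off to cut write traffic,
  # compression and dedup off to match the GUI settings above.
  $run zfs create -o atime=off -o compression=off -o dedup=off tank/media
}

mkpool
```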
Now copying files over