12 bays, rack mounted platform as primary storage

Status
Not open for further replies.

Kean

Dabbler
Joined
Sep 28, 2016
Messages
11
I was looking for a storage platform for at least 3 months. Previously I was leaning towards QNAP or Synology (4 bays, rack mounted). Finally I realized that an almost $1000 storage unit (without the drives) for only 4 disks is not really efficient, especially if I want some redundancy (risky RAID5, or RAID10). So I decided to go for a custom-built storage server with FreeNAS as the best open-source OS.
I've done a lot of reading and digging, and I'm finally ready to show what I found.

Platform purpose
Pure storage without any additional apps or jails. I will use CIFS, NFS and iSCSI as a target for several crucial XenServer VMs, to enable live migration (XenMotion).
On day one I will start with 6 disks (RAIDZ2). The remaining 6 will be installed once needed.
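That staged layout (6 disks now, 6 later) maps onto ZFS as two RAIDZ2 vdevs added over time. A rough sketch of the underlying commands — device names (da0..da11) and the pool name are placeholders, and on FreeNAS you would normally do this through the GUI:

```shell
# Day one: a single 6-disk RAIDZ2 vdev (survives any two drive failures).
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# Later, when the remaining 6 drives arrive, add a SECOND RAIDZ2 vdev;
# ZFS then stripes writes across both vdevs. Note that you cannot grow
# an existing RAIDZ2 vdev one disk at a time on FreeNAS of this era.
zpool add tank raidz2 da6 da7 da8 da9 da10 da11
```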

Platform HW (plus means I have it already)
MB: Supermicro X10SDV-2C-7TP4F
CPU: Intel® Pentium® processor D1508 (embedded)
Memory: 16GB DDR4 2133 ECC-R (HMA42GR7AFR4N-TF)
+PS: Seasonic SS-400H2U
+Case: X-Case RM 212 Pro 2u with 12 Hotswap trays
Boot device: 2x Supermicro 16GB SATA DOM (SSD-DM016-PHI)
HDDs: 6x WD Red 4TB 3.5" (WD40EFRX)

Open points:
1. How do you see this HW selection?
2. Should I consider 32GB memory from the beginning?
3. How about this CPU? Is it powerful enough to handle Z2 redundancy and fly with 10Gb/s?
4. What about the LSI 2116 embedded into the MB - does it work in IT mode out of the box?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
1. How do you see this HW selection?
It's fine.
2. Should I consider 32GB memory from the beginning?
Meh, might as well try 16GB first. The extra 16GB are painless to add. You could, however, go with a single 32GB RDIMM for even more ease of upgrading.
3. How about this CPU? Is it powerful enough to handle Z2 redundancy and fly with 10Gb/s?
It should be at least reasonable, but the 10GbE controller in the SoC will have a hard time doing the full 10Gb/s in FreeBSD. Probably closer to 7-8 Gb/s.
4. What about the LSI 2116 embedded into the MB - does it work in IT mode out of the box?
It only does IT mode, but you have to flash it to the appropriate firmware version.
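The flashing step above is usually done with LSI's `sas2flash` utility. A sketch, assuming the onboard LSI 2116 presents like other SAS2-generation HBAs (the firmware image filename is illustrative — get the actual image from Supermicro for this board):

```shell
# Check the adapter, its current firmware phase, and BIOS version:
sas2flash -listall

# FreeNAS 9.x expects the IT firmware phase to match its mps(4) driver
# (phase 20 at the time of this thread). Flash the vendor-supplied image:
sas2flash -o -f 2116it.bin
```

A firmware/driver phase mismatch produces a warning at boot, so it's worth checking `dmesg` after flashing.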
 
Joined
Feb 2, 2016
Messages
574
The hardware looks fine, @Kean, but I'm concerned about the drive configuration.

I'm not sure a large RAIDZ2 pool is right for XenServer VMs, though, especially if you're looking to do live migrations. XenServer likes IOPS and a single RAIDZ2 vdev doesn't have a lot of them. You're better off doing striped mirrors (which will, unfortunately, reduce your available storage space from 16TB to 12TB).
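The space tradeoff is easy to sanity-check. A quick sketch (function names are mine, and this ignores ZFS metadata overhead and TB-vs-TiB accounting):

```python
# Rough usable-capacity comparison for 6x 4 TB drives.

def raidz2_usable(disks: int, size_tb: float) -> float:
    """RAIDZ2 loses two disks' worth of space to parity."""
    return (disks - 2) * size_tb

def striped_mirrors_usable(disks: int, size_tb: float) -> float:
    """2-way mirrors lose half the raw space."""
    return disks / 2 * size_tb

print(raidz2_usable(6, 4))           # -> 16.0 (TB)
print(striped_mirrors_usable(6, 4))  # -> 12.0 (TB)
```

The mirrors cost 4 TB of capacity here, but each mirror vdev contributes its own IOPS, which is what the VMs care about.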

Look at your actual storage needs: if you're using iSCSI, keeping your pool at 50% full or less is a thing. An NVMe SSD for SLOG could be a good investment, too.
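Adding that SLOG later is a one-liner at the pool level. A sketch — `nvd0` is a placeholder for an NVMe SSD (ideally one with power-loss protection), and `tank` is a placeholder pool name:

```shell
# Attach a separate log (SLOG) device to absorb the sync writes
# that iSCSI and NFS generate:
zpool add tank log nvd0

# Verify it appears under the "logs" section of the pool layout:
zpool status tank
```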

If your XenServer VMs are relatively small, consider SSD storage for them while leaving your bulk data on conventional drives. We went that route, it was fairly inexpensive and it GREATLY improved performance.

If you haven't read this thread about XenServer FreeNAS storage, please do so. Lots of good jumping-off points there.

Cheers,
Matt
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Hardware seems fine, but I suspect you might need a good SLOG device if using NFS for your datastore. Your pool layout should be mirrors, not RAIDZ2, if you're using it to host VM disks.

Sent from my Nexus 5X using Tapatalk
 

Kean

Dabbler
Joined
Sep 28, 2016
Messages
11
Thanks for your comments!
Regarding VMs and iSCSI: I rate them as second priority. This is a home network, not an enterprise.
I have ~8 VMs, some of which are idling almost constantly. I won't put the DB VM on iSCSI, as I understand I wouldn't have sufficient IOPS in such a layout.
Several VMs handle mostly network functions, so there are almost no I/O operations, just logs.
One VM serves the WWW. I rate it as crucial, but I put reliability above raw performance. If it turns out too slow, I'll move it back to the XenServer machine's local drives.

To conclude: if I understood correctly, you propose one Z2 pool for pure storage and one small SSD pool for VM housing. Right?
 
Joined
Feb 2, 2016
Messages
574
Low I/O? Home usage? You'll be more than fine, @Kean.

A pair of SSDs mirrored to host your VMs will feel amazing but may approach decadent and unnecessary.

Cheers,
Matt
 

Kean

Dabbler
Joined
Sep 28, 2016
Messages
11
The project is not dead! In fact, it's been running for 4 months. I just forgot to share some info and pics :)
The spec of my build is exactly as in the first post.

The whole building process was quite smooth until... drive detection. For some unknown reason, only some of the drives were detected in the LSI BIOS. More worrying, all of them were detected in slot 255 of the same enclosure. The tricky part was that drives connected via SAS-to-4xSATA cables worked great, so the problem had to be in the case's SAS backplane or the cables. Finally I figured out that the problem lay in the Supermicro SAS cable (CBL-SAST-0508-01) and its compatibility with my case.
This cable is equipped with a sideband signaling link, used for communication between the SAS controller and the SAS backplane (e.g. LEDs, indication, etc.).
I decided to sacrifice one of the 3 cables and cut the sideband wires. It worked! After that, everything else was sooo smooth.

After some first tests I found that the SAS controller in this case, mounted in my rack, runs quite hot. It reaches 75°C even without load after 1 hour of runtime, so I stuck a 50x50x15mm fan on it to cool it down.

Below you can find the first batch of pics (fans, mainly).
Boxed fans:
https://i83i.imgup.net/P101020055d5.JPG

CPU FAN:
https://c56i.imgup.net/P1010204f8f0.JPG

SAS FAN:
https://m07i.imgup.net/P1010205892f.JPG

SAS cables (still with SB :) )
https://w20i.imgup.net/P1010206e635.JPG

Case:
https://v18i.imgup.net/P10102077d8a.JPG

What next?
1. I would like to go deeper into fine tuning, but first I'd like more info on how to effectively manage FreeNAS and what should be done to anticipate problems and drawbacks. All steps from the documentation are already implemented. Do you have any hints?
2. Corral transition.
3. Network enhancements. I would like to set up LACP to unload the link during heavy file transfers. Eventually I'd go for 10Gb/s, but switches are so expensive that it will take some time.
4. Set up some iSCSI volumes to play with, and decide if performance is OK for production for small VMs.
5. Introduce some off-site, cloud-based recovery backup for non-critical files (critical files are already mirrored).
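On point 1, most of the proactive monitoring boils down to scheduled scrubs and SMART tests, which FreeNAS drives from the GUI. The underlying commands, as a sketch (`tank` and `da0` are placeholders):

```shell
# Pool and vdev health, plus results of the last scrub:
zpool status -v

# Kick off a manual scrub (FreeNAS's GUI scheduler handles the
# recurring ones; a 35-day interval was the default at the time):
zpool scrub tank

# Per-drive SMART attributes (reallocated sectors, pending sectors,
# temperature are the ones to watch):
smartctl -a /dev/da0

# Start a long SMART self-test on a drive:
smartctl -t long /dev/da0
```

Pairing this with working email alerts is the main "predict problems" step — a scrub or SMART failure you never see doesn't help.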
 
Last edited:

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Images are broken. Just copy-paste them into the message and XenForo will work its magic.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194

Please do not use external services for the images. Upload them to the forum, instead.
 