manifest3r
Dabbler
- Joined
- Jul 3, 2014
- Messages
- 16
Hey Everyone,
I was in the market for a Synology/QNAP 4-bay NAS, but I stumbled upon the A2SDI-4C-HLN4F motherboard for $61 shipped, and now I am all in on building a TrueNAS Core server. My main goal is a stable and reliable system that I can update regularly without issues. I currently hold ~1TB of archival data on my Proxmox server, and my long-term goal is to offload it to this NAS.
Here are my current build specs:
| Part | Model | Price ($) |
| --- | --- | --- |
| Motherboard | A2SDI-4C-HLN4F (Mini-ITX) | 61.63 |
| CPU | Intel Atom C3558 (onboard) | 0 |
| RAM | 16GB ECC | 0 |
| PSU | Enhance Electronics ENP-7025B 250W 80 Plus Bronze | 28.62 |
| SSD for TrueNAS OS | Lexar NS100 128GB 2.5" SATA III, up to 520MB/s read (LNS100-128RBNA) | 10 |
| USB-to-SATA adapter | SNANSHI USB 3.0 to SATA III cable with UASP (power adapter not included) | 5.99 |
| Case | UNAS NSC-810 | 218.17 |
| Hard drive | HGST 4TB | 170 |
|  |  | 17.99 |
| 10Gbps NIC | 2x Intel X520? (not including SFP+ cable) | 17/ea |
| SFP+ DAC cable, 1m (3.3ft) | Twinax passive, compatible SFP-H10GB-CU0-3M (Ubiquiti) | 15.99 |
The OS will be booted from the SATA SSD over the USB adapter. I'm not 100% sure the next step will work, as the A2SDI-4C-HLN4F's spec says the "total combined PCIe lanes and SATA ports is up to 8." My plan is to add a SAS HBA, which will take up 2 of those PCIe lanes, leaving me with 6 available SATA ports. From my research, the M.2 slot's PCIe lanes are independent of that budget, so I plan to use an M.2-to-PCIe adapter there for a 10Gbps NIC (I'm aware I won't get full bandwidth, but anything faster than 1Gbps is fine).
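For the "won't get full bandwidth" point, here's the back-of-the-envelope math on what a PCIe link can feed a 10GbE NIC. The Gen3 assumption and the lane counts below are mine, not from the board manual, so check the M.2 slot's actual lane width before buying the adapter:

```python
# Rough usable PCIe bandwidth feeding a 10GbE NIC.
# Assumption: the M.2 slot provides PCIe 3.0 (8 GT/s, 128b/130b encoding).
def pcie_gbps(lanes: int, gt_per_s: float = 8.0, encoding: float = 128 / 130) -> float:
    """Usable Gbit/s per direction, ignoring protocol overhead."""
    return gt_per_s * encoding * lanes

for lanes in (1, 2, 4):
    print(f"x{lanes}: {pcie_gbps(lanes):.1f} Gbit/s")
```

Even a single Gen3 lane works out to roughly 7.9 Gbit/s, comfortably above the "faster than 1Gbps" target, while x2 would cover full 10GbE line rate.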
It's a bit Frankenstein-ish, especially in the UNAS NSC-810 case (the 810A was out of stock), but I think I'll give it a go. I don't plan on running any VMs on this server, as my Proxmox server handles Kubernetes and general virtualization. Although this NAS is mainly for archival data, I'm going to actively point an Immich container at it as a Google Photos replacement, which is why I'm going with a 10Gbps NIC.
Questions...
1. Is a 10Gbps SFP+ NIC the best way to connect to my Proxmox server? I have connected via Fibre Channel iSCSI target in the past back when FreeNAS was a thing.
2. Do I need a cache?
3. I'm trying to keep power draw low. Would sticking with the onboard SATA ports be a better alternative, considering an HBA draws a constant ~10W? It would also free up a couple of SATA ports.
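To put question 3 in rough numbers, here is a hypothetical idle power sketch. Every figure below is a typical published/reviewed value I'm assuming, not a measurement of this build, and the specific HBA model is just an example:

```python
# Hypothetical idle power budget (typical figures, not measured on this build).
with_hba = {
    "Atom C3558 SoC (16W TDP)": 16,
    "SAS HBA (e.g. an LSI 9211-8i class card, idle)": 10,
    "Intel X520 NIC": 7,
    "4x 3.5in HDD, idle (~5W each)": 20,
}
# Dropping the HBA and running drives off the onboard SATA ports:
without_hba = {k: v for k, v in with_hba.items() if "HBA" not in k}

print("with HBA:   ", sum(with_hba.values()), "W")
print("without HBA:", sum(without_hba.values()), "W")
```

Under these assumptions the HBA is about a fifth of the idle budget, which is a meaningful share for an always-on archival box.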
EDIT: For anyone stumbling on this thread, you can have up to 16 drives on this motherboard. 8 from the on-board SATA AND 8 more from a PCIe HBA. Source: https://www.reddit.com/r/unRAID/comments/h0pxak/a2sdi4chln4f_raidhba_card_question/