MurtaghsNAS
Dabbler
- Joined
- Jul 21, 2021
- Messages
- 17
I am a new user looking to set up a home NAS. SCALE seems to be an ideal choice for my use case: a media storage server with a few small, single-user apps (Nextcloud, MythTV, Twonky) running "virtually." I currently run them as VMs, but I should be able to refactor them into Docker containers as I get used to Docker. I am in the planning and architecting stage, so SCALE being in beta doesn't bother me; by around the time it is scheduled to be ready, I'll be ready to buy my gear.
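For context, this is roughly the kind of thing I'd be migrating each VM to. A minimal sketch of the Nextcloud piece as a compose file; the image name is real, but the port mapping and storage path are just my assumptions, not a tested config:

```yaml
# Hypothetical docker-compose.yml for the Nextcloud service.
# Host port and data path are placeholders for illustration only.
version: "3"
services:
  nextcloud:
    image: nextcloud:latest
    ports:
      - "8080:80"                            # web UI on host port 8080
    volumes:
      - /mnt/pool/nextcloud:/var/www/html    # app data kept on the NAS pool
    restart: unless-stopped
```

The other two apps would get similar single-service definitions as I learn Docker.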
When I first saw SCALE supported clusters, I saw it as the typical enterprise "collection of equals" used to guard against catastrophic node failure. As a home user, this didn't excite me because I was planning a single node, and can't realistically afford node redundancy. But after hearing people repeatedly talk about "storage nodes" and "compute nodes", I realized there was a possibility I had misunderstood the clustering design, and that it could really be much more exciting. Let me see if I understand properly.
Let's start with a node that is a perfectly balanced storage node. It has just enough processor power and memory to handle OS and storage tasks, and not a bit more. I decide I want to take advantage of SCALE's "virtualization" (VM, Docker, Kubernetes) features. Of course, this perfect storage node does not have the horsepower to handle "virtualization." Would I be able to add to the cluster a compute-rich but storage-poor node, such as an Intel NUC, to handle the "virtualization" load?
This Lego-brick style of clustering is really exciting to me because of the upgrade possibilities down the road. Right now the single node design I am working on is a bit overkill. I am allocating an over-large compute budget to the design because I do want the virtualization, but don't know what my needs and wants are going to be 5 years down the line. If I have the ability to add compute as a cluster feature down the road, I can right-size the initial storage node now, saving money.
Am I understanding TrueNAS SCALE's cluster design correctly that it is a Lego-brick style design where compute nodes and storage nodes can be added as needed over time, or am I misconstruing things?