Why scale? Because you can add more storage servers and scale horizontally. Can Core do that?
For VM storage? Sure. You keep adding hard drives to your Core installation until you get up past a hundred; I feel the practical limit is probably somewhere under 200. With modern 20TB HDDs, this gives you 4PB of raw space, up to 2PB of pool space, and somewhere between 100TB and 1PB of actual usable VM storage space, depending on what you want your free space reserve and mirroring redundancy to be.
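To make that arithmetic concrete, here's a rough sketch. The mirror width and free-space reserve values below are illustrative assumptions, not sizing recommendations:

```python
def usable_tb(drives=200, drive_tb=20, mirror_width=2, reserve=0.5):
    """Back-of-envelope usable VM space after mirroring and a
    free-space reserve -- the two knobs mentioned above."""
    raw = drives * drive_tb          # 200 x 20TB = 4000 TB = 4 PB raw
    pool = raw / mirror_width        # 2-way mirrors -> 2 PB of pool space
    return pool * (1 - reserve)      # leave headroom for block storage

# 4 PB raw, 2-way mirrors, 50% free-space reserve -> 1 PB usable
print(usable_tb())  # 1000.0

# 3-way mirrors with a very conservative 90% reserve -> ~133 TB,
# which is how you end up at the low end of the 100TB-1PB range
print(usable_tb(mirror_width=3, reserve=0.9))
```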
Once you get to that point, you add a second server and create a new datastore on that.
If you were mistaking Gluster for some magic way to add dozens of petabytes transparently to your server, forget it. Gluster is an abstraction layer, and all the additional handling and indirection inherently introduce significant latency and a significant reduction in potential IOPS. Gluster is meant as a way for an organization to have distributed, redundant resources such as filesharing; it is never going to be a high performance solution.
And why are there misconceptions about using SSDs for NAS?
Bluntly, 'cuz people are stupid. Stupid people do stupid things and then when it blows up, rather than learning from the mistake and figuring out how to do better next time, they just say stupid things about their misadventures and then generalize. I've talked repeatedly on these forums about our adventures here at SOL with SSD on hardware RAID1 (not ZFS) for VM storage, something that had been considered heretical a decade ago, but turns out to be practical if deployed intelligently on compatible workloads. Keywords "Intel 535" in search.
It should be obvious that some SSDs, such as the Intel S3710, are absolutely rock solid competitors to HDDs for most applications, with incredible endurance. On the other hand, you also have the 870 QVO (not EVO), which comes in an 8TB 2.5" model but has very limited endurance. Both of these are PROBABLY stupid choices for NAS SSDs, but might not be in certain circumstances. I fully expect the 870 QVOs would be awesome in a write-once-read-many (WORM) archival scenario, for example; a great place to dump your only-very-slowly-changing ISO library.
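To put numbers on "incredible" versus "very limited" endurance, here's a quick drive-writes-per-day (DWPD) comparison. The TBW and warranty figures below are ballpark values from memory; check the actual datasheets before buying:

```python
def dwpd(tbw, capacity_tb, warranty_years):
    """Drive writes per day sustainable over the warranty period,
    given the rated terabytes-written (TBW) endurance."""
    return tbw / (capacity_tb * 365 * warranty_years)

# Intel S3710 1.2TB: ~24,300 TB written over a 5-year warranty
print(round(dwpd(24300, 1.2, 5), 1))   # on the order of 11 DWPD

# Samsung 870 QVO 8TB: ~2,880 TB written over a 3-year warranty
print(round(dwpd(2880, 8, 3), 2))      # on the order of 0.33 DWPD
```

Roughly a 30x difference, which is why the S3710 shrugs off VM write workloads while the QVO is best kept to mostly-read duty.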
Drives such as WD's Red SA500 are targeted specifically at the middle-of-the-road NAS market, so not only do people use SSDs for NAS, vendors actively target the segment. These drives are not going to be good for heavy VM write environments, but they are reasonable replacements for general NAS workloads.
SFP+ can be better than RJ45, but if RJ45 works, some people might prefer it for various reasons. Sure, I can use SFP+, but like I said, as long as RJ45 works, choosing it can't be the end of the world.
That's just it, though: RJ45 does not work well. The reason it saw no serious uptake is that it's a crappy technology, which I talk about in the 10 Gig Networking Primer. If you don't care to listen, that's fine, I can't make you. But I'd much rather have a single 40Gbps SFP+ low latency link from my NAS to my hypervisors than a LACP'd pile of cruddy 10GbaseT copper links. The latest hotness is 25Gbps/100Gbps.
If I said RAID 10 for ZFS, well, communication is all about explaining things. It is pretty much mirroring the disks so I can use half the space and be able to lose half the disks. That was the message. Like I said, I am new to this, so I don't think I have to be an expert with the correct words.
"Well your ideal path to nirvana storing things is to have a lot of very fast thinky things and memory things and links between them that are not draggy, plus you need to be able to talk fast to all the other things without making it hard."
There. I've given you the actual summary answer to your questions, but done it with none of the correct words or proper details. Real helpful, huh? Communication is indeed about explaining things, and no one here would dare accuse me of too few words or too little effort. No one expects you to be an expert with the correct words. However, it's important to communicate clearly and accurately, and I already explained that we run into lots of
Dell owners who tend to show up and expect to be able to use a PERC RAID controller, often in RAID mode.
so it really is important to clearly convey your meaning. We're not mad at you for using the wrong words, but please do try to cooperate if you're asked to use the right ones. I provided a link to the Terminology and Abbreviations Primer above to make it easier for you to self-educate if you'd like.
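For the record, the ZFS term for "RAID 10" is a pool of two-way mirror vdevs, and the "lose half the disks" intuition has a catch: you can lose up to half the disks only if no mirror loses both of its members. A sketch of that failure logic (the da0..da5 device names are hypothetical):

```python
# A pool of 2-way mirror vdevs ("RAID 10" in ZFS terms).
disks = ["da0", "da1", "da2", "da3", "da4", "da5"]

# Pair adjacent disks into mirrors: (da0,da1), (da2,da3), (da4,da5)
vdevs = [set(disks[i:i + 2]) for i in range(0, len(disks), 2)]

def pool_survives(failed):
    """The pool survives as long as no mirror vdev has lost
    ALL of its member disks."""
    return all(not vdev <= set(failed) for vdev in vdevs)

# Losing half the disks is fine IF it's one disk per mirror:
print(pool_survives(["da0", "da2", "da4"]))  # True

# But losing just two disks in the SAME mirror loses the pool:
print(pool_survives(["da0", "da1"]))         # False
```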
What scares me about NFS is the extra latency of traveling over the network, compared to storage local to the server. As you can see from performance results, the latency can be a killer compared to a local drive.
So then attach all your storage directly to your hypervisors via the latest and fastest high end low latency PCIe 4 based NVMe SSD's and call it a day.
The latency of a single HDD seek is perhaps 10ms, and the latency of a 10G network hop is far less than that. If your working set is sitting in the NAS's ARC or L2ARC, you can beat HDD seek times very consistently and get crazy good performance out of the stuff that is being frequently accessed, while paying only a mild penalty for the convenience of having it on shared storage. Shared storage also means that you can have multiple hypervisors with access to the VMs, which is really convenient if you have vMotion or other rapid migration capabilities.
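The magnitude gap is worth spelling out. The numbers below are ballpark assumptions, not measurements:

```python
# Rough latency comparison: one HDD seek vs. one 10G network hop.
hdd_seek_ms = 10.0   # typical 7200rpm seek + rotational latency
net_rtt_ms = 0.1     # plausible 10GbE round trip on a clean LAN

# A read served from the NAS's ARC costs roughly a network round
# trip; a cold read from a local HDD costs a full seek.
print(hdd_seek_ms / net_rtt_ms)  # the cached path is ~100x quicker
```

So an ARC hit over the wire beats a local spinning-disk seek by around two orders of magnitude; you only lose to local storage when you'd be comparing against local flash or when the working set falls out of cache.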
At the end of the day, every storage strategy has upsides and downsides. Expense, capacity, speed, latency, you play all these factors off each other. You might find that there isn't just one strategy that suits all your needs. Our hypervisors here tend to have a combination of NVMe SSD, hardware RAID1 LSI3108/3508 with SSD and HDD, Synology iSCSI as our "low performance" tier, and TrueNAS NFS/iSCSI for certain capacity workloads. Each of these has different strengths and weaknesses.