> I understand that RAM is marginal, but it's a system for personal use.
The quoted statement is equivalent to many others I have seen here over the years, so the content below is really not meant to pick on its author. Rather, I finally got motivated to write something more general (perhaps I will eventually turn it into a resource).
There are two aspects to running a system with fewer resources (RAM, CPU power, network bandwidth, etc.) than the vendor specifies as the minimum:
- Performance
- Stability
Most people think primarily (or exclusively) about performance. That is more or less the logical consequence of personal experience: if I give the system less RAM than recommended, it will slow down (sometimes dramatically). With the power that systems for personal use (PCs and laptops) have today, very little software is affected so badly that it actually crashes or destroys data. That was quite different 20+ years ago, when I ran a J2EE application server, an Oracle 8i database, and an Eclipse IDE on a laptop with 256 MB of RAM and a single-core 800 MHz CPU (not to forget the 20 GB HDD spinning at 4200 RPM). Many people have not experienced this first-hand. Lucky you! Pressing the SAVE button at 11 p.m. and literally waiting 20 minutes for the save operation (of a 100 KB XML file) to complete is not fun. That is, if things didn't crash.
The challenge with ZFS is that it was designed as an enterprise storage system. It comes from an era (development started in 2001) when there were no SSDs and HDDs held less than 100 GB. So in those days, if you needed e.g. 100,000 IOPS from your SAN (NAS as we know it today didn't really exist yet), you simply had to get enough HDDs (each delivering 200-300 IOPS) to reach that number. To make things a bit more fun, the redundancy required as a safety measure meant that you needed not just the 100,000 / 300 ≈ 334 disks implied by raw throughput, but the corresponding number of RAID-6 arrays/LUNs, or RAIDZ2 vdevs in the context of ZFS. So we are talking about 1000+ very expensive enterprise HDDs. This is the environment that ZFS was designed for. Yes, you could get cheaper storage systems from Sun. But in many cases it would be a six- or seven-figure sum.
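The disk-count arithmetic above can be sketched as a back-of-the-envelope calculation. This is purely my own illustration: the per-disk IOPS figure and the vdev width are assumptions (300 IOPS per disk from the range above, and a hypothetical 10-wide RAIDZ2 vdev of 8 data + 2 parity disks), as is the rule of thumb that a RAIDZ2 vdev delivers roughly one disk's worth of random IOPS, so random IOPS scale with vdevs rather than disks.

```python
import math

# Illustrative assumptions, not vendor figures:
TARGET_IOPS = 100_000   # what the SAN must deliver
IOPS_PER_DISK = 300     # one enterprise HDD, random I/O
VDEV_WIDTH = 10         # hypothetical RAIDZ2 vdev: 8 data + 2 parity disks

# Rule of thumb: for random I/O, one RAIDZ2 vdev performs roughly like
# a single disk, so we need one vdev per ~300 IOPS of the target.
vdevs_needed = math.ceil(TARGET_IOPS / IOPS_PER_DISK)
disks_needed = vdevs_needed * VDEV_WIDTH

print(vdevs_needed)  # 334
print(disks_needed)  # 3340
```

With these assumptions the redundancy requirement multiplies the raw disk count by the vdev width, which is how you end up well past the 1000-disk mark.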
Why am I telling you all this? Because the design makes certain assumptions about the underlying hardware. Of course, a lot has changed since then, and systems are far more stable and robust in many ways. But the installations that have undergone thorough vendor testing, and that are used in production by companies and serious(!) hobbyists, have one thing in common: they follow the hardware recommendations. So the battle-hardened knowledge about ZFS comes from systems that are conventional (one could even call them boring). Whenever you stray from that path, the risk of something bad happening increases. In other words, not all of the assumptions are necessarily double-checked. If you happen to violate one that is not, anything can happen.
This is why, when you use less RAM than the recommended minimum, there is a risk of losing data. The same goes for RAID controllers (or USB bridges), or anything else that limits ZFS's control over access to the disks. Of course there will also be performance degradation, but that is the lesser problem.
I hope this is useful to some folks. Feedback is more than welcome.