That - again - depends. But first things first.
To set that value simply run this command on your TrueNAS host:
iocage set memoryuse=16G:deny <jailname>
That sets and persists the limit. There is no GUI for that. You can see all values for your jail with:
iocage get all <jailname>
How much memory is available to a single process depends on per-process resource limits that are set at process creation and inherited from the parent process. You can check the maximum values (for root) inside your jail by opening a shell in the jail via SSH or
iocage console <jailname>
and then:
Code:
root@freenas[~]# iocage console cloud
[...]
root@cloud:~ # limit
cputime         unlimited
filesize        unlimited
datasize        33554432 kbytes
stacksize       524288 kbytes
coredumpsize    unlimited
memoryuse       unlimited
vmemoryuse      unlimited
descriptors     1883043
pseudoterminals unlimited
kqueues         unlimited
memorylocked    unlimited
maxproc         63694
sbsize          unlimited
swapsize        unlimited
A process can limit itself to smaller values and this is frequently done at service startup. The startup process limits itself, then "forks" a child process that inherits these limits and cannot raise them again. The child runs in the background serving e.g. HTTP requests or whatever.
For example you can set an upper memory limit of 2G in the Elasticsearch config file. The Elasticsearch startup routine will then limit the memory to said 2G before creating the child process that does "Elasticsearch things".
To set anything smaller than "unlimited" on a system-wide or per-user basis you can edit (inside the jail!) the file /etc/login.conf. After every change to that file you need to regenerate the capability database that the system actually reads instead of the text file:
cap_mkdb /etc/login.conf
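For illustration, an entry in /etc/login.conf for a custom login class might look like this. The class name "limited" and the values are made up for this example:

```
# hypothetical login class inside the jail -- values are illustrative
limited:\
	:datasize=1024M:\
	:memoryuse=2048M:\
	:tc=default:
```

You would then assign that class to a user (e.g. with pw usermod <user> -L limited) and run cap_mkdb /etc/login.conf again.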
Puzzled?
Yes, on a standard Unix system every process can gobble up all of the system's memory and there is nothing in place to prevent it from doing that. It's all about everyone playing nice. You can write a five-line program in C, start it, and your memory is gone ...