It died again tonight, this time in a slightly different way. It looks like it rebooted at around 7:45 am. This is the last output from the SSH console, showing top sorted by size:
last pid: 77855; load averages: 0.11, 0.10, 0.46 up 0+20:05:10 07:45:17
86 processes: 1 running, 85 sleeping
CPU: 0.0% user, 0.0% nice, 0.0% system, 0.1% interrupt, 99.9% idle
Mem: 9736K Active, 359M Inact, 735M Laundry, 59G Wired, 2304M Free
ARC: 55G Total, 2193M MFU, 52G MRU, 569M Anon, 232M Header, 100M Other
53G Compressed, 57G Uncompressed, 1.08:1 Ratio
Swap: 10G Total, 10G Free
PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
236 root 23 20 0 271M 219M kqread 8 10:06 0.09% python3.6
4233 root 1 20 0 186M 157M select 3 182:07 0.00% smbd
27272 root 25 20 0 184M 158M buf_ha 8 83:27 0.00% smbd
3524 root 15 24 0 183M 133M umtxn 5 0:34 0.00% uwsgi-3.6
54963 root 1 20 0 170M 151M select 6 0:00 0.00% smbd
3002 root 11 20 0 166M 125M nanslp 11 13:28 0.00% collectd
20726 root 1 20 0 165M 148M select 0 0:01 0.00% smbd
77763 root 1 22 0 163M 147M zio->i 3 0:00 0.00% smbd
77852 root 1 20 0 161M 145M select 10 0:00 0.00% smbd
77832 root 1 20 0 161M 145M select 6 0:00 0.00% smbd
77811 root 1 20 0 161M 145M select 0 0:00 0.00% smbd
77833 root 1 20 0 161M 145M select 9 0:00 0.00% smbd
77855 root 1 20 0 161M 145M select 9 0:00 0.00% smbd
2698 root 1 20 0 161M 145M select 10 0:04 0.00% smbd
2785 root 1 20 0 118M 103M select 0 0:00 0.00% smbd
2775 root 1 20 0 118M 102M select 10 0:42 0.00% smbd
2876 root 1 20 0 115M 100M kqread 0 0:07 0.01% uwsgi-3.6
2709 root 1 20 0 78372K 62868K vmpfw 2 0:00 0.00% winbindd
2703 root 1 20 0 77064K 61316K zio->i 10 0:02 0.00% winbindd
3750 root 1 20 0 65232K 59232K wait 7 0:02 0.00% python3.6
344 root 2 22 0 64740K 54404K usem 8 0:01 0.00% python3.6
2570 root 1 20 0 60008K 27672K uwait 7 0:43 0.12% dtrace
2568 root 1 20 0 60008K 27672K uwait 2 0:37 0.12% dtrace
2569 root 1 20 0 60008K 27672K uwait 7 0:36 0.06% dtrace
343 root 2 20 0 54528K 47764K piperd 11 0:01 0.00% python3.6
2439 root 5 20 0 46268K 31496K buf_ha 11 13:26 0.03% python3.6
2732 root 8 20 0 44480K 17556K select 0 2:16 0.01% rrdcached
2855 root 1 20 0 38852K 21528K select 11 0:00 0.00% winbindd
2856 root 1 20 0 37128K 21392K select 0 0:00 0.00% winbindd
2695 root 1 20 0 29412K 16448K select 1 0:07 0.03% nmbd
2008 root 2 20 0 20516K 9204K buf_ha 8 0:02 0.00% syslog-ng
2435 root 1 20 0 18864K 11468K select 11 0:11 0.00% snmpd
244 root 1 52 0 15532K 11404K piperd 5 0:00 0.00% python3.6
...
And this is the last set of HDD statistics from another SSH shell:
dT: 1.064s w: 1.000s
L(q) ops/s r/s kBps ms/r w/s kBps ms/w %busy Name
0 0 0 0 0.0 0 0 0.0 0.0| ada0
0 0 0 0 0.0 0 0 0.0 0.0| ada1
0 0 0 0 0.0 0 0 0.0 0.0| da0
0 0 0 0 0.0 0 0 0.0 0.0| da1
0 0 0 0 0.0 0 0 0.0 0.0| da2
0 0 0 0 0.0 0 0 0.0 0.0| da3
0 0 0 0 0.0 0 0 0.0 0.0| da4
0 0 0 0 0.0 0 0 0.0 0.0| da5
0 0 0 0 0.0 0 0 0.0 0.0| da6
0 0 0 0 0.0 0 0 0.0 0.0| da7
0 0 0 0 0.0 0 0 0.0 0.0| da8
0 0 0 0 0.0 0 0 0.0 0.0| da9
0 0 0 0 0.0 0 0 0.0 0.0| da10
0 0 0 0 0.0 0 0 0.0 0.0| da11
0 0 0 0 0.0 0 0 0.0 0.0| da12
0 0 0 0 0.0 0 0 0.0 0.0| da13
0 0 0 0 0.0 0 0 0.0 0.0| da14
0 0 0 0 0.0 0 0 0.0 0.0| da15
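Since a reboot wipes the scrollback, one thing I'm considering is capturing these readings to a file on disk instead of just watching them in a terminal, so the last samples before a crash survive. A minimal sketch, assuming FreeBSD's batch modes of the two tools above (`top -b -o size` and `gstat -pb`); the logfile paths and the 10-second interval are arbitrary examples:

```shell
#!/bin/sh
# Append a timestamped snapshot of a command's output to a logfile so the
# last readings survive a crash or reboot.
log_snapshot() {
    cmd=$1
    logfile=$2
    { date; eval "$cmd"; echo; } >> "$logfile"
}

# Example loop (FreeBSD): size-sorted top and one gstat batch pass every 10 s.
# while :; do
#     log_snapshot "top -b -o size 20" /var/log/top-watch.log
#     log_snapshot "gstat -pb"         /var/log/gstat-watch.log
#     sleep 10
# done
```

The logfile just needs to live somewhere that isn't the terminal; a dataset on the boot pool would do.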
Previously, when it died, it never rebooted; it just got completely stuck. You could still log in via SSH (enter username and password), but then nothing happened anymore: no prompt, no MOTD, nothing. Same on the local console: you could enter 9 for the shell in the console menu, but the prompt never appeared.
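One thing that might help pin down the hard hangs next time: enabling kernel crash dumps, so a panic leaves a vmcore behind to analyze. This is a generic FreeBSD sketch, not FreeNAS-specific advice; FreeNAS manages rc.conf itself, so on FreeNAS these would go in via the UI (System settings/tunables) rather than by editing the file directly. The values shown are the stock FreeBSD knobs:

```shell
# /etc/rc.conf (stock FreeBSD; on FreeNAS, set the equivalents via the UI)
dumpdev="AUTO"          # write kernel memory to the swap device on panic
savecore_enable="YES"   # extract the dump to /var/crash on the next boot
```

With 10G of swap against 64G of RAM a full dump may not fit, but a minidump (the FreeBSD default) usually does.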
Pool status:
pool: freenas-boot
state: ONLINE
scan: scrub repaired 0 in 0 days 00:00:06 with 0 errors on Thu Mar 21 03:45:06 2019
config:
NAME STATE READ WRITE CKSUM
freenas-boot ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0p2 ONLINE 0 0 0
ada1p2 ONLINE 0 0 0
errors: No known data errors
pool: pool0
state: ONLINE
scan: scrub repaired 0 in 0 days 22:26:45 with 0 errors on Sun Feb 24 22:27:07 2019
config:
NAME STATE READ WRITE CKSUM
pool0 ONLINE 0 0 0
raidz3-0 ONLINE 0 0 0
gptid/5cff0baf-f942-11e8-918e-ac1f6b4da51e ONLINE 0 0 0
gptid/5dd5b5f6-f942-11e8-918e-ac1f6b4da51e ONLINE 0 0 0
gptid/5ebad554-f942-11e8-918e-ac1f6b4da51e ONLINE 0 0 0
gptid/5fab3abe-f942-11e8-918e-ac1f6b4da51e ONLINE 0 0 0
gptid/608d7c8b-f942-11e8-918e-ac1f6b4da51e ONLINE 0 0 0
gptid/6173b0eb-f942-11e8-918e-ac1f6b4da51e ONLINE 0 0 0
gptid/625e999c-f942-11e8-918e-ac1f6b4da51e ONLINE 0 0 0
gptid/63597239-f942-11e8-918e-ac1f6b4da51e ONLINE 0 0 0
gptid/644c1e19-f942-11e8-918e-ac1f6b4da51e ONLINE 0 0 0
gptid/65462af9-f942-11e8-918e-ac1f6b4da51e ONLINE 0 0 0
gptid/66533eab-f942-11e8-918e-ac1f6b4da51e ONLINE 0 0 0
gptid/6760ec47-f942-11e8-918e-ac1f6b4da51e ONLINE 0 0 0
gptid/688715ca-f942-11e8-918e-ac1f6b4da51e ONLINE 0 0 0
gptid/69ab3979-f942-11e8-918e-ac1f6b4da51e ONLINE 0 0 0
gptid/6ae807c4-f942-11e8-918e-ac1f6b4da51e ONLINE 0 0 0
gptid/6f1736ea-fdd6-11e8-9145-ac1f6b4da51e ONLINE 0 0 0
errors: No known data errors
Not sure where to go from here... :-/