rungekutta, Contributor (joined May 11, 2016; 146 messages)
So I recently built and set up my system:
SuperMicro X11SSM-F
Intel i3 6100
Samsung 16GB DDR4 ECC 2133MHz
4x 3TB WD Red
2x 3TB Seagate NAS
The six drives are arranged in RAIDZ2
32GB SuperDOM for boot
FreeNAS version 9.10-STABLE.
The system has been rock-solid from the start and through burn-in, with the exception of when I try to use jails. Frankly, jails are not working well for me, and I'm a little unsure how to debug them.
The first time I had a problem was when I played around with setting up a Minecraft server. I found that starting and stopping jails would sometimes reboot the whole server.
More recently I tested setting up Plex in a jail, following the instructions from a thread in this forum and installing the Plex server from pkg. It worked ok... until I realised my SMB shares via the CIFS service no longer work from Mac OS X (10.11.5). Finder fails to connect to the server and then drops the whole server off the "Shared" sidebar. It takes a force-restart of Finder itself to recover, but then it consistently fails again. I didn't make the connection at first, but when I stop the Plex jail, everything snaps back to life and works as before.
The Plex jail has its own static IP, separate from the static IP of FreeNAS itself, to which the CIFS service is also explicitly bound. VIMAGE is enabled on the jail. Still, they clearly interfere with each other somehow.
The final icing on the cake arrived a few minutes later: after I had stopped the jail, the server decided to spontaneously reboot again. No clues in the logs as to what happened.
This is a relatively fresh install with not much config added yet, just a few datasets and a few users. The CIFS, NFS, SMART and SSH services are running.
I'm surprised jails can be this unstable on a fresh and almost vanilla install... unless I have some kind of hardware problem which ONLY manifests itself when running jails? I have otherwise hammered the machine pretty hard through burn-in with memtest, I/O via dd, and the sharing services, without a glitch.
Am I missing something obvious here...?
Jun 20 20:48:23 alaska devd: Executing '/etc/rc.d/dhclient quietstart igb1'
Jun 20 22:56:06 alaska kernel: epair0a: link state changed to DOWN
Jun 20 22:56:06 alaska kernel: epair0a: link state changed to DOWN
Jun 20 22:56:06 alaska kernel: epair0b: link state changed to DOWN
Jun 20 22:56:06 alaska kernel: epair0b: link state changed to DOWN
Jun 20 22:56:06 alaska kernel: igb1: link state changed to DOWN
Jun 20 22:56:06 alaska kernel: igb1: link state changed to DOWN
Jun 20 22:56:06 alaska kernel: bridge0: link state changed to DOWN
Jun 20 22:56:06 alaska kernel: bridge0: link state changed to DOWN
Jun 20 22:56:06 alaska kernel: igb1: promiscuous mode disabled
Jun 20 20:56:10 alaska devd: Executing '/etc/rc.d/dhclient quietstart igb1'
Jun 20 22:56:10 alaska kernel: igb1: link state changed to UP
Jun 20 22:56:10 alaska kernel: igb1: link state changed to UP
Jun 20 22:57:01 alaska manage.py: [common.pipesubr:61] Popen()ing: /sbin/zfs list -H -o name '/mnt/pool1/jails/.warden-template-standard'
Jun 20 22:57:01 alaska manage.py: [common.pipesubr:61] Popen()ing: /sbin/zfs get -H origin '/mnt/pool1/jails/plex'
Jun 20 22:57:01 alaska manage.py: [common.pipesubr:61] Popen()ing: /sbin/zfs list -H -o name '/mnt/pool1/jails/.warden-template-standard'
Jun 20 22:57:01 alaska manage.py: [common.pipesubr:61] Popen()ing: /sbin/zfs get -H origin '/mnt/pool1/jails/plex'
Jun 20 23:02:08 alaska syslog-ng[1510]: syslog-ng starting up; version='3.6.4'
Jun 20 23:02:08 alaska kernel: ifa_del_loopback_route: deletion failed: 48
Jun 20 23:02:08 alaska Freed UMA keg (udp_inpcb) was not empty (120 items). Lost 12 pages of memory.
Jun 20 23:02:08 alaska Freed UMA keg (udpcb) was not empty (1169 items). Lost 7 pages of memory.
Jun 20 23:02:08 alaska Freed UMA keg (tcptw) was not empty (540 items). Lost 12 pages of memory.
Jun 20 23:02:08 alaska Freed UMA keg (tcp_inpcb) was not empty (119 items). Lost 12 pages of memory.
Jun 20 23:02:08 alaska Freed UMA keg (tcpcb) was not empty (44 items). Lost 15 pages of memory.
Jun 20 23:02:08 alaska hhook_vnet_uninit: hhook_head type=1, id=1 cleanup required
Jun 20 23:02:08 alaska hhook_vnet_uninit: hhook_head type=1, id=0 cleanup required
Jun 20 23:02:08 alaska Fatal trap 12: page fault while in kernel mode
Jun 20 23:02:08 alaska cpuid = 0; apic id = 00
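Edit: in case it helps anyone following along, this is roughly what I've been checking while the jail is up. It's only a sketch: jls and sockstat are FreeBSD tools, and the names bridge0 and smbd come from my logs and setup, so yours may differ. Each check is guarded with command -v so the script degrades gracefully rather than erroring out.

```shell
#!/bin/sh
# Sketch of jail/networking state checks while the Plex jail is running.
# Names (bridge0, smbd) are from my setup; substitute your own.
checks=0

# Which jails are running, and with what addresses?
if command -v jls >/dev/null 2>&1; then
    jls -v
else
    echo "jls not available on this host"
fi
checks=$((checks + 1))

# The bridge that the jail system builds for VIMAGE jails; the
# epair0a/epair0b pair in the log above is one end inside the jail's
# vnet and one end attached to this bridge on the host side.
if command -v ifconfig >/dev/null 2>&1; then
    ifconfig bridge0 2>/dev/null || echo "no bridge0 right now"
else
    echo "ifconfig not available on this host"
fi
checks=$((checks + 1))

# What address is the CIFS service (smbd) actually listening on?
if command -v sockstat >/dev/null 2>&1; then
    sockstat -4 -l | grep -i smbd || echo "smbd not listening"
else
    echo "sockstat not available on this host"
fi
checks=$((checks + 1))

echo "ran $checks checks"
```

The idea is to confirm that the jail's address and the host's CIFS binding really are separate, and to see what the bridge and epair members look like just before a stop triggers the teardown in the log.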