Please have a look at my security run output

Status
Not open for further replies.

CLSegraves

Explorer
Joined
Sep 13, 2013
Messages
84
I'll post this here since it's kind of a noob question. My NAS generated the following security run output last night:

Code:
NAS01.local kernel log messages:
> ada1 at ahcich3 bus 0 scbus3 target 0 lun 0
> ada1: <ST2000DM001-1CH164 CC27> s/n Z340AF2K detached
> Waiting (max 60 seconds) for system process `vnlru' to stop...done
> Waiting (max 60 seconds) for system process `bufdaemon' to stop...done
>
> Waiting (max 60 seconds) for system process `syncer' to stop...Syncing disks, vnodes remaining...2 0 0 0 done
> All buffers synced.
> GEOM_ELI: Device ada1p1.eli destroyed.
> GEOM_ELI: Detached ada1p1.eli on last close.
> (ada1:ahcich3:0:0:0): Periph destroyed
> ada1 at ahcich3 bus 0 scbus3 target 0 lun 0
> ada1: <ST2000DM001-1CH164 CC27> ATA-9 SATA 3.x device
> ada1: Serial Number Z340AF2K
> ada1: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
> ada1: Command Queueing enabled
> ada1: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
> ada1: quirks=0x1<4K>
> ada1: Previously was known as ad10
> Timecounter "TSC-low" frequency 1247193782 Hz quality 1000
> GEOM_ELI: Device ada0p1.eli created.
> GEOM_ELI: Device ada1p1.eli created.
> vboxdrv: fAsync=0 offMin=0x2e8 offMax=0xc74

-- End of security output --

and I'd appreciate it if someone would take a quick look at it and

1) tell me what's going on
2) tell me if there is anything I need to be concerned about/correct/etc.

Everything seems to be working fine.

Thanks,
Chris

edit: I should mention I momentarily pulled a drive yesterday to check a model number. That put the pool in a degraded state; I reinserted the drive and rebooted to clear the 'error'. Other than that, the NAS has not been touched since I reassembled it last week (I moved it from a big desktop case to a smaller hot-swap NAS case).
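For reference, a degraded-but-intact pool can usually be checked and cleared from the shell without a reboot. A minimal sketch, with tank as a stand-in for the real pool name:

Code:
zpool status -v     # show pool health and which member is degraded
zpool clear tank    # reset the error counters once the disk is back ('tank' is a placeholder)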
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
It's telling you that you pulled a drive and rebooted, and possibly that you changed a port or a cable (ada1 vs. ad10).

If the pool is fine and the light is green, you are good... this is just a heads-up.
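If you want to be sure it's the same physical disk on a possibly different port, the serial number from the log (Z340AF2K) can be cross-checked from the shell. A quick sketch using stock FreeBSD tools; the device names are the ones from the log above and may differ on other systems:

Code:
camcontrol devlist                       # list attached disks and the adaX name each one received
camcontrol identify ada1 | grep serial   # compare against Z340AF2K from the log
zpool status                             # confirm the pool reports ONLINE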
 

rogerh

Guru
Joined
Apr 18, 2014
Messages
1,111
Speaking as another noob: just because one has hot-swap hardware doesn't make it a good idea to remove a drive currently in use by ZFS (or any other file system, for that matter). It sounds remarkably risky to me. Just saying.
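If a drive really must come out of a live pool, telling ZFS first is much safer than just pulling it. A minimal sketch, where tank and ada1p1.eli are placeholders for the actual pool and member device:

Code:
zpool offline tank ada1p1.eli   # tell ZFS to stop using the member (pool and device names are placeholders)
camcontrol standby ada1         # spin the drive down before physically removing it
# ...swap or inspect the drive, then bring it back
# (on an encrypted pool, the .eli device must be re-attached with geli first):
zpool online tank ada1p1.eli
zpool status                    # watch for the resilver to complete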
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
It is a very high risk to hot swap drives using FreeNAS.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
We've had users lose data because:

1. They pulled the wrong disk.
2. The act of pulling the disk caused a voltage fluctuation that tripped off other disks in the pool.
3. Electrical problems that occur with some backplanes.
4. Some hardware supports hotswap, but the driver doesn't support hotswap. Therefore you might not have hotswap after all, and your first time finding out is when you trash your pool.

Unless you know with 100% certainty that your hardware and driver combination is compatible, it's something to be avoided. I always do cold swaps no matter what.
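On point 4, FreeBSD at least makes it easy to confirm that the disks are attached through the AHCI driver (which does support hot plug) rather than a legacy ATA mode; whether the backplane side behaves is another matter. A rough sketch:

Code:
dmesg | grep ahci    # lines like 'ada1 at ahcich3' (as in the log above) mean the AHCI driver is in use
camcontrol devlist   # confirm every expected disk is still attached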
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
cyberjock said:
> We've had users lose data because:
>
> 1. They pulled the wrong disk.
> 2. The act of pulling the disk caused a voltage fluctuation that tripped off other disks in the pool.
> 3. Electrical problems that occur with some backplanes.
> 4. Some hardware supports hotswap, but the driver doesn't support hotswap. Therefore you might not have hotswap after all, and your first time finding out is when you trash your pool.
>
> Unless you know with 100% certainty that your hardware and driver combination is compatible, it's something to be avoided. I always do cold swaps no matter what.

In particular, hot swapping is only possible if the host-side connector supports it. Most PSU SATA connectors don't, whereas most backplanes have some pins recessed and extra capacitors to handle the sudden increase (or decrease) in load and confine it to the drive's hot-swap circuitry.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
All the reasons cyberjock listed, plus I've had issues with even high-quality hot-swap bays where, on insertion, the SATA connector didn't line up perfectly and shorted out the power connector when the latching lever forced it into place. I lost the drive and the drive bay at the same time. This was on a government server system, not my FreeNAS system. It would have happened even if the power had been off; with the power on, however, there was the added risk of data loss. Had the system been powered off and then turned back on, at least the surge wouldn't have hit while the OS was running.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
> All interesting points. Guess I'll stick to cold swapping in the future.
That is the best practice, and there should be no exceptions. I once heard someone argue, "I have a high-availability server, so I can't shut it down." However, if that person truly had a high-availability setup, they would have redundant servers for the data, which would let them shut one down and replace the drive.

As for the story I told: that drive had been inserted and removed a few times by the same individual without incident. It just so happened that one time it didn't work as planned. Guess what: no more hot swaps at all, because it was a $10K server (government price, of course) that died.
 