Cache Disk Removed Automatically

Status: Not open for further replies.

Papasmerf

Dabbler
Joined
Mar 28, 2014
Messages
15
I am sure there are going to be a lot of questions, but I am going to start out with the general setup.

This has happened on two different systems that I have. The hardware configuration is a little different between the two, but the symptoms are the same. Both systems run a 12-drive RAID 10 array, both have around 128GB of memory, and each array has one Intel SSD (100GB on one system, 250GB on the other) that I am using as a cache device. I created the zpools through the GUI and also added the SSD as a cache device through the GUI. When the ZFS status check runs and emails me the report, both systems show that a device has been removed. The device removed in both cases is the SSD cache disk. Both systems are running FreeNAS 9.3.

It may be important to note that the SSD cache disk is connected to a different onboard SATA controller, separate from the HBA that runs the rest of the drives that make up the array. Please let me know what screenshots and diagnostic outputs you need me to post.
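For reference, here is roughly what I am checking from the console, and how the cache device can be put back after it drops (pool name "tank" and device name "ada0" are placeholders; in practice FreeNAS references the disks by gptid):

Code:
# pool health; the cache device is listed in its own "cache"
# section with a state such as ONLINE or REMOVED
zpool status -v tank

# an L2ARC device holds no unique pool data, so a dropped cache
# device can simply be removed and re-added
zpool remove tank ada0
zpool add tank cache ada0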
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
What controller does the cache drive use? Also you need to read the rules for posting questions and provide detailed hardware specs.
 

Papasmerf

Dabbler
Joined
Mar 28, 2014
Messages
15
Since this is happening on two different systems with very different hardware, I was thinking this wasn't a hardware-specific problem.

System 1
Dell C2100
2x L5630
128GB Memory
Dell H700 Controller -> 12x 1TB Western Digital Black Drives
Intel SATA Controller -> 1x 100GB Intel S3700
2x Onboard 1Gb Broadcom NICs

System 2
HP DL180
2x E5620
128GB Memory
HP P410 Controller -> 12x 2TB HP (rebranded Seagate) Drives
HP SATA Controller -> 1x 250GB Intel S3700 (I believe it is a Silicon Image controller)
2x Intel 10Gb PCIe NICs
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
We do not suggest the use of hardware RAID controllers for attaching pool drives.
 

Papasmerf

Dabbler
Joined
Mar 28, 2014
Messages
15
In production environments I use an LSI HBA, not a RAID controller; this, however, is not a production environment.

This is a testing lab environment that I have built for a POC project. If I lose the data or something happens to it, no big loss. I am also not concerned with performance on either of these boxes.

I know that this is not a supported configuration; each of the 12 drives in each system is set up as an independent RAID 0 array. I just want to be clear that the RAID 10 I am talking about in the previous posts is not a hardware RAID configuration but a ZFS RAID configuration.
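For clarity, the layout on each box is a stripe of six mirrored pairs with the SSD attached as an L2ARC cache device. Roughly the command-line equivalent of what the GUI built (pool and device names here are placeholders):

Code:
# six two-way mirrors striped together -- "RAID 10" in ZFS terms
zpool create tank \
    mirror da0 da1 mirror da2 da3 mirror da4 da5 \
    mirror da6 da7 mirror da8 da9 mirror da10 da11

# the SSD goes in as a cache (L2ARC) device
zpool add tank cache ada0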
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Papasmerf said:
In production environments I use an LSI HBA, not a RAID controller; this, however, is not a production environment.

This is a testing lab environment that I have built for a POC project. If I lose the data or something happens to it, no big loss. I am also not concerned with performance on either of these boxes.

I know that this is not a supported configuration; each of the 12 drives in each system is set up as an independent RAID 0 array. I just want to be clear that the RAID 10 I am talking about in the previous posts is not a hardware RAID configuration but a ZFS RAID configuration.

Why the heck would you have a lab environment for testing that does not match production!? That defeats, almost in whole, the *reason* for a test environment. Your test environment should mirror what you use in production. Without following this basic philosophy all you are doing is testing your test machine before you test your production machine. There is no test *for* your production machine.
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
Papasmerf said:
This is a testing lab environment that I have built for a POC project.
Would that be Physical Optics Corporation?
 

Papasmerf

Dabbler
Joined
Mar 28, 2014
Messages
15
I would agree that you should always mirror the production setup, but in this case it comes down to budget, and this POC (Proof of Concept) has nothing to do with storage and is not storage-oriented in any way. The storage does not affect the outcome of this project, and as disclosed, performance has nothing to do with the end result of the POC. Let's just say that replicating production would require about six figures, which is not a viable option for this POC.
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
P*** Off Cyberjock? :)

'Cuz I'm not sure what other concepts there are to prove here....
If I could, I would've liked that twice :D
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Papasmerf said:
So I take it I am SOL with the original issue that I asked about?

Probably. Nobody else is seeing it, so it is likely something with your hardware. Best advice is to go and report what's getting logged when this happens. Could be a defective SSD.
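On FreeBSD/FreeNAS the places to look are along these lines (device name "ada0" is a placeholder for whatever the SSD shows up as):

Code:
# kernel ring buffer: look for detach, timeout, or CAM errors
# from the SATA layer around the time the device drops
dmesg | grep -i ada0

# the persistent system log, filtered the same way
grep -i ada0 /var/log/messages

# SMART overall health, plus the full attribute dump
smartctl -H /dev/ada0
smartctl -a /dev/ada0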
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Papasmerf said:
So I take it I am SOL with the original issue that I asked about?

Possible. I find it odd that you'd be seeing this identical issue on two different hardware configurations. If you were dropping spinning drives I'd be the first to point the finger at the RAID cards, but your SSDs shouldn't be any worse off on the Intel controller other than being capped to 3Gbps. I'd be more likely to find fault with the Silicon Image controller in the HP.

I'm using C2100s myself for this purpose; that said, I bought H200s for them.

The DL180 - well, I do hate the P410, but it's not in the data path.
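If the Silicon Image controller is the suspect, it's easy enough to confirm from the console which controller the SSD actually hangs off (illustrative commands, not output from these systems):

Code:
# list every disk along with the bus/controller it is attached to
camcontrol devlist

# list PCI devices with vendor strings; the SATA controllers
# (Intel AHCI vs. Silicon Image) show up here
pciconf -lv | grep -B 4 -i sata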
 