5th Boot Drive Destroyed by FreeNAS What am I doing wrong

Status
Not open for further replies.

djlax152

Dabbler
Joined
Mar 15, 2012
Messages
48
Hello all, not sure if I have the right category for this, but I wanted to post something regarding my boot devices always eventually becoming corrupt. I am a light FreeNAS user with some basic knowledge; I have used FreeNAS for about 6 years and periodically keep up with the updates.

I have this nagging problem that is really irritating and is scaring me a little. Over the past 6 years my boot devices have repeatedly become corrupt, usually after a reboot. I can't figure out why this is happening and am dumbfounded trying to find a solution. With the latest corruption I received an error (device destroyed). I don't have a record of all the errors that were spit out over the years, although I can say that each time I researched whatever error came back, it was always related to the boot device in some way, and when I connect the boot device to other computers there is always some kind of message stating that the drive is screwed up.

1st install (didn't know what I was doing) - SATA to a spinning 2.5" WD drive
2nd install - SATA to CF card adapter - a miserable failure; the CF card went corrupt after just a few months
3rd try - SATA to SSD - eventually the SSD went corrupt and wouldn't boot
4th try - USB jump drive - eventually went corrupt
5th try - high-quality USB drive - eventually went corrupt (today)

So for my 6th attempt I have no idea what to do. What am I doing wrong? I am not using cheap parts, and I can't understand what is taxing the boot device so much that no matter what I connect, or how I connect it, the boot device always burns out and doesn't seem to last very long. Any thoughts? Thank God importing my volume has worked flawlessly each time (kudos to the FreeNAS team for that!), but I'm worried I may not be so lucky next time and I will lose everything!

Thank you

AsRock Rack E3C224D41-14S
Intel XEON E3-1220v3 3.1GHz 8MB Cache LGA1150
Intel SSD 530 120GB Cache
6 x 2TB WD Blacks
 
Last edited by a moderator:

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
Please don't worry over losing your data because of a boot drive failure.
The complete separation of the data pool from the OS is just one of
many solid benefits of FreeNAS. :cool:
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
@BigDave is absolutely correct. I would recommend using an SSD as your boot device, as these provide very good longevity, and I have yet to hear of anyone saying that their SSD boot device stopped working. You don't need an expensive SSD; anything from 16GB to 128GB that's cheap will do. I've seen some pretty darn cheap SSDs from some very no-name brands. They have older, slower controller chipsets, but they are still lightning fast compared to a USB flash drive. If you want to stay with a name brand then look for something on sale. I like Silicon Power and Adata as my cheap lineup, and then I prefer Samsung EVO drives, of course. I have never tried Kingdian versions, but they are super cheap and, like I said, use the older and slower chipsets, hence why they are cheaper. A Kingdian 60GB is about $35 USD, a 32GB about $23 USD. I'm going to buy one for the heck of it to see how well they really work, maybe when I build my next server.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
@BigDave is absolutely correct. I would recommend using an SSD as your boot device, as these provide very good longevity, and I have yet to hear of anyone saying that their SSD boot device stopped working. ...
The OP just said that he had an SSD boot drive failure.

Sent from my SAMSUNG-SGH-I537 using Tapatalk
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
The OP just said that he had an SSD boot drive failure.
I didn't see that.

Well, if this is the case, then I would suggest there is something wrong with the hardware.

@djlax152 please provide a detailed listing of your hardware. Also I would recommend that you perform the customary burn-in tests again: Memtest86 and a CPU stress test. Ensure your hardware is working fine.

Also, you didn't specify what version of FreeNAS you are running; please specify, and whether you selected Legacy or UEFI as the boot option. Note: the Legacy option has been the most reliable for getting the system up and running.
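For what it's worth, it can also help to pull the SMART data off the boot device before it fails completely, so you have something concrete to look at. A minimal check from the FreeNAS shell, assuming the boot SSD shows up as ada0 (adjust the device name for your system):

Code:
smartctl -a /dev/ada0        # full SMART report for the boot device
smartctl -t short /dev/ada0  # start a short self-test; run -a again a few minutes later for the result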
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
And location of system dataset.
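If you're not sure where it is, it's shown under System -> System Dataset in the GUI, or from a shell something like this should reveal which pool the .system dataset currently lives on:

Code:
zfs list -o name,used | grep '\.system'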
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
The OP just said that he had an SSD boot drive failure.
I read that part of the original post, but stand firm on my recommendation.

After 6 years, if you take out the SSD from the list of failures, I'm not really that shocked over the list
of other devices that have given out.

@joeschmuck is right about looking at the hardware though, there could be heat/power/other issues going on.

More details would be nice, my curiosity is piqued now...
 

djlax152

Dabbler
Joined
Mar 15, 2012
Messages
48
Thanks guys for all your replies. I took your advice, BigDave, and put a Samsung SSD on SATA, directly attached to the mobo. My System Dataset is located on my main volume, protected by RAID. Is that a bad idea? When I first set it up I was under the impression that was ideal. Again, thanks for all your help, and hopefully I can have this boot device running for a long time. Oh, and I am on 11.0 now, but previously I was on 9.10.
 

djlax152

Dabbler
Joined
Mar 15, 2012
Messages
48
I didn't see that.

Well, if this is the case, then I would suggest there is something wrong with the hardware.

@djlax152 please provide a detailed listing of your hardware. Also I would recommend that you perform the customary burn-in tests again: Memtest86 and a CPU stress test. Ensure your hardware is working fine.

Also, you didn't specify what version of FreeNAS you are running; please specify, and whether you selected Legacy or UEFI as the boot option. Note: the Legacy option has been the most reliable for getting the system up and running.


AsRock Rack E3C224D41-14S
Intel XEON E3-1220v3 3.1GHz 8MB Cache LGA1150
Intel SSD 530 120GB Cache
6 x 2TB WD Blacks
32GB of ECC RAM (can't remember the brand, but it's on the mobo compatibility list)

Booting from BIOS
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
Thanks guys for all your replies. I took your advice, BigDave, and put a Samsung SSD on SATA, directly attached to the mobo. My System Dataset is located on my main volume, protected by RAID. Is that a bad idea? When I first set it up I was under the impression that was ideal.
Locating the System Dataset on your data pool is considered by most users to be ideal. I however dance to the beat of a different drummer when it comes to the location of the System Dataset. I will explain my thoughts about this below...

It is my understanding that as of version 9.3, the boot pool has changed to the ZFS file system and no longer
runs entirely in RAM as it has in the past. This means that there is now an increased amount of reading and
writing going on while FreeNAS is running, even at idle.
Under the Reporting tab of the GUI you can see the default activity constantly writing (shown below).

reporting_tab.JPG
Notice that MY system dataset is kept on my boot pool, which is an SLC enterprise-quality SSD.

Now here is my reasoning for having this constant write activity take place on my boot device, rather than
my data pool. My data pool is made up of six very expensive 4TB WD Red NAS hard drives, while the boot
device is a $40 USED 32GB Intel X25E SSD. My choice is obviously to have the wear and tear be on the
device that costs $40, rather than have that write activity on all SIX AT ONCE :eek: of my hard disks that
cost about $140.00 each!!!
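If you want to see that write activity for yourself outside of the Reporting tab, a quick way from the shell is to watch the boot pool's I/O counters (this assumes the default boot pool name of freenas-boot):

Code:
zpool iostat -v freenas-boot 5   # per-device read/write stats, refreshed every 5 seconds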

I have no desire to start a debate about the pros and cons of all this, it's just an explanation of my personal
practices :p

EDIT: grammar/spelling
 
Last edited:

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Locating the System Dataset on your data pool is considered by most users to be ideal. I however dance to the beat of a different drummer when it comes to the location of the System Dataset. ...
I have my system dataset on the boot pool also. I just use Spinning Disk for the system dataset and boot pool

Sent from my SAMSUNG-SGH-I537 using Tapatalk
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I have no desire to start a debate about the pros and cons of all this, it's just an explanation of my personal
practices :p
I think it's good to allow everyone the choice of how they run their system(s). And I'm not agreeing or disagreeing here with how @BigDave configures his system. I too do many things my own way. It's up to me to take in all the information and choose what I feel is best for me. How else do you learn if you are right or wrong if you don't put it out there?

I have my system dataset on the boot pool also. I just use Spinning Disk for the system dataset and boot pool

Sent from my SAMSUNG-SGH-I537 using Tapatalk
So I gather you use a spinning disk for your boot pool and system dataset. Not sure if you had a typo here or not.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
So I gather you use a spinning disk for your boot pool and system dataset. Not sure if you had a typo here or not.

Voice recognition on a cell phone is not always what you'd like it to be and my proofreading skills are not always perfect either.

Edit: I have a pair of spinners in a mirror for my boot pool and I use the same pool for my system dataset. I have considered moving my swap to that pool also to avoid the problem of the system crashing when a storage disk is removed or catastrophically fails.
 
Last edited:

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I have considered moving my swap to that pool also to avoid the problem of the system crashing when a storage disk is removed or catastrophically fails.
Well, if you offline the disk, the system will swapoff any swap on that disk, so that wouldn't cause a problem. But yes, catastrophic failure of the disk could cause a problem. I believe mirrored swap is supposed to be coming in 11.1, which should pretty well eliminate this problem.
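If you want to check which devices are currently hosting swap and how much of it is in use, for example:

Code:
swapinfo -h   # list swap devices with human-readable sizes and current usage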
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I believe mirrored swap is supposed to be coming in 11.1, which should pretty well eliminate this problem.
I guess I will wait and see. I had also contemplated getting an SSD to use for the swap and turning swap completely off for the storage drives.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
The old advice was that "if your system is using swap, it's under-resourced." I don't think that's the case any more--whether that's something that's happened between FreeBSD 8 and 11, something in the tunables, something related to the ZFS boot pool, or something else I haven't guessed. But my fairly-lightly-utilized server with 128 GB of RAM is still using a little bit of swap.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
The old advice was that "if your system is using swap, it's under-resourced." I don't think that's the case any more--whether that's something that's happened between FreeBSD 8 and 11, something in the tunables, something related to the ZFS boot pool, or something else I haven't guessed. But my fairly-lightly-utilized server with 128 GB of RAM is still using a little bit of swap.
My systems never hit swap at all before the upgrade to FreeNAS 11, so something certainly changed. Also, I have one system with 16GB of memory and the other with 32GB of memory, and both of them end up with about the same amount of swap being used after running for a week or two. Since they were (until recently) identical, except that the 32GB system ran Plex and the other did nothing except provide a Samba share, I have a hard time seeing how either of them should be using swap. Then there is the fact that before upgrading to FreeNAS 11 neither of them used swap at all, yet their usage has not changed. It is curious.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
But my fairly-lightly-utilized server with 128 GB of RAM is still using a little bit of swap.

I think it's related to inactive memory increasing, which is exacerbated by processes with random-ish memory allocations, like rsync across a file system or to an iSCSI drive. Eventually the inactive RAM exceeds the cushion of free memory tuned into the ARC, and then you swap. Simply put, the bug in FreeBSD is that inactive memory isn't being released early enough.
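If anyone wants to watch that happen, the counters are visible via sysctl; a rough way to eyeball inactive vs. free memory (these are page counts, so multiply by the page size for bytes):

Code:
sysctl hw.pagesize vm.stats.vm.v_inactive_count vm.stats.vm.v_free_count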
 