(It's a long one... sorry. When you're completely bored, take a few days to read. LOL.)
I'm running FreeNAS 9.2.1.7 (upgrade direct from 8.3.1) on a 1U rackmount. It works great here at my home office. So great that I log in once a week, look at the log, looks good.. I'm out.
Recently I've been thinking about disk failures and such, because I just have four regular 2TB SATA drives and this NAS runs 24x7x365 with no problems. On my first install I went through this guide:
http://www.engadget.com/2012/02/01/how-to-set-up-a-home-file-server-using-freenas/
Currently my ZFS pool is doing a scrub and looks like so:
  pool: Fatty
 state: ONLINE
  scan: scrub in progress since Sat Aug 16 09:22:07 2014
        316G scanned out of 1.57T at 167M/s, 2h12m to go
        0 repaired, 19.60% done
config:

        NAME                                          STATE     READ WRITE CKSUM
        Fatty                                         ONLINE       0     0     0
          gptid/f8cb5c38-d2e8-11e2-bf4f-003048d37fe6  ONLINE       0     0     0
          gptid/f938cc14-d2e8-11e2-bf4f-003048d37fe6  ONLINE       0     0     0
          gptid/dc0e919b-df75-11e2-9d9f-003048d37fe6  ONLINE       0     0     0
(I said I had 4 drives... one was a spare, and I just removed it.)
I *USED* to have a spare, but after reading around, there's no point in having one, correct? It doesn't do anything if a disk goes bad; FreeNAS doesn't automatically fail over to the spare drive and use it instead, right? If not, there's no point having one plugged in just sitting there.
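For anyone landing here with the same question: ZFS does understand hot spares at the command line, but whether a spare kicks in by itself depends on something actually watching the pool for faults (later FreeBSD releases ship zfsd for this; FreeNAS 9.2 does not run it). A minimal sketch of the spare commands, assuming a hypothetical spare disk at da3 (device and gptid names here are just placeholders based on my pool above):

```shell
# Attach a disk to the pool as a designated hot spare
# (da3 is a hypothetical device name).
zpool add Fatty spare da3

# autoreplace=on tells ZFS to automatically format and replace a new
# device inserted into the same physical slot as a removed one. By
# itself this is NOT automatic failover to the spare; that still needs
# a fault-management daemon watching the pool.
zpool set autoreplace=on Fatty

# Manual failover: swap the spare in for a failed member by hand.
zpool replace Fatty gptid/f8cb5c38-d2e8-11e2-bf4f-003048d37fe6 da3
```

So a plugged-in spare isn't useless, but on this version it only saves you the drive-swap step; you still have to run the replace yourself.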
Currently my total size is 5.4T, so that means my config is JBOD... ZFS Stripe (I think... I can't remember what I selected). If that's the case, I have zero redundancy, correct? I assumed from the link I posted (excerpt below) that the hot spare was a "save my ass" thing just in case a disk went bad.
"In our case, we'll go with ZFS Stripe. If you have a better disk setup than us -- say, three 1TB drives -- you'll want to choose RAID-Z or ZFS Stripe with two drives and configure the third drive as a spare in the ZFS Extra settings."
I removed the hot spare just now because I read it doesn't do anything other than generate wear and tear for a feature that isn't enabled.
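In case it helps anyone reading this later: you can tell the layout from the status output itself, and there is no in-place conversion from a stripe to RAID-Z. A sketch of the command-line path, assuming hypothetical device names da0-da3 (the FreeNAS GUI volume manager is the normal way to do this; everything on the pool is destroyed, so back up first):

```shell
# In a striped pool the disks appear directly under the pool name with
# no "raidz" or "mirror" vdev line above them; that layout has zero
# redundancy, so one dead disk loses the whole pool.
zpool status Fatty

# Converting to RAID-Z means backing up, destroying, and recreating.
# With four 2TB disks, raidz gives roughly 6TB usable and survives
# one disk failure (device names below are hypothetical).
zpool destroy Fatty
zpool create Fatty raidz da0 da1 da2 da3
```

Had I picked RAID-Z back at install time, the spare question would barely matter for a home box; with a stripe, a spare can't save anything anyway because there's no redundancy to rebuild from.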