CRC errors, empty dir trees, and a working Firefox Sync... for free

Status
Not open for further replies.

Florence

Dabbler
Joined
Oct 27, 2016
Messages
11
Hi all, very new to both FreeNAS and forums, but I have a bit of solitary Linux experience.

Love nothing more than configuring stuff and playing with hardware.

fav stuff:

LINUX
XEN
zoneminder
bitcoin
nagios
owncloud
asterisk
mythtv
XBMC


Pet hates:
Windows
and electric bills!


Stumbled on FreeNAS and thought I'd give it a go.

Still putting my first box together out of old bits and debugging its many problems. Found FreeNAS really eased the transition from Linux and ext to BSD and ZFS.

I have collected up discs out of all my boxes and ended up with 5*2TB Reds in Z1 as my main NFS, iSCSI, TFTP and media server volume. Not sure if it's a great idea, but I also have 3*2TB Purple surveillance drives in a 6TB stripe for CCTV and live TV (Myth), which is not so very important. And I have an 8TB Seagate Backup Plus USB3 external thing that I have made an encrypted ZFS volume, and have been trying to replicate my main 5*2TB Z1 to nightly.

Very, very pleased to have Firefox Sync working with ownCloud in a jail... against all probability (I think they stopped supporting it ages ago; I hadn't managed it since OC 8.1). For anyone interested:

Remove ~/.mozilla on the desktop.
Install FF28 from http://ftp.mozilla.org/pub/firefox/releases/ on the desktop.
Copy Mozilla Sync 1.4 (the only officially supported version, for OC6) from the apps.owncloud site to your .............../jails/customplugin_1/usr/pbi/owncloud-amd64/www/owncloud/apps/ and extract it.
Sync FF28 against your own server using your OC ID, sync URL and password.
Then use apt to update to the latest FF.
To my amazement...
IT WORKS!
Passwords/bookmarks/history/etc.
Will try to sync Android next... not so easy, if I remember.
No idea about FF for Windows... DON'T CARE ;-)
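The app-copy step above could be sketched as shell on the FreeNAS host. Note the pool and jail paths below are placeholders (my real path is elided above), and the tarball URL is whatever apps.owncloud serves for Mozilla Sync 1.4:

```shell
# Hypothetical sketch only: install the legacy mozilla_sync app into the
# ownCloud jail's apps directory. "/mnt/tank" is an example pool path.
OC_APPS="/mnt/tank/jails/customplugin_1/usr/pbi/owncloud-amd64/www/owncloud/apps"

# On the NAS you would then run something like:
#   fetch -o /tmp/mozilla_sync.tar.gz <Mozilla Sync 1.4 tarball URL>
#   tar -xzf /tmp/mozilla_sync.tar.gz -C "$OC_APPS"
echo "target apps dir: $OC_APPS"
```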

I have had one or two CRC errors (nothing in SMART), which appear to have stopped now that I have swapped leads, power cables and PCIe cards about a bit...

My main mystery at the moment is why, after replication (which reports up to date), my encrypted 8TB volume appears EMPTY! The first time was a heart-stopper. If detached and re-imported with key and passphrase, all files, snaps etc. are present and correct. But until I re-import the volume my logs are filled with:

Oct 30 02:23:15 nas collectd[16336]: statvfs(/mnt/backup/jails/.warden-template-pluginjail) failed: No such file or directory
Oct 30 02:23:15 nas collectd[16336]: statvfs(/mnt/backup/jails/.warden-template-standard) failed: No such file or directory
Oct 30 02:23:15 nas collectd[16336]: statvfs(/mnt/backup/jails/customplugin_1) failed: No such file or directory
etc etc etc

Can't find anything else in any logs.
Just a mystery of the empty volume...?

The files seem to disappear a few minutes after replication?

The volume is there and showing mounted; zpool status and zfs list both look normal, everything online.

ONLY PROBLEM! No files, just an empty directory tree, until I export and import the volume.
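For reference, this is roughly what I've been poking at (pool name "backup" taken from the collectd log lines; whether unmounted child datasets are actually the cause is just my guess — replication can leave received children unmounted even though the pool itself looks fine, which would look exactly like an empty tree):

```shell
# Check whether the replicated child datasets are actually mounted
# ("backup" is the pool name from the log lines above).
zfs list -r -o name,mounted,canmount,mountpoint backup

# If children show mounted=no, remount everything that can be mounted:
zfs mount -a
```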

Is there a way to increase log verbosity, as I can't find anything much?

Reading more online, I'm losing faith in my choice of Z1 and its ability to rebuild. After several years of living with 3*2TB and a separate 3*1TB, all ext on Linux md stripes only, I was looking for a bit of security.

Hoping some here might be able to offer some pointers:

Are 5*2TB Reds in Z1 anywhere near a reliable solution?

What happens to Purple AV drives when offered up to the ZFS/FreeNAS gods in a stripe?

Anyone know why a volume might be an empty directory tree until reattached?


Brgds

Florence

FreeNAS-9.10.1-U2
AMD FX4130 Quad-Core Processor
GIGABYTE 990FX
Memory: 16081M
5*2TB RED Z1, 3*2TB Purple stripe, 1* 8TB USB3
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I would suggest breaking your post up into different questions to post in the relevant help & support sections.

Meanwhile, with 8 2TB drives... if you can, you should consider reforming your array as an 8-way RAIDZ2. That will give you 12TB of storage with dual-disk redundancy. And there's no reason not to put your MythTV/security system onto the same big pool.

Then just don't back up the data you don't want to back up.

Unless you absolutely need 14TB.
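The capacity numbers are simple arithmetic (nothing ZFS-specific, and ignoring metadata/slop overhead):

```shell
# RAIDZ2 usable space is roughly (drives - 2 parity) * drive size.
DRIVES=8; PARITY=2; SIZE_TB=2
USABLE=$(( (DRIVES - PARITY) * SIZE_TB ))
echo "${USABLE}TB usable"   # -> 12TB usable
# vs. the current layout: 5-wide Z1 (8TB) + 3-wide stripe (6TB) = 14TB
```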
 

Florence

Dabbler
Joined
Oct 27, 2016
Messages
11
Wow, hi Stux, thanks a lot for your reply. I most certainly would have done, were it not for the unknown Purple THINGS? LOL. Five are 2TB Reds of almost the same version. Three, unfortunately, are Purple surveillance drives. I used to have them in a dedicated ZoneMinder box... they were just striped across with md and then formatted ext, and have been great for 6 months or so. What is it they say... if it ain't broke! Nah, there was a point... I wanted to centralize my storage and move ZoneMinder to two VMs living in my Xen pool.

One of the things I was very keen to find out about was mixing drive flavours.

In a time long ago... RAIDs fell apart sooner or later if you mixed drives or used consumer versions.

I have no clue how ZFS handles things at a low level yet. Would love to have your input and gain from your experience in such things. Would they mix OK? The Purple 6TB is just for CCTV of nothing really, and the wife's sat TV recordings of Coronation St and Cheshire housewives, totally expendable ;-) ... from MythTV, which again is of squash-able size. She never watches them again, and there must be 2TB.

Yes, I DO love your idea!

Will try to investigate the exact tech specs of these Purple HDDs, and see what they changed to make them £20 cheaper and "suitable for CCTV".

Thanks again
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
They're similar to Reds but more targeted at A/V, I think.

Anyway, either ZFS will handle them, or it'll let you know about it.

I.e. if they throw errors, you'll hear about it.
 

Florence

Dabbler
Joined
Oct 27, 2016
Messages
11
Had a few early errors, but that was one of the Reds, and it seems to have gone away after a swap-about of leads and stuff.

I have read about their lack of some fancy vibration cancelling that Reds have, but as they are not racked up with lots of others, I figured that was not a game-changer.

I benchmarked them when new (not properly, but a fair test):

Soft RAID (Linux md, ext4) in a plain stripe of three:

Purples read at about 150MB/s (dd to /dev/null on bare metal, base install of some Linux),
whereas the Reds in the same config showed lots, lots more than that (nearer 300 from memory?).

My worry was that ZFS might share the data load evenly across the HDDs, and the performance of my Reds would be reduced to that of the Purples, having to wait every time for the Purples to complete their share of the data transfer. I have zero in-depth knowledge of what ZFS does underneath.

I guess 8 spindles are going to be quicker than 5 whichever way?

Maybe not 8-Reds fast, but faster than 5.
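For what it's worth, my rough test and worry, sketched out (the dd command is roughly what I ran, with an example device name; the assumption that a stripe's streaming rate is paced by its slowest member is just my understanding, not ZFS gospel):

```shell
# Per-drive sequential read check (read-only; device name is an example):
#   dd if=/dev/ada0 of=/dev/null bs=1M count=2048

# Back-of-envelope for an 8-wide RAIDZ2 (6 data disks), assuming it is
# paced by the slowest members (the ~150MB/s Purples from my test):
PURPLE=150
echo "~$(( 6 * PURPLE )) MB/s streaming ceiling"   # -> ~900 MB/s
```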

YES, this idea is deffo growing on me ;-)
 