Fascinating (anti-)little document about building a server


Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I'm posting this here mostly as a memory aid. http://jro.io/nas/

This guy wrote a very detailed document on how he set up his server, and it's mostly well-researched. I was going to ask him if he'd be willing to post it on the forums, since it's an interesting document to have, but I'd like to thoroughly check it first for little things that really should be fixed:
  • Use of auth=ntlm (from the Ubuntu wiki)
  • Scrub/SMART schedule threshold set to 35, despite wanting 15 days between runs
Once two or three people have looked it over and noted potential flaws, I'll contact the author and invite him to post it in the Resources section and fix the minor details along the way.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
I'm posting this here mostly as a memory aid. http://jro.io/nas/

This guy wrote a very detailed document on how he set up his server, and it's mostly well-researched. I was going to ask him if he'd be willing to post it on the forums, since it's an interesting document to have, but I'd like to thoroughly check it first for little things that really should be fixed:
  • Use of auth=ntlm (from the Ubuntu wiki)
  • Scrub/SMART schedule threshold set to 35, despite wanting 15 days between runs
Once two or three people have looked it over and noted potential flaws, I'll contact the author and invite him to post it in the Resources section and fix the minor details along the way.

  • I can't say for sure whether "sec=ntlm" came from the Ubuntu wiki, but that is a fairly prominent place where people pick up this configuration. As of Samba 4.5, users should either use sec=ntlmssp or just leave the option off.
  • ea support = yes and store dos attributes = yes are default parameters in FreeNAS.
  • Probably should add a caveat that the parameter "ea support = no" is not compatible with the streams_xattr VFS object, and is therefore not compatible with the "fruit" VFS object (important for Mac users); see the sketch below.
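For illustration only, the safe combination would look something like this as auxiliary parameters on a share (the share name and path are made up, and ea support / store dos attributes are just the FreeNAS defaults spelled out):

Code:
[media]
    path = /mnt/tank/media
    # FreeNAS defaults, shown only for clarity -- never set "ea support = no" here
    ea support = yes
    store dos attributes = yes
    # fruit has to be stacked with streams_xattr, and streams_xattr needs ea support
    vfs objects = fruit streams_xattr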
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
can't say for sure whether "sec=ntlm" came from the Ubuntu wiki
He says most of those instructions came from there, so it probably is.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
First good write-up I've seen on quietening a SuperMicro rack chassis

Neat thing was replacing the 3x80mm fan wall completely with 3x120mm fans!

I would've suggested using the wooden block at the base of the fan wall. I've found the bottom drives are the coolest and the top drives the hottest, so having the wooden block at the top might result in more heat being trapped up there (or not).

And he used my hybrid fan controller script ;)
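Very roughly, the script's job is to poll drive temperatures and push a fan duty cycle to the BMC. The sketch below is not the actual script, just the general shape of it, assuming a Supermicro X10-class board (raw IPMI fan commands) and smartmontools installed:

Code:
#!/bin/sh
# Simplified sketch only -- not the real hybrid fan controller script.
DISKS="da0 da1 da2"                  # hypothetical drive list

# Put the BMC in "Full" fan mode so manual duty cycles stick
ipmitool raw 0x30 0x45 0x01 0x01

# Find the hottest drive (SMART attribute 194)
max=0
for d in $DISKS; do
    t=$(smartctl -A /dev/$d | awk '/194 Temperature_Celsius/ {print $10}')
    [ "$t" -gt "$max" ] 2>/dev/null && max=$t
done

# Crude two-step response; the real thing ramps the duty cycle gradually
if [ "$max" -ge 38 ]; then duty=0x64; else duty=0x32; fi    # 100% vs 50%

# Zone 0x01 is usually the peripheral/HD fan zone on these boards
ipmitool raw 0x30 0x70 0x66 0x01 0x01 $duty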
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
He says most of those instructions came from there, so it probably is.

Whelp. Two points:
(1) I don't read reel gud
(2) Once upon a time I created an "Ubuntu One" account. I'll try to fix the Ubuntu wiki. Wiki-based documentation is only slightly better than no documentation.
 

melp

Explorer
Joined
Apr 4, 2014
Messages
55
Updated the following:
  • Switched to sec=ntlmssp in the fstab cifs connection string (sketch below)
  • Switched scrub threshold to "14" days
  • Added note about ea support = no messing with vfs_fruit
I'm still not sure if ea support and all its other attributes actually do anything worthwhile for me, so I may eventually remove those.
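For anyone following along, the updated client-side fstab entry looks roughly like this (server, share, paths, and the credentials file are placeholders rather than the exact line from the guide):

Code:
# /etc/fstab on the Linux client -- note sec=ntlmssp instead of sec=ntlm
//freenas/media  /mnt/media  cifs  credentials=/home/user/.smbcredentials,sec=ntlmssp,uid=1000,gid=1000,iocharset=utf8  0  0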

Here are the other updates I have planned:
  • Notes about server power consumption (it's about 200-250W)
  • More pictures of the fan wall construction
  • Update the rclone auth section (don't need to set up VNC, didn't realize there was a really solid headless auth option)
  • Add a note that ACD currently has rclone banned
I've got a ton of real-life shit over the next couple of weeks, but I'll keep making updates if and when people find other issues.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
This may not be the best place for the question, but I can't help wondering why rclone isn't just installed in a jail? Seems that building a VM introduces an additional, unneeded layer of abstraction.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Makes sense. So, since Amazon Cloud Drive has banned rclone, what's the fallback? I see on eBay what purport to be lifetime, unlimited Google Drive accounts starting as low as $5, but that sounds too good to be true.
 

melp

Explorer
Joined
Apr 4, 2014
Messages
55
You can get your own business Google account (which includes Google Drive) for $10/mo, so that's what I did. Technically, they advertise it as 1TB/user until you have >5 users (at $10/mo/user), but I only have one user on my account and way more than 1TB on there so far. I probably won't update the site to reflect that, as I don't want to get flagged for it, but all in all, the experience has been waaaay better on Google Drive than ACD. On ACD, I'd get several hundred transfer errors per day, particularly on large files after they finished transferring. On Google Drive, I get maybe one or two per week.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Thanks, I'll have to check it out. I'm seeing on the rclone forums some indication that encrypted data may violate Google's TOS--are you aware of that being an issue?
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
TANSTAAFL
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Just started playing with rclone and Google Drive myself, and it seems to be working well. $10/mo would be about twice the price of CrashPlan, but if it will be stable and run somewhere close to full bandwidth it could be worth it. Basic installation in a jail was trivial. Remote configuration was also very simple, rather simpler than https://rclone.org/remote_setup/ made it appear (maybe the process for Google Drive is different than for ACD, which is what was being demonstrated there).
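For anyone else trying it, the whole thing amounted to roughly the following (from memory, so treat the exact steps as approximate):

Code:
# inside the FreeBSD jail
pkg install rclone

# still in the jail (headless)
rclone config          # new remote, storage type "drive", answer "n" to auto config

# on any machine with a browser
rclone authorize "drive"
# ...then paste the token it prints back into the jail's config prompt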
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
TANSTAAFL
True, though nobody's talking about "free"--ACD was, IIRC, $60/year for "unlimited" storage and bandwidth. This is comparable to CrashPlan, which I think is $50/year for unlimited storage, but limited to a single device, and CrashPlan has its own issues. Cheap? Yes, very, but not "free."
 

melp

Explorer
Joined
Apr 4, 2014
Messages
55
Just started playing with rclone and Google Drive myself, and it seems to be working well. $10/mo would be about twice the price of CrashPlan, but if it will be stable and run somewhere close to full bandwidth it could be worth it. Basic installation in a jail was trivial. Remote configuration was also very simple, rather simpler than https://rclone.org/remote_setup/ made it appear (maybe the process for Google Drive is different than for ACD, which is what was being demonstrated there).
I've gotta update my site to remove the VNC stuff. When I went back through and set it up under Google Drive, I used the remote auth and it worked just fine. I was able to get 10MB/s+ consistently with Google Drive, so way faster than CrashPlan. The only limit I know of on Drive is a maximum of 2 files transferred per second, so if you're transferring a bunch of small files, it may take a long time.
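Put differently, a sync job ends up looking something like this (dataset and remote names are placeholders, and --tpslimit needs a reasonably recent rclone build):

Code:
# keep under Drive's ~2 transactions-per-second ceiling
rclone sync /mnt/tank/media gdrive:backup/media --transfers 4 --tpslimit 2 --bwlimit 10M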
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
@melp, thank you for the guide.

Since you wrote "select server-grade components wherever appropriate", and commented on the choices made, I'd like to ask a question about the number of drives.

WD says that NASware 3.0 products are good for up to 8 drives in an enclosure. You have 11 of them. And it is quite possible that your older 4 TB Red is NASware 2.0 only, which is said to be good for up to 5 drives in an enclosure. (Currently shipping WD40EFRX are NASware 3.0, but they were 2.0 until 6TB drives were introduced.)

After reading your capacity calculations, I want to upgrade my home system from 8x6TB to 11x10TB drives (WD100EFAX), so I'd welcome your thoughts on NASware 3.0's advertised capabilities.

Thank you in advance!
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Nobody has been able to prove that those numbers are anything more than marketing crap with no real data behind them.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
@melp, thank you for the guide.

Since you wrote "select server-grade components wherever appropriate", and commented on the choices made, I'd like to ask a question about the number of drives.

WD says that NASware 3.0 products are good for up to 8 drives in an enclosure. You have 11 of them. And it is quite possible that your older 4 TB Red is NASware 2.0 only, which is said to be good for up to 5 drives in an enclosure. (Currently shipping WD40EFRX are NASware 3.0, but they were 2.0 until 6TB drives were introduced.)

After reading your capacity calculations, I want to upgrade my home system from 8x6TB to 11x10TB drives (WD100EFAX), so I'd welcome your thoughts on NASware 3.0's advertised capabilities.

Thank you in advance!
Nonsense, forget about those numbers.

Sent from my Nexus 5X using Tapatalk
 