BUILD Newbie - migrating from Windows. Strategies and hardware.

Status
Not open for further replies.

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Hello Forum!

Writing this post has been a work in progress throughout my research on doing a "done right" setup while using my current hardware. This text is me sharing my thoughts and muddling through into the FreeNAS world while managing a migration from an existing Windows-based solution, without the ability to "build an extra NAS" but rather converting present hardware, in conjunction with new vital parts, into a "done right" FreeNAS box. So this is intended as fun reading/sharing the newbie experience, an extended introduction with questions at the end. Enjoy it all, or "TL;DR: questions at the bottom".

The short story to why I ended up here:
-NTFS has gotten the best of me.

So while the official FreeNAS videos on YouTube were anything but appealing to me, I set out to watch numerous other contributors. After a while I realized that a crapload of "internet experts" were giving quite contradictory advice on hardware requirements (in particular based around the appeal of using scrap parts to build a cheap fileserver box). This led me to checklist the rollercoaster ride to my current level of insight into FreeNAS:
1. Getting bored by official FreeNAS videos. Waay too sleepy and dense for a complete newbie/home user.
2. Realizing most YouTubers' advice cannot be trusted whatsoever. Everyone is an expert.
3. Changing approach to watching talks given on ZFS. Way more interesting and appealing.
4. Finding out ECC memory is a must. That put my old fileserver gear out the window.
5. Reading the manual cover to cover. Twice. Not understanding most of it, yet hoping for some information to stick in my memory for further reference.
6. Realizing the significant differences in how storage can be expanded/reduced compared to solutions I've used.
7. Oh shit, my Windows logic does not apply here whatsoever.
8. The Mini-ITX ASRock C2550D4I is available at a local store. WIIHAA!
9. Realizing I just bought an LSI 9201-16i that would then be useless :'(
10. Only at this stage do I really get into the forum and start reading posts. (No idea how I missed the forums.)
11. I find the awesome, helpful posts on hardware recommendations. Cannot believe it took this long down the line. So much research in erroneous directions could have been avoided.
12. Needing further expandability in PCIe slots (extra HBA/NIC), so looking at X10 motherboards. Turns out the cost for a low-end CPU+board is roughly comparable to the C2550D4I solution.
13. Finding out X11 motherboards are the most recent. A few key posts explain the issues with Skylake. Realizing this forces another SSD or two to be used as boot drives, due to the lack of XHCI drivers in the current FreeNAS version.
14. Doing PSU calculations and research on PSU efficiency. Very interesting.
15. Realizing a migration really must be planned in conjunction with a pool expansion strategy over the next few years. Getting to work crunching numbers.
16. Feeling confident enough to make a first post.

Ambition: Building a "done right" setup with smart priorities to fit my available recyclable resources and use.
User skill: Advanced home windows user (aka no sysadmin experience). No FreeBSD/Linux Experience.

Use case:
- The data is more or less "put it in the tank, don't modify it, read it occasionally".
- 2 Windows machines/users (workstation / utility server).
- Windows backups.
- Primarily CIFS/Samba.
- 1 Gbit network will be used.
- Jails: I'd like to try out some of the plugins (ownCloud, Syncthing or something like that). I'm aiming to get OpenVPN to work in a jail too.

Case:
Will be DIY, using MDF lying around, including a bunch of fans. (I'll share the results & pics when it is all completed. If my ideas work out as intended, there might be others interested in this DIY solution!)

The storage configuration.
This is a bit tricky, since most users appear to have enough drives in consistent sizes available (or ready to purchase). Far from the mishmash that I have.

Some points to take into account that serve as a checklist/decision-making reminder:
- vdevs cannot change their number of drives. Thus, think far ahead.
- zpools can stripe several vdevs, preferably with an equal number of drives in each vdev.
- Different-size drives can be used within each vdev; the smallest and/or slowest drive determines the size/performance of the vdev.
- Once a vdev is added, it cannot be removed from the zpool.
- A zpool can grow in 2 ways: replacing smaller disks with larger ones, or adding a (preferably) equal number of HDDs in a new vdev added to the pool.
- Multiple zpools can be used.
- Bidule0hm's calculator https://jsfiddle.net/Biduleohm/paq5u7z5/1/embedded/result/ gives an idea of usable space.

I've been going over different RAIDZ2 configurations using Bidule0hm's calculator, to get a taste of what usable storage comes with different solutions. I found this very helpful, since ZFS+RAIDZ2 indeed consumes quite a lot more storage from each purchased drive than I'm used to from NTFS/JBOD.
Doing some research and fiddling around with numbers, I actually got a lot of my initial questions regarding storage strategy answered. I'll share this for anyone interested. Here's how that reasoning turned out in numbers:
I'm willing to purchase additional drives in order not to lock myself into an upgrade frenzy shortly.
Here's how I've approached the problem.

I first set out to explore the number of drives in each typical vdev and the resulting usable space.
A brief overview from Bidule0hm's calculator gives:
1x 6-drive RAIDZ2 on 6 TB drives: 18.8 TB usable.
1x 7-drive RAIDZ2 on 6 TB drives: 23.62 TB usable.

I've envisioned myself having around 20 TB of usable space (which also complies with a cyberjock recommendation, and is well beyond "3x what you think you need for the next couple of years") :D
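For anyone who wants to play with the numbers without the calculator, the rough math can be sketched in a few lines of Python. Note the ~13% overhead figure below is my own guess chosen to roughly line up with Bidule0hm's calculator output; it is not an official ZFS number:

```python
def raidz2_usable_tib(drives: int, size_tb: float,
                      overhead: float = 0.13) -> float:
    """Rough usable space for a single RAIDZ2 vdev, in TiB.

    Assumptions (for illustration only):
    - two drives' worth of capacity goes to parity
    - 'overhead' lumps together ZFS metadata, slop space and
      partitioning overhead (~13% is a guess that roughly matches
      the calculator's results, not an exact figure)
    """
    data_tb = (drives - 2) * size_tb      # capacity left after RAIDZ2 parity
    data_tib = data_tb * 1e12 / 2**40     # decimal TB -> binary TiB
    return data_tib * (1 - overhead)

# 6x 6 TB and 7x 6 TB RAIDZ2; compare with the calculator's 18.8 / 23.62
print(round(raidz2_usable_tib(6, 6), 1))
print(round(raidz2_usable_tib(7, 6), 1))
```

The takeaway is how steep the cost of the two parity drives is in small vdevs: going from 6 to 7 drives adds a full drive's worth of usable space, since parity stays fixed at two drives.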

How to migrate into the FreeNAS box and expand?
Since I've no place to dump all my current data, I must carefully think through the process. Two main ideas: either start with the smaller drives, or start with the larger drives.

I’ve a couple of questions regarding CPU/RAM/Motherboard:
CPU: Here's where I think it is wise to save money in my scenario. An E3-1230v5 is 4.5x the price of a G4400. I'm definitely leaning towards the G4400, if it is not instantly deemed completely out of the question for some reason I've not found yet? (Encryption has been indicated as not being part of FreeNAS 10, which means the AES support won't matter?)

RAM: I would like to start with 16 GB of RAM. I guess it is probably on the thin side, but remember the use case of the box. Please chime in, since I've no experience whatsoever here.

I like the X11SSL-F-B. It seems to be a cost efficient board while featuring IPMI, Dual i210 GbE, and a good portion of expandability. Any comments on that choice?

Edit: Bonus question:
- I've read all over the place that 80% is about as much as you'd want to fill a pool before ZFS goes nuts.
I guess that part of that recommendation has to do with the CoW functionality. Now, in my use case I won't have a lot of edits, but mostly reads of the data. Does the 80% "rule" apply as 'firmly' to this type of "last location storage tank" use?
Cheers
 
Last edited:

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
I made my own case too, there's images in a thread somewhere but I can't find it at the moment <-- edit: found it: https://forums.freenas.org/index.ph...-ipmiutil-or-feeipmi.18377/page-6#post-178419 (there's a button to hide the pictures at the bottom of the post; it's not the case in its final state, there are aluminium corners on every vertex so it's a lot prettier, and the MB was a desktop MB as a placeholder) :)

For the 80% rule: ZFS switches from speed optimization to space optimization at 90%, and you really don't want to fill the pool to 100% (big trouble; as it's a CoW FS you need some space to even delete files...). FreeNAS warns you at 80% so you have time to do something before hitting 90%, but if you know what you're doing you can fill it up to 90% without problem ;)
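To put numbers on those thresholds, here's a tiny hypothetical helper (the `fill_report` function is just for illustration, using the roughly 20 TB target mentioned earlier in the thread as an example):

```python
def fill_report(usable_tib: float, used_tib: float) -> dict:
    """How far a pool is from the 80% warning and 90% slowdown marks."""
    pct = used_tib / usable_tib * 100
    return {
        "percent_used": round(pct, 1),
        # space you can still add before FreeNAS starts warning (80%)
        "tib_until_warning": round(max(0.0, usable_tib * 0.80 - used_tib), 2),
        # space left before ZFS switches to space optimization (90%)
        "tib_until_slow": round(max(0.0, usable_tib * 0.90 - used_tib), 2),
    }

# e.g. 14 TiB of data on a 20 TiB pool
print(fill_report(20.0, 14.0))
```

On a 20 TiB pool that leaves only 2 TiB of headroom before the warning, which is worth keeping in mind when sizing vdevs.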
 
Last edited:

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
I made my own case too, there's images in a thread somewhere but I can't find it at the moment :)

For the 80% rule: ZFS switches from speed optimization to space optimization at 90%, and you really don't want to fill the pool to 100% (big trouble; as it's a CoW FS you need some space to even delete files...). FreeNAS warns you at 80% so you have time to do something before hitting 90%, but if you know what you're doing you can fill it up to 90% without problem ;)

Great, thanks. This helps a lot.

I'm looking to not only suspend the drive cage, but suspend each individual drive. From a sound perspective, I've come to doubt it is really that beneficial. Thoughts?
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
I just edited to put the link, I finally found the post ;)

The drive cage (and the real MB... yes, it's a µATX MB, but I planned the case to be able to take an ATX MB just in case...) looks like this:

[attached image: b_w_852bf0162b3ad2237d5bee1c78ddea59.jpeg]
 
Last edited:

mattbbpl

Patron
Joined
May 30, 2015
Messages
237
Now, in my use case I won't have a lot of edits, but mostly reads of the data. Does the 80% "rule" apply as 'firmly' to this type of "last location storage tank" use?
Turn off atime when you set up your storage. It won't alleviate the 80%/90% rule, but it will help by decreasing writes and fragmentation.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
@Bidule0hm: Now that's an HDD cage suspension =D
Would you estimate any further improvement with regard to sound level being attainable by individually suspending HDDs?

@mattbbpl: I searched the manual and found the following description:
"controls whether the access time for files is updated when they are read; setting this property to Off avoids producing log traffic when reading files and can result in significant performance gains"
What are the 'side effects' of this option? I cannot really tell what difference this would make (at least I've never encountered a situation on Windows where file reads were logged, or where that log was of any use).

Cheers
 
Last edited:

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
I'm looking to not only suspend the drive cage, but suspend each individual drive. From a sound perspective, I've come to doubt it is really that beneficial. Thoughts?

The drives should be held firmly in place if you care about them. The problem with that is that the case amplifies the vibrations, so we tend to put rubber or other things between the drives and the case, but it's not a very good solution.

Personally I wanted a very, very quiet server, so if the drives die a few months earlier than they should, it's not the end of the world for me (and for now I can't see any negative effect on the drives' life in comparison to a traditional mounting). But I've done it the clever way: I'm not isolating drive by drive but all the drives at the same time. This means I can use the weight of the other drives to stop a drive from vibrating (as if it was firmly mounted, more or less) thanks to the inertia :)

It's not perfect, but it works pretty well as far as I can see. (NB: the design in the picture is not exactly what I designed at first; if you look carefully you can see there are steel cables, but in the first place I wanted to use springs only. I don't have the right type of springs on hand (still need to order them actually...) so I've used this solution in the meantime.) ;)

Also I mounted one half of the drives in the opposite direction of the other half, that way the net torque during spin-up is more or less zero so the inner cage doesn't want to turn on itself.

I don't think you can gain much more on the sound level with individual suspension, plus it would be detrimental to the drives' lifetime.

Improvements to my design would be a bigger-area air filter (less air resistance + less frequent cleaning; using a V or W shape is an easy way to get a large area while keeping the filter's height and width the same) and some foam or other sound absorbent, plus some vanes at a 45° angle in the top and bottom air plenums to let air pass but absorb and reflect the sound back. I plan to make these improvements the next time I disassemble the top plenum (I need to add more fans and modify the way I connect them to the MB anyway) ;)
 
Last edited:

mattbbpl

Patron
Joined
May 30, 2015
Messages
237
@mattbbpl: I searched the manual and found the following description:
"controls whether the access time for files is updated when they are read; setting this property to Off avoids producing log traffic when reading files and can result in significant performance gains"
What are the 'side effects' of this option? I cannot really tell what difference this would make (at least I've never encountered a situation on Windows where file reads were logged, or where that log was of any use).

Cheers / Dice
To be blunt, I don't believe there are significant "side effects". I mean, you won't know the last time a file was accessed, but you probably don't care. In the meantime, you'll decrease IO and put less strain on your filesystem. The side effects are so light compared to the benefits that a lot of guru-level members on this board, such as jgreco, have advocated that noatime be the default.

For what it's worth, I turned off atime, and I believe we have similar use cases.
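If anyone is curious what atime actually is, here's a small portable Python sketch that inspects the three POSIX timestamps on a file. (Whether a plain read actually bumps atime depends on the filesystem and mount options, e.g. relatime on Linux or atime=off on a ZFS dataset.)

```python
import os
import tempfile
import time

# Create a scratch file and inspect its three POSIX timestamps.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"some data")
    path = f.name

st = os.stat(path)
print("atime:", time.ctime(st.st_atime))  # last access: the one atime=off stops updating on reads
print("mtime:", time.ctime(st.st_mtime))  # last content modification
print("ctime:", time.ctime(st.st_ctime))  # last metadata change

os.unlink(path)  # clean up
```

With atime=off, ZFS simply stops writing new access-time values on reads, which is what cuts the extra IO; the field still exists, it just goes stale.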
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
The drives should be held firmly in place if you care about them. The problem with that is that the case amplifies the vibrations, so we tend to put rubber or other things between the drives and the case, but it's not a very good solution.

I don't think you can gain much more on the sound level with individual suspension, plus it would be detrimental to the drives' lifetime.
This was very useful information. I will now definitely rework my design completely!

Improvements to my design would be a bigger-area air filter (less air resistance + less frequent cleaning; using a V or W shape is an easy way to get a large area while keeping the filter's height and width the same) and some foam or other sound absorbent, plus some vanes at a 45° angle in the top and bottom air plenums to let air pass but absorb and reflect the sound back. I plan to make these improvements the next time I disassemble the top plenum (I need to add more fans and modify the way I connect them to the MB anyway) ;)
Inspiring. My design was aimed at creating a small footprint for 24 drives, individually suspended and excellently cooled. The filter and noise aspects can definitely come in handy now that I am back to the drawing board, square one. Thnx mate.

To be blunt, I don't believe there are significant "side effects". I mean, you won't know the last time a file was accessed, but you probably don't care. In the meantime, you'll decrease IO and put less strain on your filesystem. The side effects are so light compared to the benefits that a lot of guru-level members on this board, such as jgreco, have advocated that noatime be the default.

For what it's worth, I turned off atime, and I believe we have similar use cases.
Great. I figured it is best to ask before applying 'tweaks'; the potential for unwanted effects to come into play increases when there is no <understanding> but mere <applying>.. hehe
Thnx!

Cheers /
 
Last edited:

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Yep, I highly recommend positive pressure + an air filter, that way you just need to clean the filter from time to time (something like every 6 months or even a year) rather than having dust everywhere in your server (especially in the CPU cooler and the fans) ;)

I use the kind of filter material we use in extractor hoods and fish tank filters as it has a low air resistance and it's very cheap:
[attached image: ouate-perlon-64-cm-x-12-pour-filtre-d-aquarium.jpg]

[attached image: l_rouleau%20de%20filtre%20g4.jpg]
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Thnx @Bidule0hm, last night was back to the drawing board. Got some new ideas... as per usual, things get out of control rather quickly.
What was a "small footprint 24-disk DIY case" turned into a "3-shelf, 24-disk, 2x mATX + network shelf/optional 3rd mATX, all in one" kind of deal. LOL :D
This is the beauty of thinking on paper rather than on live materials. It is far cheaper to end up with a better solution...
Inspiration is much appreciated! (Is there a "show off your FreeNAS box" thread somewhere? I've not been able to 'stumble upon' any.)

Now, I would still like to emphasize the remainder of my questions regarding this build:
CPU: The popular E3-1231 generates a Passmark score of about 9000. The ASRock C2750 chip scores about 3800. Now, the G4400 which I've locked my eyes on generates a score of about 3700, but with higher-clocked, fewer cores (thinking this is the key to Samba performance).
To me this seems like the G4400 would do the trick. Yet it is possible that the 8 cores of the C2750 would compensate in some way for the lack of power compared to the 4 cores (8 including HT) of the E3-1231v3.
I am blind to where this G4400 would be on the scale from "barely making it" to "a solid choice for your use"...?

RAM:
It is a bit of a jungle to find which recommendations apply to present versions via searches.

Something in the back of my mind tells me it is time to get 32 GB, yet I am holding back like a cat slipping down into a bathtub...
Am I well beyond the scope for 16 GB, or is it still "fine, but it would be overly stupid for reasons XYZ"?

Cheers /
 
Last edited:

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Yep, SolidWorks is great for that too (I used it only to get the wood boards' exact dimensions, that's why the 3D capture is so ugly and undetailed... :D) ;)

There was one but one day the crappy forum engine decided it was time to delete it... then a new one was created but it's lost somewhere in the middle of the other threads... I'll try to find it...

Well, in theory 16 GB can be enough (if it's a server for backups for example) but here I think you'll really need the 32 GB. But you can always buy 16 GB and then buy another 16 GB only if you need it, there's no risk for the data, just that the server can be too slow for your liking :)
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Well, in theory 16 GB can be enough (if it's a server for backups for example) but here I think you'll really need the 32 GB. But you can always buy 16 GB and then buy another 16 GB only if you need it, there's no risk for the data, just that the server can be too slow for your liking :)
Thank you. That is comforting.

OP, I just had to say that your post was exceptionally well structured. Kudos

Thanks, appreciate the compliment.
(From lurking on the forums it became fairly obvious that in order to 'receive' you must 'make an effort' first. I tried my best :)

Cheers /
 
Last edited:

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Hello,
I have been stuck on selecting a CPU and amount of RAM to fit the budget. Thankfully, Bidule0hm provided me some assurance that 16 GB is a viable (perhaps not necessarily desirable) option.
At first sight, the component selection seemed pretty obvious: what was required to fit my needs. But then... it turned into something not so obvious. This is an investment after all (to me, a pretty big one).

Part of the problem is that I could not really figure out what I want to do with this server over an extended period of time. Most of my "use case" described in my first post is for "trying", not really "production". So when I've tried the stuff, what then?

There are two routes to approach this matter:
either departing from my current user needs, or selecting components based on what would be the best investment in terms of a home server platform to be in use for at least 6+ years. Notice that those two do not line up nicely when the budget constraint is on.

From the Freenas-current-need perspective:
Should I buy the best fit for my recent needs (CIFS/backup server - focusing on RAM)?
Or one that would fit my desires to explore VM’s further along with media server functionality ( requiring additional CPU power)?

The other route is to explore what would provide the most "future proof" solution.
The budget constraints on motherboard/RAM/CPU dictate some limitations. I've outlined a couple of options, their cost and performance. I hacked together an Excel spreadsheet to display what is going on. I also decided to include CPU capacity in each of the options, for comparison to my current machines.

I started exploring the use of my previous machines. What had been upgraded? What was the 'weak point' at the end of each server's life span?
I quickly found out that my 4 previous servers had a few traits in common:
- Never upgraded any CPU.
- Always upgraded RAM; 2 machines even twice.
- When machines were taken out of production, their CPUs had been the biggest bottleneck.

This gives substantial indication of which hardware options could be considered. A powerful CPU would definitely be the longer-lasting choice, at least given my own experience of use.

Conclusion: The E3-1230v5 + 16 GB RAM is the likely winner. I have not yet ordered anything of the above.
I am eager to hear input on these choices and my reasoning. Please fill in from your experience; it might provide some clues to help me feel good about my investment :)

Cheers /
 
Last edited:

hyperq

Dabbler
Joined
Sep 6, 2015
Messages
10
The Xeon D-1521 and D-1518 support 128 GB of RAM, and run at 45 W and 35 W respectively. That is twice the memory and half the wattage compared to the E3-1230v5. The cost for a Supermicro Xeon-D motherboard with a soldered-on 4-core CPU is less than $500, which is almost the same as the cost for an E3-1230v5 board with CPU. You may want to look into Xeon-D options if you are still uncertain. I heard the Supermicro Xeon D-1521 and D-1518 boards will be available by the end of February. They will also support SR-IOV, which is not supported by the current D-1520 and D-1540. Not sure whether SR-IOV is important to you or not.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
The Xeon D-1521 and D-1518 support 128 GB of RAM, and run at 45 W and 35 W respectively. That is twice the memory and half the wattage compared to the E3-1230v5. The cost for a Supermicro Xeon-D motherboard with a soldered-on 4-core CPU is less than $500, which is almost the same as the cost for an E3-1230v5 board with CPU. You may want to look into Xeon-D options if you are still uncertain. I heard the Supermicro Xeon D-1521 and D-1518 boards will be available by the end of February. They will also support SR-IOV, which is not supported by the current D-1520 and D-1540. Not sure whether SR-IOV is important to you or not.
Thank you for your input. Unfortunately, these are not available from my retailer.
Furthermore, I am rather debating switching towards less CPU power, such as the G4400. But I cannot completely bury my fear of regretting not getting the fast Xeon while in the process of building, even if it is vastly overkill at the moment and potentially never will be justified by my use :(
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Let's put it this way:
What do you plan to do with the server?

Comparing it to older servers is mostly meaningless. Plenty of people use i3s and Pentiums without any regrets.
 

Scareh

Contributor
Joined
Jul 31, 2012
Messages
182
Not anywhere near your use case, but from what I've read/understood over the past 2 years or so, RAM matters more than CPU. From a FreeNAS perspective, only CIFS/Samba benefits from a stronger CPU.
From a plugin standpoint: if you're going to use VMs, Plex to do any transcoding, stuff like that, go for a "beefier" CPU.
Since you don't really have an idea what you're going to do with your server, I'd go for an i3 with 16 GB RAM (upgrade to 32 afterwards if you lack the oomph).

VMs are non-production anyway for you, just to "mess around" with, so if they are a bit slower than optimal, you wouldn't care.
Transcoding a movie in Plex will take what, 10 secs longer, if even that.
Samba/CIFS, who uses that anyway, use FTP :p. No, seriously: for doing large file transfers, use (S)FTP; for browsing stuff or copying a movie or such, use Samba.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Regarding transcoding and the need for more or less CPU power: when Plex has to transcode a stream (e.g. AVI to a Mac client) I see my G3220 heavily utilized, but no hiccups. However, there's never more than one client streaming, nor is there any other significant load on the system during streaming.
 