Setting up my first FreeNAS


Heide264

Cadet
Joined
May 2, 2018
Messages
8
Good Afternoon,

I recently took down a MythTV server and frontend and pieced together what I thought would work well for a FreeNAS setup. In the end, I'd like to have a nice robust NAS to store a few file volumes:
-One that would contain any small(ish) important files that will go to a cloud service for offsite backup (500GB is plenty)
-One that would contain lossless & compressed music files - and auto-sync the compressed folder to my phone's local storage (maybe 2TB total?)
-One large volume to contain lossless Blu-ray & 4K disc rips

The server would have to host Plex server to stream to a couple other devices. No need to have 50 transcodes going though - it's just me and my fiance that would be using it.

So I have some 'guts' left over from my server/frontend machine that were a bit overbuilt that I can use. I'll toss some stars beside what I think I'll use in the NAS:
Case: *Fractal Design R5 (very happy with it... my windows desktop uses the same case. I'll toss in another case fan in front of the HDD 'tower')
CPUs: i3-3220 & *i3-4130T (1150 socket on that one!)
Memory: 2x8GB Crucial Ballistix DDR3 1600 UDIMMs (maybe even 4 of 'em!). I know they aren't ECC, but they can get me up and running for now, I think
HDDs: *3x3TB, 2x1TB, and *6x5TB (I'll get to the issue in a minute)
SSDs: *256GB Evo something or other (planning on using it for a jail)
Mobos: two mini-ITX boards - I'm guessing neither is of use (Gigabyte GA-H61M-HD2 is one of 'em)
Boot: 2x16GB mirrored SanDisk Ultra Fit flash drives

So I realized that my mobo only has 4 SATA ports... Being the reckless/stupid person I am, I went on amazon and picked up a whopping 10 port SATA expander (Ableconn PEX10-SAT PCI Express Host Adapter Card). After doing some reading, it seems that it was a dumb decision for $87, and I may return it. I am thinking I'll pick up a Supermicro X10SL7-F-O Micro ATX board instead. It will give me an additional two RAM slots and that nice SAS controller built in.

I'm not familiar with server hardware, to be honest. I noticed my i3-4130T was not on the recommended hardware list, but the i3-4300 was, for 'medium duty'. I did see that Intel's ARK mentions my 4130T supports ECC.

Any input?

Thank you!
 

garm

Wizard
Joined
Aug 19, 2017
Messages
1,556
Just some quick thoughts, don’t really have the time for a thorough rant.

The CPU has barely the capacity for two transcoding streams, but since you already have them, use the 4130T. It will make the switch to a decent Xeon easier if you feel the need.

Stick with ECC RAM, no point in spending money on redundancy if you allow the CPU to write garbage to the pool.

There are reasons for running different pools, but content is rarely one of them. Now you have a mixed bag of drives and you need to think carefully about what you want. The most common SOHO recommendation here on the forum is to run 6-8-wide RAIDZ2 vdevs. Personally, I have a mirror vdev pool in my home server at the moment because I also reused parts from lots of older projects when I built it. So I paired up the drives according to size and threw them all into the same pool.

With that many drives you should look at running an LSI SAS HBA and an expander. I think second hand off eBay is just fine, just make sure you use it while burning in the drives to weed out “DOA”-ish cards.

There is no explicit need for a SSD pool for jails, but I run a mirror pair of SSDs for iocage, temp folders and databases. The only reason I do this is to spare writes on the main storage pool. I try to limit writes to the spinners.

Some numbers.
Putting all the disks in mirror pairs (which requires buying a fourth 3 TB drive) would give you 16 TB and change of usable storage.

3x3TB RAIDZ1 is 4.7 TB (and not recommended)
2x1 TB mirror is 0.8 TB
6x5 TB RAIDZ2 is 14 TB

As you can see, splitting your drives up into multiple pools doesn't really give you that much extra. And you run the risk of filling up one pool while another is still empty.

Now maybe you want low redundancy on some stuff and high on other stuff; sure, that would require several pools, but for SOHO with offsite backup I really see no need to split system resources across multiple pools if the drive selection permits it.
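
If you want to sanity-check numbers like these yourself, here is a back-of-envelope sketch (plain Python, purely illustrative; it only accounts for parity disks, the TB-vs-TiB difference, and the usual 20%-free rule, so it won't match a proper ZFS calculator exactly):

Code:
# Back-of-envelope ZFS pool capacity estimate -- illustrative only.
# Ignores ZFS metadata, RAIDZ padding and slop space.
TB = 10**12   # "marketing" terabyte, as printed on the drive label
TIB = 2**40   # tebibyte, what most tools actually report
FILL = 0.8    # keep ~20% free, per the usual ZFS recommendation

def vdev_usable_tib(drive_tb, n_drives, parity):
    """Usable space of one vdev: data disks only, 80% fill, reported in TiB."""
    return (n_drives - parity) * drive_tb * TB * FILL / TIB

layouts = {
    "3x3TB RAIDZ1": vdev_usable_tib(3, 3, 1),
    "2x1TB mirror": vdev_usable_tib(1, 2, 1),
    "6x5TB RAIDZ2": vdev_usable_tib(5, 6, 2),
    # everything in mirror pairs: 2x 3TB pairs, 1x 1TB pair, 3x 5TB pairs
    "all mirrors":  vdev_usable_tib(3, 4, 2)
                    + vdev_usable_tib(1, 2, 1)
                    + vdev_usable_tib(5, 6, 3),
}

for name, tib in layouts.items():
    print(f"{name:>13}: ~{tib:.1f} TiB usable")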
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I went on amazon and picked up a whopping 10 port SATA expander (Ableconn PEX10-SAT PCI Express Host Adapter Card). After doing some reading, it seems that it was a dumb decision for $87, and I may return it
Yes. Get your money back if you can.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I wanted to read what @garm said before I commented. He has some good input.
Any input?
My suggestion would be to put the 3TB drives and the 5TB drives into a single RAIDz2 vdev of 8 drives, despite the difference in size.
This means that the 5TB drives would be treated as 3TB drives, but it would give you a usable capacity of 11.8TB immediately, discounting the 20% free-space you need to maintain. When you are able to replace the 3TB drives with something larger than a 5TB drive, the pool would auto expand and treat all the drives as 5TB drives giving you 19.8TB of capacity.
You can subdivide the capacity in the FreeNAS "Volume Manager" by creating datasets.
This is a commonly used method of growing capacity over time.
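To make the mixed-size behavior concrete, here is a small sketch (plain Python, purely illustrative, not exact ZFS accounting): a RAIDZ2 vdev counts every member at the size of its smallest drive, and the extra space only becomes available once the last small drive has been replaced and the pool expands.

Code:
# Rough sketch of how a mixed-size RAIDZ2 vdev is sized -- illustrative only.
# Metadata, padding and slop space are ignored.
TB, TIB, FILL = 10**12, 2**40, 0.8   # label TB, tebibyte, 20%-free rule

def raidz2_usable_tib(drive_sizes_tb):
    """A RAIDZ2 vdev has (N - 2) data disks, each counted at the SMALLEST member size."""
    n = len(drive_sizes_tb)
    return (n - 2) * min(drive_sizes_tb) * TB * FILL / TIB

today    = [3, 3, 3, 5, 5, 5, 5, 5]   # 8 wide, mixed 3TB/5TB: every drive counts as 3TB
upgraded = [5, 5, 5, 5, 5, 5, 5, 5]   # after the last 3TB drive is swapped for 5TB or bigger

print(f"today:    ~{raidz2_usable_tib(today):.1f} TiB")     # limited by the 3TB drives
print(f"upgraded: ~{raidz2_usable_tib(upgraded):.1f} TiB")  # grows only after ALL are replaced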

If you have not yet looked at these resources, you might want to, so you can learn the terms:

Slideshow explaining VDev, zpool, ZIL and L2ARC
https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/

Terminology and Abbreviations Primer
https://forums.freenas.org/index.php?threads/terminology-and-abbreviations-primer.28174/
 

Heide264

Cadet
Joined
May 2, 2018
Messages
8
Just some quick thoughts, don’t really have the time for a thorough rant.

The CPU has barely the capacity for two transcoding streams, but since you already have them, use the 4130T. It will make the switch to a decent Xeon easier if you feel the need.

Okay. I'm okay with two transcoding streams, at the moment. I'll give it a go and swap in a xeon later, if needed.

Stick with ECC RAM, no point in spending money on redundancy if you allow the CPU to write garbage to the pool.

I'll probably pick up 32GB of ECC RAM in the near future, but I figure I can at least use the 16GB of non-ECC I have to get up and running. I understand the ECC ordeal is a hot topic, and I agree I should use it, but spreading out the initial cost would be nice (my wedding is May 19th - they aren't cheap, apparently)

There are reasons for running different pools, but content is rarely one of them. Now you have a mixed bag of drives and you need to think carefully about what you want. The most common SOHO recommendation here on the forum is to run 6-8-wide RAIDZ2 vdevs. Personally, I have a mirror vdev pool in my home server at the moment because I also reused parts from lots of older projects when I built it. So I paired up the drives according to size and threw them all into the same pool.

With that many drives you should look at running an LSI SAS HBA and an expander. I think second hand off eBay is just fine, just make sure you use it while burning in the drives to weed out “DOA”-ish cards.

So the mobo that I listed above contains an SAS controller built in, to my knowledge. It would allow up to 12 drives directly into the mobo, if I understand correctly.

There is no explicit need for a SSD pool for jails, but I run a mirror pair of SSDs for iocage, temp folders and databases. The only reason I do this is to spare writes on the main storage pool. I try to limit writes to the spinners.

I just have it on hand, and it's not large, so I figured it would be its own pool. I may actually have another of the same size I can drop in there for mirroring if there is a use for it. I wasn't planning on diving into anything overly complex on this.

Some numbers.
Putting all the disks in mirror pairs (which requires buying a fourth 3 TB drive) would give you 16 TB and change of usable storage.

3x3TB RAIDZ1 is 4.7 TB (and not recommended)
2x1 TB mirror is 0.8 TB
6x5 TB RAIDZ2 is 14 TB

As you can see, splitting your drives up into multiple pools doesn't really give you that much extra. And you run the risk of filling up one pool while another is still empty.

Now maybe you want low redundancy on some stuff and high on other stuff; sure, that would require several pools, but for SOHO with offsite backup I really see no need to split system resources across multiple pools if the drive selection permits it.

I was planning on doing three pools, I guess:
1: the SSD for the Plex jail
2: 3x3TB in a RAIDZ1 (one-disk redundancy) for music
3: 6x5TB in a RAIDZ2 (two-disk redundancy) for video

I'll have physical media backup of most of the media (a few .mp3s - perhaps not - but they aren't priceless, to say the least), so I'm not overly concerned about offsite backup on those for now. If I wanted to create a small portion for offsite backup - even a directory, that would be ideal. Can I just create a small vdev for it? I have no experience with vdevs and was originally going to just do one per pool.

I really don't have that massive of a library - all of my stuff was purchased at one time or another, so as of now, I think 10TB is way overkill, haha! I just figured I'd build it out due to the 'fun' involved in enlarging a ZFS pool.

Thanks for the input! I'll definitely do some reading and keep this stuff in my mind.
 

Heide264

Cadet
Joined
May 2, 2018
Messages
8
I wanted to read what @garm said before I commented. He has some good input.

My suggestion would be to put the 3TB drives and the 5TB drives into a single RAIDz2 vdev of 8 drives, despite the difference in size.
This means that the 5TB drives would be treated as 3TB drives, but it would give you a usable capacity of 11.8TB immediately, discounting the 20% free-space you need to maintain. When you are able to replace the 3TB drives with something larger than a 5TB drive, the pool would auto expand and treat all the drives as 5TB drives giving you 19.8TB of capacity.
You can subdivide the capacity in the FreeNAS "Volume Manager" by creating datasets.
This is a commonly used method of growing capacity over time.

That's a solid idea. One of the 3TB drives is still new in box (and, long story short, the reason this FreeNAS build is in progress)... I think I'm still in the return window for that. I'll send it back with the 10 port paperweight (SATA expander) and grab a 5TB in its place - even though it won't help me until I pick up two more 5TBs. 11TB is definitely enough to get me started with my media conversion!

If you have not yet looked at these resources, you might want to, so you can learn the terms:

Slideshow explaining VDev, zpool, ZIL and L2ARC
https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/

Terminology and Abbreviations Primer
https://forums.freenas.org/index.php?threads/terminology-and-abbreviations-primer.28174/

I'll check out the reading. I have been doing a good bit of reading, but my dealings with ZFS have been very limited (and not overly pleasant with ubuntu server years ago)... So I'm still coming up to speed, and it's hard to grasp some of the concepts from a practical standpoint without getting my hands a bit messy. Thanks for the links.


So would you agree that the X10 board with the SAS adapter is a good place to start? It would be a bit easier financially (the sticker price is lower than equivalent 1151 boards by a good chunk, and I have a CPU to use) - and I can always toss in a heavier CPU later.

Will my 16GB 'devil' RAM suffice to get everything up and moving for now?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I'll check out the reading. I have been doing a good bit of reading, but my dealings with ZFS have been very limited (and not overly pleasant with ubuntu server years ago)... So I'm still coming up to speed, and it's hard to grasp some of the concepts from a practical standpoint without getting my hands a bit messy.
One of the servers I manage at work is a Red Hat server that uses ZFS on Linux, and it is much more of a pain due to poor integration with the base OS. Many items can be managed purely with the GUI in FreeNAS, and it has some nice graphs to help you see where your resources are going. There are also many command line tools, like Midnight Commander, that make certain management tasks easier. There are a number of links in my signature under the 'Useful Links' button, including some scripts that will automate monitoring.
So would you agree that the X10 board with the SAS adapter is a good place to start?
Yes. Personally, I like a board with more expansion slots, but that is a good board to start with since you have a CPU that will work in it.
Will my 16GB 'devil' RAM suffice to get everything up and moving for now?
LOL. Yes. The 'devil' RAM will work for now. It is best to have ECC though to help ensure system stability and accuracy over time, but that can come later.
 

Heide264

Cadet
Joined
May 2, 2018
Messages
8
Thanks. It's comforting to know that ZFS is a bit easier with FreeNAS. I'm an EE, not a CompSci guy, for some context. I normally prefer command prompts to (poorly designed) GUIs, but that being said, when dealing with file systems, it's a bit overwhelming for me to troubleshoot.

I'll pull the trigger on that mobo. The paperweight was nearly $90, so that makes the mobo pretty easy to swallow. I'll hold off on the CPU upgrade until I know I need it. I'll put some emphasis on getting 32GB of RAM with some holy water sprinkled on it (ECC) after the wedding. I did see the thread that deals with X10 boards and the 'RAM compatibility' list.

That presentation you linked is pretty awesome. I'm working my way through it with a cigar and a bourbon on the back porch... instead of tending to the lawn on one of our few nice days up here in Pittsburgh.

Thanks again for the help, Garm & Chris.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I'll put some emphasis on getting 32GB of RAM with some holy water sprinkled on it (ECC) after the wedding.
ECC memory is not much more expensive than regular memory. I did a price comparison for someone around a month ago and the difference was only about $8. You could likely sell the memory you have on eBay for enough to turn around and buy good used ECC memory. You can get some really good deals on surplus datacenter gear. That is where I get my hardware.

Happy wedding.
 

pschatz100

Guru
Joined
Mar 30, 2014
Messages
1,184
Okay. I'm okay with two transcoding streams, at the moment. I'll give it a go and swap in a xeon later, if needed.
Be mindful that the need for Plex to transcode streams will be dependent upon the playback devices you use, and the formats you use for saving the content. A Core i3 will probably support one stream of HD content if it is necessary to transcode. DVD quality video and music will not be an issue. There have been quite a few posts written about this and it might be worth your while to look some of them up.

The Core i3 and the Supermicro motherboard will be a great way to get your system going. Just be mindful that you will need to upgrade the CPU if serving video via Plex is a priority. That is exactly what I did: I ran a Core i3 for two years, then upgraded to a Xeon. Very happy with my setup.
 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
For context about streaming capacity, I run an i3, and transcoding from H.264 @ 1080p down to 720p on default Plex transcode settings doesn't hit the chip very hard at all. Now, it depends on the source as well, but I have zero issues supporting a couple of simultaneous streams. Also, 16GB is almost certainly enough; ECC is preferable, but as you said, that can be a future plan. Still, with RAM prices how they are right now, 16GB of ECC will be fine.

I run a couple of jails off my RAIDZ2 pool; no need to have them on SSD. Not that you couldn't, but that's another way to save a couple of bucks.


 

pschatz100

Guru
Joined
Mar 30, 2014
Messages
1,184
For context about streaming capacity, I run an i3, and transcoding from H.264 @ 1080p down to 720p on default Plex transcode settings doesn't hit the chip very hard at all. Now, it depends on the source as well, but I have zero issues supporting a couple of simultaneous streams. Also, 16GB is almost certainly enough; ECC is preferable, but as you said, that can be a future plan. Still, with RAM prices how they are right now, 16GB of ECC will be fine.

I run a couple of jails off my RAIDZ2 pool; no need to have them on SSD. Not that you couldn't, but that's another way to save a couple of bucks.


I would agree. 1080P h.264 video is DVD quality - shouldn't be an issue. H.265 compression or 4K video will be more compute intensive.
 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
I would agree. 1080P h.264 video is DVD quality - shouldn't be an issue. H.265 compression or 4K video will be more compute intensive.
Yes. My 1080p H.264 is from a Blu-ray source, but it's transcoded down on import to reduce file size and increase streamability later on. So the heavy lifting is done once. For me, this has worked very well. This way I also typically don't even need to transcode to watch away from home, as my internet upload speed can normally support a non-transcoded stream.


 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
I'll probably pick up 32Gb of ECC RAM in the near future, but I figure I can at least use the 16Gb of non-ECC I have to get up and running. I understand the ECC ordeal is a hot topic, and I agree I should use it, but spreading out the initial cost would be nice (my wedding is May 19th - they aren't cheap, apparently)
Non-ECC is fine for non-critical data IF YOU TEST! As already advised, test all your hardware before putting it into "production".

/me ducks...
 

Heide264

Cadet
Joined
May 2, 2018
Messages
8
Hey guys - I can't find a 'multi-quote' option on here, so I'll have to generalize all your replies. I didn't realize people were still replying to this thread - my apologies for not getting back sooner.

Anyhow - the X10 mobo doesn't seem to like the 4x8GB of Crucial Ballistix that I tossed in there, so that plan is out. After reading, it seems like pulling out 2x8GB sticks may have at least gotten me up and going, but oh well. I was originally going to only use 2x8GB, but due to a stupid mistake on my part, I ended up having to gut a different PC which also had 16GB of RAM... Amazing what happens when you forget which mobo/CPU is in a PC. That being said, the 4x8GB sticks won't go to waste - they'll be returned in pairs to the 'front end' machines they came from. I'll pick up a Xeon for the NAS eventually and move this i3 back into the one it came from... so I'll be back to two functional PCs (one for a Windows desktop, one for the living room).

I ended up just buying 2x8GB sticks of new Samsung ECC memory on Amazon (the part number was on the X10 RAM support page). It was a bit pricier than the cheapest I could find, but Prime-ish shipping makes returns easy if anything doesn't check out. I'll get everything up and going and check to see if the system needs any more after that.

Random/obvious question, but the 'RAM allocation' table of the X10 manual is pretty confusing (to me). I am assuming that if I have two black memory slots and two blue memory slots (of which I know which is the 'first' one of each color)... you should put one stick in the first blue slot and one stick in the first black slot? The table almost makes it seem as if both sticks should go in the same color before moving over to the other color. I figured I'd just check before finding out the hard way - I'm not too familiar with server boards, and there were a few oddities I found when building it.

Thanks again for the help everybody!
 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
For normal mobos, I am pretty sure you populate the same-color slots first. That being said, I don't think that is a hard-and-fast standard.

What does the manual say?

Usually it's channels A1, B1, A2, B2, for instance. You would populate A1 and A2, and then populate the B's.
 

Heide264

Cadet
Joined
May 2, 2018
Messages
8
For normal mobos, I am pretty sure you populate the same-color slots first. That being said, I don't think that is a hard-and-fast standard.

What does the manual say?

Usually it's channels A1, B1, A2, B2, for instance. You would populate A1 and A2, and then populate the B's.

Here are some excerpts from the manual.

I can't remember the last time I populated a 4-slot mobo with 2 DIMMs, actually. After reading the channel/slot labels below, I do think you are correct about using the same color for the first two DIMMs. The table confused the snot out of me, as it said to populate the DIMMs accordingly... but I can't make much sense of it.

So you said A1 and then A2 in my case, but wouldn't that leave all my RAM in a single channel? I'm thinking that I should use both black DIMM slots (A1/B1) in the image below.

...This is what happens when you think too hard, ha

[Attached image: RAMSlots2.JPG]
[Attached image: RAMTable.JPG]
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
I would think you want both channels in use: A1 and B1, or A2 and B2. Seems like every motherboard is a little bit different. o_O
 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
Yeah, go A1/B1. Whatever the verbiage is, you pretty much always populate the same physical location in each bank. So, you have 4 DIMM slots in two banks: populate the first slot of the first bank and the first slot of the second bank. In your case, A1/B1.

It’s possible I miss remembered what the verbiage usually is for the consumer boards I typically use, but either way, I believe A1/B1 to be correct.

All of this aside... I run my NAS in single-channel, since I opted to use the 'free' 4GB that my chassis came with for a total of 20GB, and it seems 100% fine. RAM speed usually isn't the issue with FreeNAS, so even if you had it wrong, I bet you would literally never notice, lol.


 