First build. Hardware and some FreeNAS questions.


cryodream

Cadet
Joined
Dec 6, 2012
Messages
2
Hello, FreeNAS community,

A quick background: I am pretty new to the NAS side of things. For a couple of years now I have wanted to build myself a decent media file server. At first I looked at unRAID, later on at FlexRAID. Both are one-man operations, which was always one of my biggest concerns. Many people like them and recommend them, but I kept an eye on both over the last few years and they never made me feel secure enough to pull the trigger. My second biggest concern was that I didn't feel good about running a 24-drive server with a single parity drive (which is what both of them do). I always felt that both had some positives, but the negatives always outweighed them. So all this time I waited and waited and simply ran a Win7 machine with 20 drives in a Norco 4220 case. But I finally got tired of losing drives and the data on them. I never tried FlexRAID, because I don't want snapshot protection and FlexRAID's real-time mode is still pretty much in beta. I tried unRAID's trial with 3 drives a couple of months ago - it went tits up after 3 weeks. I asked for help on their forums, but to no avail - so much for their reliability and ease of use.
I knew about ZFS before, but never considered it seriously, as it seemed so far out of my league (I know nothing about Linux - strictly a Windows guy). But a week ago I decided to read up on it. My journey led me here - to FreeNAS.

I apologize if my questions seem silly or have been answered before. I read the FreeNAS manual from top to bottom, browsed the forums, and googled until my eyes bled, but the more I read about some topics, the more complicated and confusing things got. Sorry for my Engrish too.

Software side:
  1. Is this true: FreeNAS is not a one-man show the way unRAID/FlexRAID are. FreeNAS is far more robust, advanced and better maintained, with proper backing from iXsystems. Plus it runs on ZFS, which is the king of file systems at the moment (from what I managed to understand from the internets). Compared to the other two, FreeNAS could even be used in a business environment?
  2. I want to build a media server to supply my multiple HTPCs (running XBMC) around the house. I would love to build a server that is as appliance-like as possible. I would like to spend as much time using it and as little time fixing it as possible. Is FreeNAS for me? Or is it better suited to the "tinkerer"? (I love to tinker, I'm just afraid I won't have much time.)
  3. Do all the drives in FreeNAS have to be spinning all the time, or can they be spun down if the server is not accessed for a longer time and no scrubs etc. are running? I mean, power-usage-wise, I would like to be able to save some electricity.
  4. Crucial: even thinking about losing all the data on all the drives makes my sphincter squeak. The unRAID/FlexRAID approach of not striping data is comforting in that you only lose the bad drives, not the whole array. In ZFS you lose the whole damn pool! I would love for ZFS to have an option not to stripe data, but alas that's not possible, AFAIK, right? The question is, can I create/use separate pools/volumes like this:
    • instead of creating 1 pool (pool and volume are the same thing, right?) and adding multiple vdevs to it, I create multiple pools, e.g. 2 separate pools of 10 drives each in RAID-Z2 (I've sketched what I think this looks like right after this list)
    • using separate pools, if I lose 3 drives in the same vdev, I lose only that single vdev/pool - only those 10 drives and not all 20
    • if drives can be spun down when not in use, I gather it would help to keep rarely used data separate from often used data. I would probably have more than half of the drives not accessed for months, so why spin them all the time (except for scrubbing, of course)?
  5. I'm not rich, so I would like to strike a balance between affordable and relatively safe. The data is mostly movies and TV. While I want to protect it and would hate to spend lots of time rebuilding my collection if anything goes horribly wrong, I simply can't justify backing this stuff up. I cannot and will not run 2 servers just for backup, hence:
    • according to the manual my best options are to run each pool with either a single vdev of 10 drives in RAID-Z2, or with 2 vdevs of 5 drives in RAID-Z1. Either way I lose 2 drives out of 10 to parity.
    • do I understand this correctly: a pool with a single vdev of 10 drives in RAID-Z2 would be relatively a little bit safer than the 2-vdev option below?
    • a pool with 2 vdevs of 5 drives in RAID-Z1 would be faster because ZFS stripes across the vdevs, but relatively a little bit less safe than the single-vdev option above?
    • speed-wise, all I need is the ability to stream to 3-4 HTPCs simultaneously, so I'm kinda leaning towards 10x RAID-Z2, or am I wrong?
  6. A little confused: the manual says FreeNAS can import NTFS volumes. But if I want to create a 10-drive RAID-Z2 vdev, the drives will be formatted and I will lose the data? I mean, I need to move the data off the drives beforehand? (This is gonna hurt; I'll probably need to buy more drives.)
  7. I have various drives, some with 4K sectors and some without. Should I force 4K sectors when creating the RAID-Z2? (The manual seems to say I should.)
  8. Is the pool/data available while scrubbing?
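
To make question 4 concrete, here is roughly what I picture the two-pool layout looking like from the command line (just my sketch after a week of reading; I assume FreeNAS actually does this through its volume manager GUI, and da0-da19 are placeholder device names):

  zpool create tank1 raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9
  zpool create tank2 raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19
  zpool scrub tank1    # (question 8: is the pool still usable while this runs?)

The hope being: if 3 drives in tank1's vdev die, only tank1 is gone and tank2 keeps its data.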

Hardware side:
A couple of months ago I started buying stuff for an ESXi unRAID build, but after trying out the 3-drive trial I was disappointed and dropped it. Here is the list of the parts I have/could use at the moment:
PSU: CORSAIR TX Series CMPSU-750TX 750W 80 PLUS BRONZE (brand new, unopened)
PSU: SeaSonic X750 Gold 750W 80 PLUS GOLD Full Modular (brand new, unopened)
MB: SUPERMICRO MBD-X9SCM-F-O C204 (brand new, unopened)
HBA: 2x AOC-SASLP-MV8
Case: NORCO RPC-4220
I have other PCs that I could probably use, but I think it would be better to build new.

Anyway, I have ~100 drives lying around in drawers (full of data). As far as I understand, ZFS itself does not have a drive limit (from our puny home-user point of view, anyway). Now what I would like to ask: what is a reasonable maximum number of drives that I could plug into my FreeNAS box? When I say reasonable, I mean: I'm willing to build from scratch - buy all new hardware specifically for this server. I can return some of what I have and maybe sell the rest.


  1. I will not be using the Norco case. I quite liked it, and even wanted to trade up to the 4224, but I decided I'm not gonna do that. The simple matter is, I have no good place in my home to put that thing. Too big and uncomfortable to use at home (for now). I will be building my own custom case/solution with 200mm silent fans and a (hopefully) more convenient shape and size. I know it sounds kinda crazy, but I always wanted to try this out. Depending on what answers I get to the next questions, I hope my own creation will help me with expandability.
  2. I always wanted to build a server and put as many drives online as possible. The main concern, imho, is the PSU - how big does the PSU need to be? Do I need a crazy expensive server PSU with less 12V and more 5V capacity for the drives? I read about some people using 2 PSUs - how? Or is it just too expensive to connect that many drives to a single server without uber expensive enterprise hardware? (I've attempted some rough spin-up math right after this list - please check it.) Please help, I am so out of my depth here...
  3. So what's better: try to plug many, many drives into a single server, or just bite the lip and build 2 servers? I mean, a 2nd server is another bunch of green on a motherboard, CPU, RAM and PSU (though no 2nd case in my situation). And what about the power bill when running 2 boxes instead of one?
  4. How many drives can I go for? I know I can do 24-26 (as many have done on the unRAID forums). Can I do 32? 48? More? What is the limit of drives in a single server for a home user, where the price and/or performance hit becomes too much?
  5. Could I do 48 drives with 2x IBM M1015s and 2x Intel RES2SV240 expanders? I read conflicting things: some people say these expanders support only up to 16 drives when you connect them to an M1015 with 2 cables, others say they run 24 drives off a single cable - confusing... confusing... confusing...
  6. Or should I just skip expanders, slap 4x M1015s in there and be happy with 32 drives?
  7. Or maybe buy some expensive motherboard with 6 PCIe x8 slots, put in 6 HBAs and 6 expanders and go buy more drives? Please, someone, get me back to planet Earth :)
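
My rough attempt at the PSU math from question 2 (please correct me if these assumptions are off): a typical 3.5" drive seems to pull around 1.5-2 A on the 12V rail while spinning up and something like 5-10 W once it's spinning. So 24 drives starting at once could momentarily want on the order of 40-50 A of 12V (roughly 500 W) before the CPU and fans even come into it, while the steady-state draw is much lower. I assume that's why people talk about staggered spin-up and dual PSUs for the really big builds.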

Honestly, please help me. I really like FreeNAS from what I've read here and around. I'd love to finally build a bitchin server and be done with my gf constantly bugging me: "where's my Dexter, where's my Revenge and whatnot..."

If you are still reading, thank you very much for sticking around with this new hit show "So You Think You Can Build A Server".
 

Stephens

Patron
Joined
Jun 19, 2012
Messages
496
You've done an admirable amount of homework.

Software Side.

1. a. http://www.freenas.org/about/team-members
b. I can't tell you if it's better maintained. I can say it's maintained well.
c. Yes, it supports ZFS. It's important to realize ZFS is just the file system. FreeNAS is actually a package that sits on top of FreeBSD for the purpose of making a NAS appliance.
d. Yes, FreeNAS can be used in a business environment.

2. I don't see a single reason you'll have to tinker with it once it's set up. Many people here who tinker do so because they like to, not because they have to.
3. Drives can be spun down. This has been covered many times here. You MIGHT have some trouble getting the SMART monitoring running in a way that allows the drives to spin down.
4. a. Yes, you can create multiple pools.
b. Correct
c. Also possible.

But really, make sure you look into (for instance) RAID-Z2. You really do decrease the likelihood of losing everything. And RAID isn't backup. Back up your most critical stuff.

5. a. OK.
b. yes.
c. This is that tinkerer thing. Does it really matter? Your disk I/O will already be faster than your gigabit connection. And once you load the data, your streaming demands will be much lower.
d. I see nothing wrong with 10x RAID-Z2 other than it's a large vdev. I keep mine at 6 drives. There are, of course, pluses and minuses.

6. You cannot use drives with data on them to create your pool. You will lose the data on the drives you use. This is another benefit of smaller pools. You can create a pool of, say, 6 drives, copy over data, blow away the old drives and use them to create your next 6-drive pool. But again, you have to look at your situation and see what makes the most sense.

7. Force 4k on all drives.

8. Yes.
 

cryodream

Cadet
Joined
Dec 6, 2012
Messages
2
@Stephens, thank you very much for the answers.

I don't see a single reason you'll have to tinker with it once it's set up. Many people here who tinker do so because they like to, not because they have to.
I'm sorry, my bad; what I meant to say/ask: tinkering not as in tweaking and messing around with various settings, but as in fixing constant problems. I meant I would like an "appliance" experience as much as possible - I set it up and forget it, and it just works, like a fridge or a microwave. Of course I'll be monitoring whether it's running OK, whether some disks are failing, whether FreeNAS needs to be updated, etc. But not having various problems all the time and needing to scour the forums daily to fix this and that. Well, you know what I mean.

Drives can be spun down. This has been covered many times here. You MIGHT have some trouble getting the SMART monitoring running in a way that allows the drives to spin down.
Hmm, that's sad to hear. How much more do I risk if I go with spin-down and less SMART monitoring versus full SMART and all disks spinning all the time? In my case, spinning 50-60 drives 24/7 just for that feels too expensive on the power bill. One can never forget the scary bouncers from Greenpeace, too...
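
From what I've pieced together so far, the underlying knobs seem to be roughly these (just my notes, not tested; ada0 and the timings are placeholders):

  camcontrol standby ada0 -t 1800    # ask the drive to spin down after ~30 minutes idle
  # and in smartd.conf, skip SMART polling while a drive is already in standby, so polling doesn't wake it:
  /dev/ada0 -a -n standby,q

Is that more or less the trade-off you meant - either the SMART polls keep waking the drives, or I tell smartd to skip sleeping drives and accept less frequent monitoring?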

But really, make sure you look into (for instance) RAID-Z2. You really do decrease the likelihood of losing everything. And RAID isn't backup. Back up your most critical stuff.
I use Crashplan for critical stuff. Backing up the media - I'm too poor to do that, no way. And I am gonna use RAID-Z2 single vdev pools.

I see nothing wrong with 10x RAID-Z2 other than it's a large vdev. I keep mine at 6 drives. There are, of course, pluses and minuses.
For me, having no experience with RAID at all, the concept of losing space is still foreign. I'm gonna run 10-drive pools in RAID-Z2. That is 1/5 of the space lost to parity (actually more, judging by all the complaints about lost space I've seen here and on IRC). Losing more than 1/3 of the space in 6-drive vdevs would kill me... if not fast, then slowly and painfully... losing sleep every night thinking of the wasted drives... sheesh...

You cannot use drives with data on them to create your pool. You will lose the data on the drives you use. This is another benefit of smaller pools. You can create a pool of, say, 6 drives, copy over data, blow away the old drives and use them to create your next 6-drive pool. But again, you have to look at your situation and see what makes the most sense.
Yep, that's the plan. The problem is - for it to work, I need to start with 10 empty drives and they must be the largest drives I've got. All I can manage atm is only 7 drives, all others are full of data. I'll need to do some serious spring cleaning.
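
If I've understood the staged approach right, each round is basically: build a pool from whatever empty drives I have, then something like this (paths are made up, and I'm assuming the old NTFS disks can be mounted read-only somewhere under /mnt):

  rsync -a /mnt/old_ntfs_disk/ /mnt/tank1/movies/    # copy everything off one old NTFS disk at a time
  # verify the copy, then wipe the emptied disks and use them to build the next pool

Does that match what you had in mind?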


Please, help me with the hardware questions:
OK, I've finished a 48-hour marathon of googling and reading various forums on the hardware I need. I have a setup in mind: I would like to buy 4 of the SGI Rackable SE3016 SATA/SAS expander 16-drive-bay boxes and plug them into a single server. Now the questions:
  1. From all the research I've done, I'm thinking of buying this HBA: LSI LSI00276 PCI-Express 2.0 x8 SATA / SAS 9201-16e Host Bus Adapter. My reasoning:
    • I do not want hardware RAID and probably never will.
    • LSI is a good choice of maker (from what I could gather from the internets over the years): good quality, good compatibility, etc. Or am I wrong?
    • This card has 4x SFF-8088 mini-SAS connectors. That's exactly what I need to connect these 4 boxes, right?
    • This card enables me to connect all the boxes (all 64 drives!) on a single PCI-Express 2.0 x8 port on the motherboard. Using a single port saves other ports for whatever else - expansion (second HBA) or NICs or somesuch.
  2. Or is the fact that I'm connecting so many drives to a single PCI-Express 2.0 x8 port actually a bad thing? Will it hurt performance too much, bottleneck, or create some other problems? Would I be better off with 2 cards or more? Bear in mind, I'm going to be using this server to stream HD media to multiple HTPCs on a 1Gbit network, but that's about it. I would love to saturate that Gbit connection, though. (I really lack basic experience in these things - I've put my rough bandwidth math at the end of this list.)
  3. I have read in numerous places that ZFS and SAS expanders are a bad idea. Is this really such a big problem, and still not fixed, that using these boxes with this card would stop me from using ZFS? Like I said, I want to try out FreeNAS.
  4. If the answer to the previous question is that I can't use ZFS with this card, what other card/cards/combination of hardware could I buy to solve the problem and still use these 4 boxes plugged into a single server?
  5. Maybe for whatever other reason you think I'm better off with a different card or cards - price/quality, anything - please let me know. And why.
  6. If this is the card for me: provantage.com has it $100 cheaper than newegg.com. I've never bought from them - how is Provantage? It is quite a bit better deal, though.
  7. Methinks I'll be needing 4 of these cables from monoprice: 2m 28AWG External Mini SAS 26pin (SFF-8088) Male to Mini SAS 26pin (SFF-8088) Male Cable - Black. These should be long enough. Are these the right type of cables?
  8. I've read people here talking about daisy-chaining these boxes. Does that mean I could connect all 4 boxes to a single SFF-8088 mini-SAS port on an HBA?
  9. If the answer to the previous question is yes - that I could plug all 4 boxes into a single SAS port - I can see the upside: I could probably buy a cheaper card. But what would be the disadvantages of doing so, if any?
  10. I'm a Windows guy and have pretty much never tried anything else. I am very anxious to try out FreeNAS or any other flavor of ZFS (ZFS is what I'm after here). Although, I would very much like to be able to get back to Windows if my endeavors in "cool land" fail miserably (a possibility). Will this setup be Windows-proof?
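
My back-of-the-envelope math for question 2, in case it helps (please correct any wrong assumptions): PCIe 2.0 carries roughly 500 MB/s per lane after encoding overhead, so an x8 slot is about 4 GB/s. Gigabit Ethernet tops out around 125 MB/s, and a high-bitrate HD stream is maybe 5-10 MB/s. Even with all 64 drives behind that single x8 slot, each drive would still get ~60 MB/s if they were all read at once, and for 3-4 streams the gigabit NIC looks like the bottleneck long before the HBA or the slot does.
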
Thanks in advance.
 