Hello, FreeNAS community,
A quick background: I'm pretty new to the NAS side of things. For a couple of years I've wanted to build myself a decent media file server. At first I looked at unRAID, later at FlexRAID. Both are one-man operations, which was always one of my biggest concerns. Many people like and recommend them, but I kept an eye on both over the years and they never made me feel secure enough to pull the trigger. My second biggest concern was that I didn't feel good about running a 24-drive server with a single parity drive (which is what both of them do). Both had some positives, but for me the negatives always outweighed them. So all this time I waited and waited and simply ran a Win7 machine with 20 drives in the Norco 4220 case. But I finally got tired of losing drives and the data on them. I never tried FlexRAID, because I don't want snapshot protection and FlexRAID's real-time mode is still pretty much in beta. I tried unRAID's trial with 3 drives a couple of months ago - it went tits up after 3 weeks. I asked for help on their forums, but to no avail - so much for their reliability and ease of use.
I knew about ZFS before, but never considered it seriously, as it seemed so far out of my league (I know nothing about Linux - strictly a Windows guy). But a week ago I decided to read up on it. My journey led me here - to FreeNAS.
I apologize if my questions seem silly or have been answered before. I read the FreeNAS manual from top to bottom, browsed the forums, and googled until my eyes bled, but the more I read about some topics, the more complicated and confusing things got. Sorry for my Engrish too.
Software side:
- Is this true: FreeNAS is not a one-man show the way unRAID/FlexRAID are? FreeNAS is way more robust, advanced, and better maintained, with proper backing from iXsystems. Plus it runs on ZFS, which is the king of file systems atm (from what I managed to understand from the internets). Compared to the other two, FreeNAS could even be used in a business environment?
- I want to build a media server to feed the multiple HTPCs (running XBMC) around the house. I would love to build a server that is as appliance-like as possible: I'd like to spend as much time using it and as little time fixing it as possible. Is FreeNAS for me? Or is it better suited for the tinkerer? (I love to tinker, I'm just afraid I won't have much time.)
- Do all the drives in FreeNAS have to spin all the time, or can they be spun down if the server isn't accessed for a longer time and no scrubs etc. are running? Power-usage-wise, I'd like to be able to save some electricity.
- Crucial: even thinking about losing all the data on all the drives makes my sphincter squeak. The unRAID/FlexRAID approach of not striping data is very comforting in that you only lose the bad drives, not the whole array. In ZFS you lose the whole damn pool! I would love for ZFS to have an option not to stripe data, but alas, that's not possible, AFAIK, right? The question is, can I create/use separate pools/volumes like this:
- instead of creating 1 pool (pool and volume are the same thing, right?) and adding multiple vdevs to it, I create multiple pools, e.g. 2 separate pools of 10 drives each in raidz2
- using separate pools, if I lose 3 drives in the same vdev, I lose only that single vdev/pool - only 10 drives and not all 20
- if drives can be spun down when not in use, I gather it would help to keep rarely used data separate from often used data. I would probably have more than half the drives untouched for months, so why spin them all the time (except for scrubbing, of course)?
- I'm not rich, so I would like to strike a balance between affordable and relatively safe. The data is mostly movies and TV. While I want to protect it and would hate to spend lots of time rebuilding my collection if anything goes horribly wrong, I simply can't justify backing this stuff up. I cannot and will not run 2 servers just for backup, hence:
- according to the manual, my best option is to run the pools with either a single vdev of 10 drives in raidz2, or with 2 vdevs of 5 drives in raidz1. Either way I lose 2 out of every 10 drives to parity.
- do I understand this correctly: a pool with a single vdev of 10 drives in raidz2 would be a little bit safer than the 2x raidz1 layout?
- a pool with 2 vdevs of 5 drives in raidz1 would be faster because ZFS stripes across the vdevs, but a little bit less safe than the single raidz2 vdev?
- speed-wise, all I need is the ability to stream to 3-4 HTPCs simultaneously, so I'm kinda leaning towards the 10-drive raidz2, or am I wrong?
- A little confused: the manual says FreeNAS can import NTFS volumes. But if I want to create a 10-drive raidz2 vdev, the drives will be formatted and I will lose the data? I.e., I need to move the data off the drives beforehand? (This is gonna hurt, I'll probably need to buy more drives.)
- I have various drives, some with 4K sectors and some without. Should I force 4K when creating the raidz2? (The manual seems to say I should.)
- Is the pool/data still available while a scrub is running?
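To put a rough number on the raidz2 vs. 2x raidz1 question, here's a small back-of-envelope sketch (the function name and the layout encoding are my own, not anything from FreeNAS or ZFS): with exactly two simultaneous drive failures, the single 10-drive raidz2 vdev always survives, while the 2x 5-drive raidz1 layout dies whenever both failures land in the same vdev.

```python
from math import comb

def p_survive_two_failures(layout):
    """Probability a pool survives 2 simultaneous random drive failures.

    layout: list of (drives_per_vdev, parity_level) tuples, one per vdev.
    """
    total = sum(n for n, _ in layout)
    pairs = comb(total, 2)  # ways to pick which 2 drives failed
    # The pool dies only if some vdev loses more drives than its parity.
    # With exactly 2 failures, that means both landed in a raidz1 vdev.
    fatal = sum(comb(n, 2) for n, parity in layout if parity < 2)
    return 1 - fatal / pairs

# Single 10-drive raidz2 vdev: any 2 failures are survivable.
print(p_survive_two_failures([(10, 2)]))         # 1.0
# Two 5-drive raidz1 vdevs: 2 failures in the same vdev kill the pool.
print(p_survive_two_failures([(5, 1), (5, 1)]))  # ~0.556
```

Either way you give up the same 2 drives out of 10 to parity; the raidz2 layout just spends them more safely, at the cost of the second vdev's striping performance.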
Hardware side:
A couple of months ago I started buying parts for an ESXi unRAID build, but after trying out the 3-drive trial I was disappointed and dropped it. Here is the list of the parts I have / could use atm:
PSU: CORSAIR TX Series CMPSU-750TX 750W 80 PLUS BRONZE (brand new, unopened)
PSU: SeaSonic X750 Gold 750W 80 PLUS GOLD Full Modular (brand new, unopened)
MB: SUPERMICRO MBD-X9SCM-F-O C204 (brand new, unopened)
HBA: 2x AOC-SASLP-MV8
Case: NORCO RPC-4220
I have other PCs that I could probably use, but I think it would be better to build new.
Anyway, I have ~100 drives (full of data) lying around in a drawer. As far as I understand, ZFS itself does not have a drive limit (from our puny home-user point of view). Now what I would like to ask: what is a reasonable maximum number of drives that I could plug into my FreeNAS box? When I say reasonable, I mean: I'm willing to build from scratch - buy all-new hardware specifically for this server. I can return some of what I have and maybe sell the rest.
- I will not be using the Norco case. I quite liked it, and even wanted to trade up to the 4224, but I decided I'm not gonna do that. The simple matter is, I have no good place in my home to put that thing. Too big and uncomfortable to use at home (for now). I will be building my own custom case/solution with 200mm silent fans and a (hopefully) much more comfortable shape and size. I know it sounds kinda crazy, but I've always wanted to try it. Depending on the answers I get to the next questions, I hope my own creation will help me with expandability.
- I've always wanted to build a server and put as many drives online as possible. The main concern, imho, is the PSU - how big does the PSU need to be? Do I need a crazy-expensive server PSU with less 12V and more 5V for the drives? I've read about some people using 2 PSUs - how? Or is it just too expensive to connect that many drives to a single server without uber-expensive enterprise hardware? Please help, I am so out of my depth here...
- So what's better: try to plug many, many drives into a single server, or just bite the bullet and build 2 servers? I mean, a 2nd server is another bunch of green on a mobo, CPU, RAM, and PSU (though no 2nd case in my situation). And what about the power bill when running 2 boxes instead of one?
- How many drives can I go for? I know I can do 24-26 (as many have done on the unRAID forums). Can I do 32? 48? More? What is the drive limit for a single home-consumer server, the point where the price and/or performance hit becomes too much?
- Could I do 48 drives with 2x IBM M1015s and 2x Intel RES2SV240 expanders? I've read conflicting things: some people say these expanders support only up to 16 drives when you connect them to the M1015 with 2 cables, while others say they run 24 drives off a single cable - confusing... confusing... confusing...
- Or should I just skip the expanders, slap 4x M1015s in there, and be happy with 32 drives?
- Or maybe buy some expensive motherboard with 6 PCIe x8 slots and put in 6 HBAs and 6 expanders and go buy more drives? Please, someone, get me back to planet Earth :)
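For the PSU question, the usual sanity check is a 12V budget, since modern 3.5" drives draw mostly from the 12V rail and the worst moment is simultaneous spin-up. All the constants below are assumptions for illustration (typical drive datasheet ballparks and a guess at a 750W unit's 12V rail), not measurements:

```python
# All per-drive and PSU numbers here are assumptions, not measurements.
SPINUP_A_12V = 2.0   # assumed peak 12V draw per 3.5" drive during spin-up
IDLE_A_12V = 0.45    # assumed steady 12V draw per spinning drive
RAIL_12V_A = 62.0    # assumed 12V rail capacity of a 750W PSU

def max_drives(rail_amps=RAIL_12V_A, per_drive=SPINUP_A_12V, headroom_amps=10.0):
    """Drives that fit on the 12V rail, keeping headroom for CPU/board/fans."""
    return int((rail_amps - headroom_amps) / per_drive)

# Worst case: every drive spins up at once.
print(max_drives())                      # 26
# Steady state after spin-up (or with staggered spin-up smoothing the peak).
print(max_drives(per_drive=IDLE_A_12V))  # 115
```

On these assumptions a single 750W unit hits its spin-up ceiling somewhere in the mid-20s of drives, which lines up with the 24-26 figure people report; staggered spin-up (which many SAS HBAs/backplanes support) or a second PSU dedicated to the drive cage is how bigger builds get past it.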
Honestly, please help me. I really like FreeNAS from what I've read here and around. I'd love to finally build a bitchin server and be done with my gf constantly bugging me: "where's my Dexter, where's my Revenge and whatnot..."
If you are still reading, thank you very much for sticking around with this new hit show "So You Think You Can Build A Server".