Would FreeNAS be the best solution for me?


ThePhantom0114

Dabbler
Joined
Oct 3, 2013
Messages
12
Hi guys, first of all, I want to apologize... I'm sure you get tired of posts like this. I'm in a bit of a time crunch, and there seems to be a LOT to read up on with FreeNAS and ZFS, so I'm hoping for some quick advice.

I'll explain my situation and then ask for advice to see what everyone thinks. I recently had an Ubuntu home server that I built myself. I used it for VMs, media storage and streaming, web hosting, etc. It was used heavily, and the CPU idled at about 40% usage with nothing going on. I had three 3 TB drives in software RAID 5 on the system, and all was well. A week or two ago, I decided I wanted to upgrade it (I was running out of space and needed a bit more power), so I swapped out the motherboard to allow for future upgrades (I was out of SATA ports on the old board). Well... this blew up in my face. I still can't really explain what happened, but the nearly full RAID 5 died, taking all its data with it.

This wasn't critical data; it was mostly media, some VMs, stuff like that, all on the storage array. The OS and my critical data were still good. Even though it wasn't critical, I REALLY don't want that to happen again; I lost a lot of stuff I really wanted to keep. I decided it would be a good time for an upgrade and, hopefully, a better solution. I bought a used Dell PE 2950 to replace the server, now running ESXi. The server can't handle the drives I have because they are 3 TB; its controller can only see 2 TB of each. So my thought was to use the server's space for OS datastores and such, and build a NAS box with my old server hardware, attached via iSCSI to the ESXi host for media storage. Looking at my options, I am very interested in FreeNAS, but until a few days ago I had never even heard of ZFS.

I don't need super performance; this will just be storage, though the faster the better, since it will hold a lot of big files. I bought a fourth 3 TB drive as well and was originally planning a RAID 5 to give me 9 TB of space, but now of course that would be a RAIDZ1 with 9 TB of space. I'm fairly sure I won't need to increase that anytime in the near future, since I am basically starting from scratch, so I think I am OK with the limitations on expanding a RAIDZ. I was thinking about eventually buying one more drive to make a Z2, but I can't afford that now, and from what I have read it doesn't look like I can convert later either. How reliable would this setup be, and if the OS or host system crashed, what are the chances something would go wrong putting the drives in a new system and importing the array? How easy is it to break the array and lose data? I really just don't want to start from scratch again...

I am also interested because I am an IT person, and I don't know much about this, which now makes me want to play with it, lol. I was also thinking about unRAID and Openfiler, but I haven't done much research on those yet; I did see that I would have to buy a license to use four drives with unRAID, which I would rather not do... So what do you guys think, do you see any problems with this? Reasons I should use FreeNAS over the others? I am hoping for some good reasoning, because I usually get myself in trouble by being curious about something, trying it, and finding out the hard way it wasn't the best solution... trying to avoid that this time. I appreciate any advice, and sorry this is so long... Thanks!
 

warri

Guru
Joined
Jun 6, 2011
Messages
1,193
Here are some thoughts from my side.

First of all, virtualizing FreeNAS is possible, but not a good idea. Be sure to read http://forums.freenas.org/threads/p...nas-in-production-as-a-virtual-machine.12484/. Also, I didn't look up the server specs, but make sure to have plenty of ECC RAM (1 GB per TB of disk).
Second, RAIDZ1 is not a good idea if you value your data. I think it's more reliable than a RAID 5 setup, but there is still a chance of losing two drives at the same time and kissing your data goodbye. Go for Z2 if possible, and/or keep a good backup of critical data.

I would personally prefer ZFS/FreeNAS over the somewhat proprietary unRAID solution, because ZFS is designed to get it "right" and also offers some neat features like datasets, snapshots, integrated integrity checking, and replication. I haven't had any experience with Openfiler yet.
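
If you're curious what those features look like in practice, here's a minimal sketch from the shell (the pool/dataset names and the backup host are hypothetical):
Code:
# Take a read-only, point-in-time snapshot of a dataset
# ("tank/media" is a hypothetical pool/dataset name):
zfs snapshot tank/media@2013-10-03

# List snapshots, and roll back if something went wrong:
zfs list -t snapshot
zfs rollback tank/media@2013-10-03

# Replicate the snapshot to another machine over SSH:
zfs send tank/media@2013-10-03 | ssh backuphost zfs recv backuppool/media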
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You're on the right track. But do RAIDZ2 now. The easiest way to "lose" is to lose redundancy somehow, and it isn't beyond possibility that a drive fails and then a second drive goes flaky under the additional stress. You cannot make a RAIDZ into a RAIDZ2 later, as you noted.
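
For reference, FreeNAS builds pools through its web GUI rather than the command line, but the underlying operation is roughly this (pool and device names are hypothetical):
Code:
# A 4-disk RAIDZ2 pool: two disks' worth of usable space,
# and any two disks can fail without data loss.
zpool create tank raidz2 ada0 ada1 ada2 ada3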

If the FreeNAS OS crashes, it is on a conveniently replaceable USB (or other) flash device. If you've saved your config someplace else, recovery is as easy as getting a new flash drive, sticking the OS image on it, and then uploading the config. It is meant to be an appliance and works pretty well as one. The downside, in my experience, is that USB flash drives are less than 100% reliable, so you run a somewhat higher chance of actually needing to do this in the future. My experience and opinion don't seem to be shared by the majority, though.
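
If you want an extra copy of the config beyond the GUI's download option, here's a minimal sketch, assuming SSH is enabled on the FreeNAS box and the config database lives at /data/freenas-v1.db, as it does on FreeNAS 9.x (the hostname is hypothetical):
Code:
# Pull the FreeNAS configuration database to another machine:
scp root@freenas.local:/data/freenas-v1.db ~/backups/freenas-config-$(date +%F).db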

ESXi performs poorly with FreeNAS by default, for all the usual reasons ESXi performs poorly: ESXi insists on sync writes. For home use, you may be able to get away with disabling sync writes. This is the same thing all the little NAS boxes will advise you to do to "fix ESXi performance issues." The better solution is to get some form of SLOG device to absorb the sync writes, but you probably won't like the cost of that.
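
If you do decide to try disabling sync writes, it's a single ZFS property per dataset (the dataset name here is hypothetical), and easy to revert. Understand that a power loss can then eat the last few seconds of writes:
Code:
# Disable sync writes on one dataset only, not the whole pool:
zfs set sync=disabled tank/vmstore

# Check it, and revert to the default when done testing:
zfs get sync tank/vmstore
zfs set sync=standard tank/vmstore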
 

ThePhantom0114

Dabbler
Joined
Oct 3, 2013
Messages
12
Here are some thoughts from my side.

First of all, virtualizing FreeNAS is possible, but not a good idea. Be sure to read http://forums.freenas.org/threads/p...nas-in-production-as-a-virtual-machine.12484/. Also, I didn't look up the server specs, but make sure to have plenty of ECC RAM (1 GB per TB of disk).
Second, RAIDZ1 is not a good idea if you value your data. I think it's more reliable than a RAID 5 setup, but there is still a chance of losing two drives at the same time and kissing your data goodbye. Go for Z2 if possible, and/or keep a good backup of critical data.

I would personally prefer ZFS/FreeNAS over the somewhat proprietary unRAID solution, because ZFS is designed to get it "right" and also offers some neat features like datasets, snapshots, integrated integrity checking, and replication. I haven't had any experience with Openfiler yet.


Thanks for the input; however, either I didn't explain myself well or you misunderstood a bit. I have no intention of virtualizing FreeNAS, or any of the NAS software. It will run on its own box, connected by iSCSI or some other form of connection to the ESXi box, which will house my VMs and such. The box that will run FreeNAS is an older AMD quad core, the 965 X4 at 3.4 GHz, with 8 GB of non-ECC RAM. Hopefully that would be plenty good enough...

And to both of you: yeah, I have been thinking about Z2 a lot. I really can't afford another drive right now, as I just spent more than I should have on the Dell server for the ESXi box. I really wish you could convert a Z1 to Z2; that would make this a lot easier. The RAID 5 I had worked great and I had no complaints at all, but it went belly up due to some unknown problem, possibly my own doing, I'm not sure, so I really don't want that to happen again... I'm just worried that since I don't know a whole lot about ZFS, I will make some mistake and kill it somehow.

jgreco, thanks a lot for your info, that helps. I'm glad to know it's pretty easy to get the array back up if the OS were to fail for whatever reason, but what about on different hardware? Say the motherboard in the host dies and I have to put a different board in; will FreeNAS care at all when I try to bring the array back up? I understand USB drives aren't the most reliable, so if one did die, as long as I could get it back up with a new flash drive, I'm fine with that.

Now, about ESXi performing poorly: that is something I don't know anything about yet. Sync writes? Could you maybe explain a bit more? Like I said, I'm not looking for the fastest setup in the world, but I also need it to be decently quick, as I will be writing a LOT of large files to the array. So that is something I will need to think about.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
jgreco, thanks a lot for your info, that helps. I'm glad to know it's pretty easy to get the array back up if the OS were to fail for whatever reason, but what about on different hardware? Say the motherboard in the host dies and I have to put a different board in; will FreeNAS care at all when I try to bring the array back up? I understand USB drives aren't the most reliable, so if one did die, as long as I could get it back up with a new flash drive, I'm fine with that.

As long as the drives are attached through a regular port (mainboard SATA, an HBA controller, etc.) that hasn't written some sort of magic firmware header to them, you're fine. FreeBSD has a nice disk labeling system that handles the hard work of mapping ports to disks through their labels; FreeBSD just has to be able to see a consistent view of the label.

For example, if you get a 3Ware or LSI RAID controller, the firmware on those will install its own disk label in the first sectors of the disk. The controller then presents sectors further in as a "virtual disk" to the OS. So when the OS thinks it is writing to sector 0 on "the disk," the RAID controller is actually remapping it to sector 1000 (only an example; actual numbers differ). If you then unplug that disk from the RAID controller and plug it into a motherboard SATA port, FreeBSD sees the 3Ware disklabel, not the FreeBSD disklabel that is embedded further into the disk. This can also lock you into staying with the same brand of RAID controller if the old one fails. Ick.

In general, you should be able to take a ZFS pool created on a FreeNAS system built on an AMD desktop PC with motherboard SATA ports, export it, and then attach it to a server-style Intel Xeon system with a ServeRAID M1015 in HBA mode, and the system won't bat an eyelash; though if the network interfaces are different, you might have to reconfigure those.

The key idea is NO FANCY PORTS. ZFS loves being connected directly to its disks.
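
The move itself is only a couple of commands; a rough sketch (the pool name is hypothetical):
Code:
# On the old box: cleanly detach the pool.
zpool export tank

# On the new box: scan attached disks for importable pools,
# then import by name.
zpool import
zpool import tank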

Now, about ESXi performing poorly: that is something I don't know anything about yet. Sync writes? Could you maybe explain a bit more? Like I said, I'm not looking for the fastest setup in the world, but I also need it to be decently quick, as I will be writing a LOT of large files to the array. So that is something I will need to think about.

http://forums.freenas.org/threads/s...xi-nfs-so-slow-and-why-is-iscsi-faster.12506/

Your best course of action for ESXi and heavy NAS requirements is:

1) Create a VM. Linux, FreeBSD, whatever.
2) On the virtual machine itself, use NFS in the VM's operating system to mount the NAS.
3) Happy speeds.

ESXi itself is resistant to giving you good speeds unless you have some sort of POSIX-compliant write facility. It is well thought out, but frustrating to deal with.
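
A minimal sketch of step 2 from inside the guest (the server address and export path are hypothetical):
Code:
# See what the FreeNAS box exports, then mount it:
showmount -e 192.168.1.50
sudo mount -t nfs 192.168.1.50:/mnt/tank/media /mnt/media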
 

ThePhantom0114

Dabbler
Joined
Oct 3, 2013
Messages
12
For example, if you get a 3Ware or LSI RAID controller, the firmware on those will install its own disk label in the first sectors of the disk. The controller then presents sectors further in as a "virtual disk" to the OS. So when the OS thinks it is writing to sector 0 on "the disk," the RAID controller is actually remapping it to sector 1000 (only an example; actual numbers differ). If you then unplug that disk from the RAID controller and plug it into a motherboard SATA port, FreeBSD sees the 3Ware disklabel, not the FreeBSD disklabel that is embedded further into the disk. This can also lock you into staying with the same brand of RAID controller if the old one fails. Ick.

Alright, this makes perfect sense. Chances are I will be keeping it on desktop hardware for quite some time, and by the time I get around to buying server hardware for it, I will probably be replacing the array anyway. I just wanted to make sure that if I had to replace the motherboard or something like that, it wouldn't crap out like my Linux software RAID did (it said it couldn't find the superblocks on 2 of the 3 RAID 5 drives).

Your best course of action for ESXi and heavy NAS requirements is:

1) Create a VM. Linux, FreeBSD, whatever.
2) On the virtual machine itself, use NFS in the VM's operating system to mount the NAS.
3) Happy speeds.

ESXi itself is resistant to giving you good speeds unless you have some sort of POSIX-compliant write facility. It is well thought out, but frustrating to deal with.

OK, this is exactly what I wanted to hear! I was planning on attaching the entire array to a Linux VM anyway (Ubuntu Server), but was going to do it through RDM in ESXi. I can completely skip ESXi and just attach FreeNAS to the Ubuntu VM. Thanks for that. I have a FreeNAS VM going right now so I can play with it; I made a virtual RAIDZ1 in it just to test it out, got it connected to ESXi no problem, got it passed through to the VM no problem, but I am having trouble mounting it in Ubuntu due to the file system type... Any pointers on that? Maybe what direction to look in? With ZFS, I don't really know how to set this up; normally I would just partition the software RAID with ext4 and be done, but that isn't the case now...
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
Alright, this makes perfect sense. Chances are I will be keeping it on desktop hardware for quite some time, and by the time I get around to buying server hardware for it, I will probably be replacing the array anyway. I just wanted to make sure that if I had to replace the motherboard or something like that, it wouldn't crap out like my Linux software RAID did (it said it couldn't find the superblocks on 2 of the 3 RAID 5 drives).

OK, this is exactly what I wanted to hear! I was planning on attaching the entire array to a Linux VM anyway (Ubuntu Server), but was going to do it through RDM in ESXi. I can completely skip ESXi and just attach FreeNAS to the Ubuntu VM. Thanks for that. I have a FreeNAS VM going right now so I can play with it; I made a virtual RAIDZ1 in it just to test it out, got it connected to ESXi no problem, got it passed through to the VM no problem, but I am having trouble mounting it in Ubuntu due to the file system type... Any pointers on that? Maybe what direction to look in? With ZFS, I don't really know how to set this up; normally I would just partition the software RAID with ext4 and be done, but that isn't the case now...
If you are using iSCSI to pass the disk through via RDM to the VM, it is presented as a raw disk, and you would have to put your own file system on it, just as if you had stuck a physical disk in the server.

If you use NFS as suggested, you don't put a file system on it; NFS is a network protocol, and it doesn't matter to the client what file system is on the other end.
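
To illustrate the iSCSI case, a rough sketch from inside the guest (the /dev/sdb device name is hypothetical; verify it first, since mkfs wipes the disk):
Code:
# The iSCSI/RDM disk shows up as a plain block device in the guest.
# Check which device it is before formatting!
lsblk
sudo mkfs.ext4 /dev/sdb
sudo mount /dev/sdb /mnt/media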
 

ThePhantom0114

Dabbler
Joined
Oct 3, 2013
Messages
12
If you are using iSCSI to pass the disk through via RDM to the VM, it is presented as a raw disk, and you would have to put your own file system on it, just as if you had stuck a physical disk in the server.

If you use NFS as suggested, you don't put a file system on it; NFS is a network protocol, and it doesn't matter to the client what file system is on the other end.


Alright, good to know. I'm not sure why I had trouble then with the iSCSI connection; that's what I tried, treating it as a physically connected drive. Maybe it was just a simple error somewhere. I will check again, just for the experience, and see if I can figure out why it won't work. I have never messed with NFS before, so I am going to have to look up how to mount that, but I'm glad it should work so well. Thanks!
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
I have never messed with NFS before, so I am going to have to look up how to mount that, but I'm glad it should work so well. Thanks!


In its simplest form:
Code:
sudo mount -t nfs ipaddress.of.nfs.server:/path/to/nfs/share /path/to/local/directory/you/want/location/mounted/to

Of course, there are many parameters you can add to that to utilize different options.

The preferred way would be to edit /etc/fstab and make it automount.
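
For example, a line roughly like this in /etc/fstab mounts the share at boot (the server address and paths are hypothetical):
Code:
# server:/export                 mount point   type  options   dump pass
192.168.1.50:/mnt/tank/media     /mnt/media    nfs   defaults  0    0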

Read up on using NFS on Ubuntu here (skip to the CLIENT section):
https://help.ubuntu.com/community/SettingUpNFSHowTo
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Caution: "automount" has a magic meaning for UNIX admins. An fstab entry is a persistent mount.
 

ThePhantom0114

Dabbler
Joined
Oct 3, 2013
Messages
12
Thank you both very much. I actually found that same page and have been playing with it using my VM, just to get the hang of it. I have it working now using /etc/fstab with no security, so that's a great start. I will have to look into authentication at some point, but this will get the job done. I used the manual mount command you used above, except with a -o in there (I don't know what that is for...), and it worked just fine. Then I rebooted the VM, tried adding it to fstab, and did a mount -a; it showed as mounted, but the space showed up wrong, my test file wasn't there, and I didn't have write access. Not knowing what was wrong, I just restarted again, got no fstab error, did a df -h and it didn't show the mount, but I checked manually and it showed up correctly, and now I have full rights... so I'm not sure what happened. I guess I just need to play with it a bit more.

It looks like I am going to use FreeNAS; I think you guys have convinced me. I am going to try to buy one more drive to make a RAIDZ2 with 5 x 3 TB, 9 TB usable, and attach it with NFS. That should be plenty of storage, and two parity drives will be reassuring. Thanks a lot for the advice, I really appreciate it. Any other last tips that might help me out with this setup?

Edit: Quick question. I just tested a failed array by removing a vdisk from FreeNAS. FreeNAS shows everything correctly: it's in a degraded state, one disk is missing, but it says it's still functional. In Ubuntu, the share mounted without error, but no files show up... any ideas on that?
 