FreeNAS and ESXi5 Questions - backups, media, & XBMC usage

Status
Not open for further replies.

slyph

Cadet
Joined
Sep 6, 2012
Messages
5
Hello,

I'm diving into both the storage and virtual worlds head first with very little experience. I'm trying to build a lab environment that will fit my current and future needs. Please correct any assumptions that I may make. Any help would be appreciated! :D

From my limited research, here's what my plan requires:
ESXi5 server
- Boot ESXi from USB flash drive
- 16GB of RAM
- An 8 virtual core processor
- No hard drive
- Debian Squeeze VM (web server, mail server, MySQL for XBMC and web, etc)
- Win7/Win Server VM (domain, torrent, RDP access, etc)
- Ubuntu/CentOS VM (testing, breaking)
- A spare VM slot for future-proofing

FreeNAS server - latest nightly build
- Boot FreeNAS from 2GB USB flash drive
- 16 or 32GB of ECC RAM
- Dual core processor

Extra:
- UPS w/ pure sine wave for both servers
- Have a separate backup of important files

I'd like to be able to do the following at the same time:
- Torrent using Win7 VM to/from NAS
- Run my other linux VMs (webserver, MySQL, etc) off of NAS datastore
- Copy a finished torrent to my NFS shared drive on my NAS
- Stream from my NAS to multiple XBMC clients to watch a show without buffering

Questions:
Do I need to separate my datastore and torrents?
I was originally hoping I could do all of the above without worrying about performance by creating a single zpool with 2 vdevs, each 5x 2TB disks in RAIDZ. However, I may need to separate my datastore and torrents into something like this:
ESXi Datastore - 2x 500GB RAID1
torrent seed - 1x 500GB
Media NFS - zpool with 2 vdevs each with 5x 2TB disks in RAIDZ
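If my math is right, the usable space works out roughly like this (ignoring ZFS metadata overhead; the device names in the comments are placeholders, not my actual disks):

```shell
# Back-of-the-envelope usable capacity:
#   RAIDZ of N disks of size S  -> (N - 1) * S
#   2-way mirror of size S      -> S

media_tb=$(( 2 * (5 - 1) * 2 ))   # two RAIDZ vdevs of 5x 2TB
echo "media pool usable: ~${media_tb} TB"

# The pools themselves would be created roughly like this
# (da0..da9, ada0, ada1 are placeholder device names):
#   zpool create media raidz da0 da1 da2 da3 da4 raidz da5 da6 da7 da8 da9
#   zpool create datastore mirror ada0 ada1
```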

Does anyone have any experience with the performance needs of a datastore or torrenting?

I'd like to be able to rack both my servers. Any suggestions on cases for the FreeNAS server?

Is it better to find a case that supports many drives or buy a RAID card that's not used for hardware RAID?

Should I think about dual Gig nics?

How important is it that my vdevs match? Could I have vdev1 be 5x 2TB RAIDZ and vdev2 be 3x 3TB RAIDZ in my zpool?

Do I need to worry about ZIL or L2ARC's if I max out my motherboard's RAM?

I've seen multiple threads discussing FreeNAS as an ESXi VM. Would that be a more manageable approach for me?

Thanks!!
 

slyph

Cadet
Joined
Sep 6, 2012
Messages
5
Also, has the 4k drive issue been completely resolved in FreeNAS 8? I can find several posts saying that the option just has to be enabled for 4k drives.
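For what it's worth, my understanding is that the alignment can be checked from the CLI after the pool is created, something like this (pool name "tank" is a placeholder):

```shell
# Verify sector alignment on 4K ("Advanced Format") drives:
zdb -C tank | grep ashift
# ashift: 12  -> pool is aligned for 4K sectors
# ashift: 9   -> pool was created assuming 512-byte sectors
```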

Thanks!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You have a great set of questions.

Do I need to separate my datastore and torrents?

No. However, performance will be better if you do. Mirroring consistently gives much better performance for VM use than RAIDZ2 does; some people say RAIDZ isn't so bad, so I'll just make the general point and leave the testing to you.
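Roughly, the two approaches look like this (illustrative only; da0..da9 are placeholder device names):

```shell
# Striped mirrors for the VM datastore: each mirror vdev serves I/O
# independently, so random IOPS scale with the number of vdevs.
zpool create vmstore mirror da0 da1 mirror da2 da3

# RAIDZ2 for bulk media: better capacity efficiency, but small random
# writes pay a parity cost, which hurts VM workloads.
zpool create media raidz2 da4 da5 da6 da7 da8 da9
```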

I was originally hoping I could do all of the above without worrying about performance by creating a single zpool with 2 vdevs, each 5x 2TB disks in RAIDZ. However, I may need to separate my datastore and torrents into something like this:
ESXi Datastore - 2x 500GB RAID1
torrent seed - 1x 500GB
Media NFS - zpool with 2 vdevs each with 5x 2TB disks in RAIDZ

Does anyone have any experience with the performance needs of a datastore or torrenting?

How much bandwidth do you have? Putting a torrent server on a 1Gbps pipe in a datacenter somewhere is very different than putting up a torrent server on the end of a consumer 768/384 DSL line.

How busy do you expect your media filesystem to be?

I'd like to be able to rack both my servers. Any suggestions on cases for the FreeNAS server?

Is it better to find a case that supports many drives

People around here seem to like the Norcos. They seem to have some mild quality control issues. Suggest testing all backplanes while still in warranty. Regardless, it is far, far better to go with a single chassis than it is to half-ass it with external enclosures, especially external USB enclosures.

or buy a RAID card that's not used for hardware RAID?

Should I think about dual Gig nics?

RAID card, look at the ... I want to say IBM M1015, easily found on eBay and raved about by many here. Once flashed, it becomes a highly competent eight port SATA controller.

Dual gigE? Good for performance but probably not absolutely necessary for a home setup. On the other hand, the quality server-class motherboards you're undoubtedly considering are all going to offer them onboard.

How important is it that my vdevs match?

Totally unimportant. Just don't do something stupid like add a vdev without redundancy to a pool that has redundancy. Be aware that failure of a vdev is probably fatal to a pool.

Could I have vdev1 be 5x 2TB RAIDZ and vdev2 be 3x 3TB RAIDZ in my zpool?

Yup.
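Sketch, with placeholder device names:

```shell
# Both vdevs carry redundancy, which is the important part.
zpool create tank raidz da0 da1 da2 da3 da4   # vdev1: 5x 2TB RAIDZ
zpool add    tank raidz da5 da6 da7           # vdev2: 3x 3TB RAIDZ

# Caution: "zpool add tank da8" (no raidz/mirror keyword) would add a
# single non-redundant disk whose failure kills the whole pool.
```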

Do I need to worry about ZIL or L2ARC's if I max out my motherboard's RAM?

ZIL increases performance of certain things, but at a risk to the pool, because failure of a ZIL isn't handled too gracefully. People work around this by mirroring the ZIL. L2ARC is always fun if you have a spare SSD, but with lots of RAM may not be all that noticeable.
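For reference, both get added to an existing pool along these lines (placeholder device names; ada0..ada2 would be SSDs):

```shell
zpool add tank log mirror ada0 ada1   # mirrored SLOG: protects sync writes
zpool add tank cache ada2             # L2ARC: safe to lose, it's just a cache

# Cache devices can be removed again at any time:
#   zpool remove tank ada2
```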

I've seen multiple threads discussing FreeNAS as an ESXi VM. Would that be a more manageable approach for me?

A lot of clever people want to do that. You have no place to put the FreeNAS VM files. Even if you fix that by creating a local ESXi datastore on a small drive, you'll discover that ESXi doesn't handle it gracefully: ESXi really wants to see its datastores out there at boot-time, and you'll discover various problems (such as VMs not starting, datastores not being found, etc).
 

TravisT

Patron
Joined
May 29, 2011
Messages
297
I can offer some input on a couple of your questions.

1. Cases. Can't comment on the NORCO case, but I purchased three SuperMicro hot-swap backplanes that each fit 5 drives into the space of three 5.25" bays. I installed these into a Cooler Master case that had 9 external drive bays, giving me up to 15 easily removable drives. These Supermicro drive bays are very nice looking and very well made... however, their quality control and tech/warranty support are lacking, to say the least. Search my posts for all the problems I've had with them. Long story short, I'm still using them, but I had to modify them to make them work correctly.

2. FreeNAS on ESXi. I've been down this road, and it worked successfully for quite some time. There are a couple of problems with this, though. One, you have to get device pass-through working to get the most out of ZFS. I didn't do this, as it was frustrating to find info on how to set it up. Instead, I added all my disks to ESXi, created the largest possible datastore on them, then presented that to my FreeNAS VM. By doing this, any issues with the underlying VMDK filesystem would go undetected by ZFS. Not to mention I had some performance issues because I was running my VMs off one SATA drive.

I'm in the process of moving all drives to a dedicated FreeNAS box. I'm going to run ESXi on a barebones machine with no HDDs, using an iSCSI datastore, and host my files/media/etc. on the FreeNAS box.
 

slyph

Cadet
Joined
Sep 6, 2012
Messages
5
Thanks for the replies!!

As you say, jgreco, it looks like I need to do my own benchmarks. I've already purchased 6 drives. This should allow me to benchmark several RAID configurations (mirrored vdevs, RAIDZ, etc) before I move my data onto the NAS.
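My rough plan for the benchmarks, with placeholder device names, is something like:

```shell
# Crude sequential-throughput comparison of layouts on the six drives.
# Each pass destroys the test pool, so this runs on empty disks only.
# Note: /dev/zero compresses away to nothing, so use real data if
# compression is enabled on the pool.
for layout in "mirror da0 da1 mirror da2 da3 mirror da4 da5" \
              "raidz2 da0 da1 da2 da3 da4 da5"; do
  zpool create bench $layout          # default mountpoint is /bench
  dd if=/dev/zero of=/bench/testfile bs=1M count=8192
  zpool destroy bench
done
```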

TravisT, please let me know if you learned anything in your transition to ESXi on barebones connecting to a FreeNAS server via iSCSI. This is very similar to what I plan on doing.

I'm leaning toward this build:
NORCO RPC-2212 - (269.99)
IBM M1015 x2
Highpoint miniSAS to SATA - (24.99) x3
Seagate ST3000DM001 - (127.49) x10 (6 already purchased)
Kingston 8GB ECC DDR3 1600 - (59.99) x2
Supermicro MBD-X9SCM-F-O - (199.99)
Xeon E3-1230 3.3GHz Quad - (244.99)
Rosewill CAPSTONE-450 Gold Plus - (69.99)
WD 500GB Green for torrents - already own
Hitachi 3TB hot spare - already own
Total ~ $2500

This, unfortunately, is only half of the lab environment I'm building since I also want an ESXi server.

The most expensive decision I made with this build was going with ECC memory. That pushes the motherboard from desktop to server grade, and the processor along with it. I'm still looking into other options.

Questions:
Is a Volume a zpool in the GUI? I'm having a hard time finding the "click here to create a new vdev for your zpool" option. I've been doing my tests from the CLI to make sure I'm configuring it correctly.

Can anyone go into more detail about resilvering? Any benchmarks? Some posters say it took multiple weeks to resilver and that doesn't seem acceptable to me.

Does Windows 7 or Server 2008 support NFS well? There seem to be a lot of NFS clients for Windows that could handle this. Again, I'll have to perform some benchmarks comparing CIFS, iSCSI, and NFS.

ZFS seems to rock! Any downsides that I should be aware of?

Thanks for any help! :)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Pretty sure that's the wrong memory for that board. Check the Kingston site for the correct part. We use X9SC-family stuff here with Sandy Bridge E3-1230's and like it a lot.

I've discussed performance a lot in the past; one thing I'll note is that ZFS is kind of piggy, and the smaller Atom/E350/N40L/etc systems all tend to max out around the point that ZFS kind-of works and is stable. The E3-1230 will idle at a low power consumption, and so if it's not being exploited, you're out maybe a dozen watts and a hundred bucks over a lower-priced solution, but if you suddenly need more CPU (compression, etc) you have it - while the lower-priced guy has to live with it. So in my book, worth it, not overboard, unless you can't afford an extra hundred bucks. And if you start feeling like that, then go look at the QNAP and Synology offerings and see what the markup is on Atom based systems.

There's a great explanation of ZFS parts (well allegedly great, I think I just glanced at it) hanging around the forum somewhere.

NFS under FreeNAS is pretty decent and very stable. There've been some problems apparently related to locking with OSX, and the workaround is to "don't do that" (which sucks if you need locking). Anyone with any NFS experience knows locking is always the thing that gets you.

Resilvering is the process of rebuilding data from redundant information. Scrubbing is the process of checking the coherence of all the data.
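Both report their progress in zpool status, e.g. (pool name is a placeholder):

```shell
zpool scrub tank    # kick off a scrub by hand
zpool status tank   # shows "scrub in progress" or "resilver in progress"
                    # along with an estimated completion time
```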

ZFS is awesome except when it isn't. There are lots of downsides, many of which are avoidable through design choices. For us, we were trying to recycle some old high-efficiency 1U Opteron boxes (4 drives) and were running into all kinds of problems with ZFS performance; an 8GB Opteron 240EE should be able to saturate the wire (and did, easily, under UFS). Yet RAIDZ2 on 4 drives stinks for reasons that seem elusive, and that doesn't appear to be limited to our 1U boxes. FreeNAS is designed for large amounts of memory (8GB+), a design choice that might bite Atom- and E350-fans. Tuning can "make it work" though. FreeBSD 8 lacks some of the infrastructure to do automatic replacement of failed ZFS drives, so FreeNAS can't do that quite yet. ZFS is a copy-on-write filesystem, meaning that file fragmentation can be an issue. Etc. But you asked. None of it is unmanageable or impossible.
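The sort of /boot/loader.conf tuning I mean looks like this (values are illustrative, not recommendations):

```shell
# Tunables sometimes used to make ZFS behave on low-RAM boxes:
vfs.zfs.arc_max="4G"             # cap the ARC so the OS keeps some headroom
vfs.zfs.prefetch_disable="1"     # prefetch hurts more than it helps when RAM is tight
```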
 

TravisT

Patron
Joined
May 29, 2011
Messages
297
I haven't fully transitioned over to iSCSI, but I'm working towards it. As soon as I get my ZFS volumes burned in and working at a level that I feel comfortable with, I'll begin transitioning over to iSCSI. I'll be glad to post up any info I have when I get there.
 