Please do not run FreeNAS in production as a Virtual Machine!

Status
Not open for further replies.

loesje

Cadet
Joined
Nov 25, 2013
Messages
4
Because of this thread: no virtualisation, so no ESXi.
FreeNAS on the bare metal, nothing in between.
Besides, three disks can be mounted very easily in your running OS, and that OS gives you all the bells and whistles for networking and sharing.
 

DJABE

Contributor
Joined
Jan 28, 2014
Messages
154
I don't feel that a virtualized environment is a good idea for a storage appliance, so I respect the advice given in the first post of this topic.
The freedom of bare metal is what FreeNAS/NAS4Free is designed for.

I'm just wondering what to do with my old dual-core machine limited to 4 GB of DDR2 memory... I was thinking about RAID5 (RAIDZ) via NAS4Free or FreeNAS, using 4 x 3TB WD Red drives (which have TLER support).

Since FreeNAS uses software RAID, is there any benefit to using drives with TLER support when there is no hardware RAID controller? I doubt it, but since these drives are more robust and reliable (built for 24x7 NAS appliances) I'm considering them anyway.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Since FreeNAS uses software RAID, is there any benefit to using drives with TLER support when there is no hardware RAID controller? I doubt it, but since these drives are more robust and reliable (built for 24x7 NAS appliances) I'm considering them anyway.

It is a common misconception that TLER has something to do with hardware RAID controllers. It doesn't. TLER (time-limited error recovery) effectively bounds the amount of time a drive can spend twiddling around trying to retrieve a bit of data.

For a desktop PC, it can be assumed that there is no other source for that data, therefore the choices are try-real-hard or give-up-and-report-error. Data loss being generally considered bad, the choice is to twiddle around trying to retrieve it. In the meantime, the PC or application may well appear to freeze.

For a ZFS pool, there is probably a redundant source for the information, so causing the pool to suspend its workload in order to try real hard to retrieve a block is disruptive to the many other things that might be going on in the pool. It may be safer and better to just consider the block unretrievable, rebuild it from mirror or parity, and go on with life. In this case, TLER is the specific function that can help the drive understand the storage system's preference.

Hardware RAID controllers do something similar, in that they expect bounded responsiveness from a drive. However, when a drive fails to respond rapidly, most hardware RAID controllers are pretty stupid: they figure the drive is dead, so they may drop it from the array and force a replacement/rebuild operation to begin. This is not exactly brilliant, and it is why hardware RAID admins make sure TLER is a checklist item for purchases.
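For what it's worth, the generic name for the TLER-style timer is SCT Error Recovery Control, and on drives that expose it you can usually read and set the limit with smartmontools. A minimal sketch, assuming smartctl is installed and the disk shows up as /dev/ada0 (a placeholder; adjust the device name for your system). Values are in tenths of a second, and on many drives the setting does not survive a power cycle:

```python
import subprocess

DISK = "/dev/ada0"  # example device name; adjust for your system

# Show the current SCT Error Recovery Control (TLER/ERC) timers, if supported.
subprocess.run(["smartctl", "-l", "scterc", DISK])

# Set read and write recovery limits to 7.0 seconds (values are deciseconds).
# Drives without SCT ERC support will simply report that it is unavailable.
subprocess.run(["smartctl", "-l", "scterc,70,70", DISK])
```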
 

DJABE

Contributor
Joined
Jan 28, 2014
Messages
154
Jgreco, thank you for commenting.

Here's a really nice article regarding this matter: http://www.smallnetbuilder.com/nas/nas-features/31202-should-you-use-tler-drives-in-your-raid-nas
None of the manufacturers quoted there even consider TLER or other error-recovery-control drive features for a RAID array... I was wondering what the situation is with FreeNAS or NAS4Free.

I'm looking at a TLER-enabled drive (WD Red, EFRX series) for a custom FreeNAS build. So, your opinion is that it's OK to go this route? The main advantage comes from the fact that these drives are built for 24/7 and "enterprise" workloads (according to WD's marketing).

"While TLER is designed for RAID environments, a drive with TLER enabled will work with no performance decrease when used in non-RAID environments."
http://wdc.custhelp.com/app/answers/detail/a_id/1397

Sorry for going off-topic!

Gonna give it a try with UFS first, and plan to move on to ZFS RAIDZ2. Bare metal, no VM OFC :)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The quoted article is from almost four years ago, at which time I would expect that most NAS appliances were in relative infancy, and vendors were not yet releasing NAS-specific versions of their products.

It is not clear that the current generation of NAS drives is "enterprise" grade. As a matter of fact, I'd wager not... WDC certainly doesn't seem to want to say so.

http://www.wdc.com/en/products/internal/nas/

[attached screenshot: Capture.PNG]

Basically, the way the storage market is expected to evolve is that so-called "enterprise" drives are going to more or less go away. Flash is tons faster and can be combined with slower, massive platter-based storage to give a hybrid system that provides some of the speed of one and the capacity of the other at a lower cost. ZFS is an early indicator of this evolution. Drive manufacturers are under significant pressure from SSDs, "The Cloud," "The Demise Of The PC," the relative ease with which things like Intel SRT/LSI CacheCade can already create hybrid solutions, and other factors.

Or you can just go old school! The reason for buying a drive like the WD4000FYYZ (4TB enterprise 7200RPM "RAID Edition") is to go faster, but for $350 per drive, I can buy at least two 5400RPM "Red" drives for the same money. Now admittedly the 5400s are slower drives, but having two separate sets of heads is, in aggregate, faster. Consider what that means for a shelf of drives. For redundancy. Etc.

ZFS is well positioned to make great use of the potential of a pile of slower drives. Whether you pick a TLER drive or not is a matter of whether your workload can afford to be paused while a drive's error recovery algorithm runs. ZFS itself won't care too much....
 

sandvika

Cadet
Joined
May 5, 2014
Messages
2
Great topic from jgreco; it has made FreeNAS a reality in my home lab / consolidated server, even when there were times it would have been easier to quit. I'll be honest, I was dragged kicking and screaming into compliance, but I'll share my journey and my reasoning so that you can avoid the same pitfalls I fell into.

Years ago I purchased a WD "MyBook" NAS-in-a-box, and when I only got 5 MB/s over gigabit Ethernet - essentially impossible to back up and too slow to use - I ripped the disks out of it and put them in my PC, mirrored on a cheap VIA RAID card: altogether better, but defeating the objective. Fast forward five years: my PC was long in the tooth and the VIA RAID driver, which hadn't been updated in years, proved to be the cause of increasingly frequent BSODs, so it was time for the next computer. Having gained much experience with ESXi in the meantime, I figured it was time for a home lab and another stab at NAS. Exactly what this thread cautions against.

My server build is based on the ASUS Z9PA-D8 dual-socket 2011 (Xeon E5 series) motherboard, with LSI 9240-8i controllers for the storage. For decent performance and minimum rebuild times I wanted lots of small disks - this allowed SATA rather than the much more expensive SAS. I have 8 x 750 GB 2.5" WD Black as the NAS zpool, another 4 as RAID0 temporary storage (e.g. for HD video editing) and for pillaging when a NAS drive does need replacing, and 4 x 1TB 3.5" WD Black as RAID10 for the hypervisor and VMs. The 12 x 750GB disks are in a pair of Icy Dock drive bays.

I agree unequivocally that the ZFS volume has to be accessible from bare metal when the hypervisor fails, but where I didn't particularly agree was that hardware RAID should be bypassed. The whole point of a smart LSI controller with its own PowerPC processor is to manage and recover from drive failures without loading that work onto the host processors or operating system. I set about doing performance tests with ESXi and was sorely disappointed with the LSI 9240-8i as RAID5 - under VMware ESXi 5.5, just copying VMs (serial writes of large objects) averaged a pathetic 5 MB/s write and 100 MB/s read. Using disk testing tools within VMs, write speed was up to 60 MB/s and read speed up to 2400 MB/s, but this wasn't representative of normal use. I figured the LSI was not performing well, and reconfiguring as RAID50 would confirm this if it were just as slow - which it was! So at this point I shrugged, reconfigured the drives as RAID10 and got fantastic performance. Then a bit of googling confirmed what I already knew by then: the LSI 9240-8i isn't good at XORing o_O
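For anyone who wants to repeat that kind of quick sanity check, here is a minimal sketch of a sequential-write test (the path /mnt/tank/throughput.tmp is just an example, and a real benchmarking tool such as iozone will do a far more thorough job):

```python
import os
import time

PATH = "/mnt/tank/throughput.tmp"   # example path on the volume under test
BLOCK = os.urandom(1 << 20)         # 1 MiB of random data, so dataset
                                    # compression doesn't flatter the result
TOTAL_MB = 2048

start = time.time()
with open(PATH, "wb") as f:
    for _ in range(TOTAL_MB):
        f.write(BLOCK)
    f.flush()
    os.fsync(f.fileno())            # make sure the data actually reached disk
elapsed = time.time() - start

print(f"{TOTAL_MB / elapsed:.1f} MB/s sequential write")
os.remove(PATH)
```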

I built my FreeNAS on this RAID10 (stubbornly determined to keep the hardware doing the job it is supposed to do), booting from a USB stick, and all was wonderful. Performance was fabulous. I then went to build it on ESXi 5.5 with PCI passthrough of the LSI 9240-8i... and it started with tears and got nowhere. I eventually found that there's a bug in ESXi that has been there since version 4 and that they seem to be in no hurry to fix - the FreeBSD driver resets the LSI during boot configuration and it never comes back; just a succession of timeouts is reported. At this point I was on the verge of giving up on FreeNAS, but lots of googling later I figured it was worth persisting, because there was no better candidate and one thing I don't have time or enthusiasm for is reinventing the wheel.

So, I finally conceded defeat on the LSI 9240-8i on the basis that VMware won't fix the passthrough, reflashed it with the plain LSI 9211-8i IT firmware, rebuilt my USB stick install of FreeNAS and got a nice RAIDZ2 volume. What I don't understand is how 8 x 750GB produces just 2.5TB of RAIDZ2 volume, when I'd have expected it to be about 4.5TB. I must be missing something... My hunch is that FreeNAS configured it "optimally" as a pair of 4 x 750GB RAIDZ2 vdevs, so I have lost half the disk space to parity, with the ability to lose 4 of the 8 disks! I then went to build it on ESXi with passthrough of the masqueraded "9211-8i" and it works fine. I cheated on the VMware Tools installation - I just took the FreeBSD 9.0 drivers from it and didn't install Perl or run any scripts - but this works a treat too, and I have my VMXNET3 interfaces working at 10Gbps and FreeNAS delivering fabulous performance both to VMs and physical hosts.
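A rough sketch of the capacity arithmetic, in case anyone else hits the same surprise (this assumes a marketing "750 GB" is roughly 698 GiB in binary units, and ignores swap partitions and pool overhead):

```python
# Rough RAIDZ2 capacity arithmetic; ignores swap partitions and pool overhead.
GIB_PER_DISK = 750e9 / 2**30   # a "750 GB" drive is roughly 698 GiB

def raidz2_usable_tib(disks_per_vdev: int, vdevs: int) -> float:
    # Each RAIDZ2 vdev yields (n - 2) disks' worth of data space.
    return (disks_per_vdev - 2) * vdevs * GIB_PER_DISK / 1024

print(raidz2_usable_tib(8, 1))   # one 8-disk RAIDZ2   -> ~4.1 TiB
print(raidz2_usable_tib(4, 2))   # two 4-disk RAIDZ2s  -> ~2.7 TiB
```

So a pair of 4-disk RAIDZ2 vdevs really does give up half the raw space to parity, which fits the ~2.5TB figure once swap and overhead come off the top, while a single 8-disk RAIDZ2 would be in the 4+ TB range I expected.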

There is however a downside. When in 9240-8i RAID10 mode, I could unplug and replug a disk and it would just rebuild on the fly, just as it should. Now that the LSI controller is in humble 9211 IT mode, I can unplug and replug a disk but FreeNAS doesn't detect the replacement, and a reboot is needed to pick it up again. Satisfactory for a home NAS but not in a real production environment (read the subject of this thread again ;) ). Thus my preference for hardware RAID appears vindicated - had it actually worked through ESXi! However, FreeNAS appears to be pretty clever at working out that the replugged drive is still fine... I will have to comb the syslog to see whether it actually did a rebuild during boot before I could get back into the GUI.
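One thing that may be worth trying before resorting to a reboot (I haven't verified it in this exact passthrough setup, so treat it as a guess): FreeBSD can be asked to rescan its buses, which is sometimes enough for a re-inserted disk to show up again. A minimal sketch:

```python
import subprocess

# Ask the FreeBSD CAM layer to rescan all buses so a re-inserted disk can be
# rediscovered without rebooting, then list what it found.
subprocess.run(["camcontrol", "rescan", "all"], check=True)
subprocess.run(["camcontrol", "devlist"], check=True)
```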

So, aside from the mysterious loss of storage space, I'm mostly there. It's not necessarily the end of my journey though, because I'm also trying to pass through a GPU to have the ESXi server function as a decent workstation too. That makes reflashing the LSI seem trivial - there are guys changing the resistors on their GPUs to get them to masquerade as different graphics cards on the PCI bus so they can be passed through. The crux of the issue is backward compatibility with all the previous defunct graphics standards, which ESXi PCI passthrough doesn't deal with and VMware doesn't care about because it's a server hypervisor! I'm not prepared to put my soldering iron to a brand new GPU, so I have a supposedly good bet on order, with passthrough reportedly possible on both ESXi and XenServer. If ESXi proves impossible and XenServer works, then I'll be revisiting my FreeNAS build and attempting to get it going on XenServer, so for now it's a test bed and I will be hammering it to make sure nothing breaks before transferring my data across for the final time.

I've spent the last few years working for software vendors who insisted on bare-metal deployments while the tide of virtualisation and cloud turned against them, reluctantly bowing to the inevitable and adapting. Basically, if you don't support both virtualisation AND cloud, expect to become obsolete and forgotten within just a couple of years. I hope that the FreeNAS team will choose to adapt rather than become obsolete, and that what we're doing here will become normal and supported :cool:
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I truly hope you aren't trying to justify putting ZFS on hardware RAID. There are plenty of "examples" of people who did that, and on the first rebuild it killed the pool. You also lose the self-healing properties of ZFS, since ZFS is no longer handling your redundancy. If you really are advocating that, you're going to find out just what it is like to lose your data.

Quite literally, putting ZFS on hardware RAID is worse than choosing hardware RAID without ZFS. You've gutted the features that make ZFS great and replaced them with a design the creators never intended.

As for rebuilding on the fly: that is absolutely possible with ZFS. I've replaced more than a dozen hard drives in various servers and never required downtime. So your "vindication" is not only extremely inaccurate, but with hardware RAID under ZFS you have a failure mode you won't even know is coming until it hits you smack in the face... just like the "examples" that have been made out of dozens of people on this forum. It all sounds great, until the second it doesn't (you know... just like jgreco said in the opening of this thread). Then you are left scratching your head, wondering why the hell you didn't listen to us and asking yourself where the backups are.

As for the virtualization and cloud support, you seem not to understand the problem. The problem isn't that FreeNAS shouldn't be virtualized. The problem is that ZFS shouldn't be virtualized. That's not our problem, and frankly, I'm not too worried about being "obsolete". ZFS is something that has no alternative anywhere. It is unmatched. People who want the kind of data protection that ZFS offers will absolutely sacrifice small things like "virtualization" for absolute certainty that their data isn't corrupting behind their back. So thinking that your post is going to make "us" open our eyes shows your lack of understanding. Go tell Oracle/Sun, FreeBSD, and the other ZFS supporters that they should be prepared to "adapt or die". They'll laugh at you, because you aren't seeing "their" bigger picture with ZFS.

ZFS is growing by leaps and bounds every day. So clearly there isn't much risk of becoming obsolete, despite the setback with virtualization.
 

IanWorthington

Contributor
Joined
Sep 13, 2013
Messages
144
Personally, one day soonish I'd like to be able to run Windows on the same box that ZFS is running on. Whether that means just a virtualized Windows, or both virtualized, I don't really mind - just something that works - and I appreciate all who are poking at it trying to make it happen.

 

pbucher

Contributor
Joined
Oct 15, 2012
Messages
180
There are plenty of "examples" of people who did that, and on the first rebuild it killed the pool...................

As for the virtualization and cloud support, you seem not to understand the problem. The problem isn't that FreeNAS shouldn't be virtualized. The problem is that ZFS shouldn't be virtualized. That's not our problem, and frankly, I'm not too worried about being "obsolete". ZFS is something that has no alternative anywhere. It is unmatched. People who want the kind of data protection that ZFS offers will absolutely sacrifice small things like "virtualization" for absolute certainty that their data isn't corrupting behind their back. So thinking that your post is going to make "us" open our eyes shows your lack of understanding. Go tell Oracle/Sun, FreeBSD, and the other ZFS supporters that they should be prepared to "adapt or die". They'll laugh at you, because you aren't seeing "their" bigger picture with ZFS.

ZFS is growing by leaps and bounds every day. So clearly there isn't much risk of becoming obsolete, despite the setback with virtualization.

Wow, that's some real FUD you are spreading there... people's RAID cards eating their pools? You've got to be kidding me....

ZFS sure is growing by leaps and bounds every day, and you know what, the new folks (and many of the old folks too) in town fully support virtualization. In fact, several other vendors have ready-made appliances for one's virtual environment. Virtual SANs are the hot game in town for the small-business market. Just about every SAN vendor is pushing into this space, and many new ones are popping up using ZFS as their building block. Also, apparently you had better tell iXsystems to stop uploading virtual appliances to http://download.freenas.org/9.2.1.5/RELEASE/x64/ too, if it's not supported and considered a bad idea.

Since I'm in a fired-up mood today, I'll also point out that some of the competition actually suggests putting ZFS on top of a RAID subsystem, since RAID cards do such a nice job of things like flashing red LEDs on failed drives and doing automatic rebuilds of your array at 2am. Not to mention that the RAID vendors have had this tech working for decades now, so it's pretty battle-hardened. Now, I'm not saying I agree with putting ZFS on RAID (I haven't tried it), but I can see that it makes sense to let a piece of dedicated hardware handle the disks, since most ZFS vendors haven't gotten their GUIs up to speed for handling disk enclosures and I'm not seeing much movement in the last year on automatic replacement of failed disks. Also, it's a lot easier to tell some low-end tech at a remote site to pull out the disk with the red light on it and replace it. And spare me the decade-old RAID write-hole thing and everything else that's wrong with RAID; yes, the hole still exists, but most modern systems patched or mitigated it years ago. As always, you can design a system that opens that hole back up if you wish to violate the card vendor's recommended practices.

But I think it's high time for FreeNAS to embrace the virtualization realm, or history will repeat itself and those that do will eat FreeNAS's lunch using ZFS. The RAID thing is also worth some investigation, by folks with the desire to make it work.

Anyway, nice write-up there, sandvika; keep up the hard work.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The real problem isn't competent professionals who know what they're doing and why. Anyone who can read my list of issues and understand how to mitigate them is certainly welcome to try virtualizing.

However, this is the PC world, it's a race to the bottom. The difference between a pbucher or jgreco and the average N00b reading the N00bs forum is pretty high; chances are you and I have done lots of things right and aren't unknowingly flirting with danger.

It is absolutely nucking futs to compare some vendor's well-engineered "virtual SAN" product against FreeNAS; they're always likely to be deployed substantially differently ... one by an experienced IT person with knowledge of virtualization and testing prior to cramming things into production use, the other by a hobbyist who hears buzzwords like "virtualization" and "free nas" and tries to put the Legos together however they happen to fit on top of his hardware.

iXsystems provides VM appliances so that people can experiment with FreeNAS more easily. Pretty sure they aren't meant for significant deployments.

The cold realities of ESXi mean that you can absolutely cram FreeNAS on it - and it'll even work - but if something goes wrong, you really need a plan on how to recover your data, which is why I put so much care and detail into that "absolutely must virtualize FreeNAS ... how not to lose your data" document.

If you can think of something specific, ANYTHING specific, that FreeNAS has failed to do in order to be virtualizable, then by all means speak up. Basically, the reason we discourage it is that N00b shows up, N00b does an unwittingly dumb configuration like using nonredundant disks for the backing store, N00b's disk dies, N00b's VM becomes unbootable, and then bad things happen. Is this FreeNAS's fault? No. But FreeNAS will be blamed, because it bills itself as a NAS that protects your data.

There is no winning this game, sir. That's why we warn the N00bs away from virtualization until they are sufficiently expert that they know where the boundaries could be, and then they're encouraged to read my summary of actual issues and how to do it safer, better.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Yeah.. Mr pbucher, who told people they should do sync=disabled and that the "risk is mitigated" with a UPS.. Wha!?

http://forums.freenas.org/index.php?threads/nfs-performance-with-vmware-mega-bad.12240/#post-57768

Yeah.. keep talking. You look so incredibly stupid even trying to compare FreeNAS to other devices that are clearly in a race to see who can do the cheapest solution.

And speaking of FUD, @jgreco, do you have that link to that guy who bought a server for like $30k or something from some vendor, dropped ZFS on a RAID6, and when a disk failed and he replaced it, racked up like 10 million errors?

Oh.. pardon me.. I'm sorry.. I'm talking FUD because the almighty pbucher knows everything, and since he didn't know about that, it clearly *must* be FUD.

Reality check, pbucher: you have no clue what you are talking about 9 times out of 10. Spend some time actually reading and learning instead of telling people how to do some of the stupidest stuff you could possibly do with ZFS, and *then* we can have an intelligent conversation. Until then, please disappear and take your logic for doing stupid things elsewhere.
 

pbucher

Contributor
Joined
Oct 15, 2012
Messages
180
Yeah.. Mr pbucher, who told people they should do sync=disabled and that the "risk is mitigated" with a UPS.. Wha!?

Oh, and is that any different from saying "just use iSCSI instead of NFS"? You only changed your tune on iSCSI after I pointed out that, out of the box, iSCSI was the same as NFS with sync=disabled... Your failure is in not understanding that different people need different solutions; not everyone needs maximum data protection, and not everyone can always afford the protection we think they should have. As for the guy with the $30k RAID6 array, I bet he did something he shouldn't have. I've seen plenty of experienced IT folks cook RAID arrays; just because the option is on the screen doesn't mean you should click on it. If someone's pool gets nuked by a RAID array, whatever caused it would have nuked NTFS, ext3, HFS+, UFS, etc. just the same. The bits are either still all there or they are scrambled; it's as simple as that.

Oh don't worry I'll be out of your hair soon enough...
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Honestly, cyberjock, take a chill pill... the reality is fuzzier than you'd like it to be. You're gonna have an aneurysm if you can't relax a bit.

As for the vendor who provided hardware RAID underneath ZFS and the resulting train wreck:

http://zfsguru.com/forum/generalchat/565

But hey, you know, if done properly, hardware RAID is actually a good thing. The problem is that no one will do it "properly". Basically, you take some disks and make a virtual disk device out of them. Then you do that several more times with other physical disks. Then you let ZFS provide the redundancy across those several virtual disks. But this isn't what people WANT to do; people WANT to just throw it "all in a RAID" and not worry about it.
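To make that concrete, a sketch of the three layouts (the pool name tank, the device names mfid0/mfid1 for hardware RAID virtual disks, and da0-da5 for raw disks on an HBA are all just examples):

```python
import subprocess

def zpool_create(*args: str) -> None:
    # Thin wrapper around `zpool create`, purely for illustration.
    subprocess.run(["zpool", "create", *args], check=True)

# What people WANT to do: one big hardware RAID virtual disk with ZFS on top.
# ZFS sees a single device, has no redundancy of its own, and cannot self-heal;
# a botched controller rebuild takes the whole pool with it.
# zpool_create("tank", "/dev/mfid0")

# The "done properly" layering described above: several hardware RAID virtual
# disks, with ZFS still providing redundancy (here a mirror) across them.
# zpool_create("tank", "mirror", "/dev/mfid0", "/dev/mfid1")

# What this forum actually recommends: raw disks on a plain HBA, with ZFS
# handling redundancy end to end.
zpool_create("tank", "raidz2", *[f"/dev/da{i}" for i in range(6)])
```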
 

david kennedy

Explorer
Joined
Dec 19, 2013
Messages
98
Oh, and is that any different from saying "just use iSCSI instead of NFS"? You only changed your tune on iSCSI after I pointed out that, out of the box, iSCSI was the same as NFS with sync=disabled... Your failure is in not understanding that different people need different solutions; not everyone needs maximum data protection, and not everyone can always afford the protection we think they should have. As for the guy with the $30k RAID6 array, I bet he did something he shouldn't have. I've seen plenty of experienced IT folks cook RAID arrays; just because the option is on the screen doesn't mean you should click on it. If someone's pool gets nuked by a RAID array, whatever caused it would have nuked NTFS, ext3, HFS+, UFS, etc. just the same. The bits are either still all there or they are scrambled; it's as simple as that.

Oh don't worry I'll be out of your hair soon enough...



Just some food for thought on ZFS + hardware RAID.

Why did Sun (the creators of ZFS) start building storage servers (Thumper/Thor) with no RAID cards and no ability to add one?
If you review the marketing material, the boxes were pitched as "the first computers designed with ZFS in mind" or some such.
Do any of the MASSIVE storage units Oracle now sells have hardware RAID cards?

--
"Also spare me the decade old RAID hole thing and everything else that's wrong with RAID, yes the hole still exists but most modern systems have patched that hole years ago or mitigated it. "
--
First, does it exist or not? You seem to have contradicted yourself. I take it you mean mitigated by adding cache memory and a battery to the RAID controller?

So buying a costly hardware RAID card and a costly cache/battery unit (they are often sold as add-ons) is better than a cheap SATA card and none of this mess?


Some people like ZFS, some prefer hardware RAID. Considering this forum is for FreeNAS, which is ZFS-based, it's hard to find a "hardware RAID" fan here.
ZFS is all about cheap consumer-grade disks and direct access to them. No RAID controllers.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
My relative noob point of view: Throwing ZFS on top of RAID is like throwing a truck onto another truck. Neither was meant to be used with the other and there really isn't much of a point to doing so, outside of some weird scenarios. Since FreeNAS' point is ZFS, I really don't understand why some insist on hardware RAID. At that point, they might as well go with Ubuntu server, or Windows Server or whatever is popular. I'm sure Windows Server is easier to learn than FreeNAS and, since they're shelling out for expensive RAID controllers, money isn't that tight...
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Ericloewe, that's definitely the safe path. Most people in this forum are "n00bs" and aren't ready to perform without a safety net. Once someone gets to the point where they feel they can work without it, no one here can stop them, and either they succeed, or gravity teaches them the cost of failure.
 

david kennedy

Explorer
Joined
Dec 19, 2013
Messages
98
Wow, that's some real FUD you are spreading there... people's RAID cards eating their pools? You've got to be kidding me....

ZFS sure is growing by leaps and bounds every day, and you know what, the new folks (and many of the old folks too) in town fully support virtualization. In fact, several other vendors have ready-made appliances for one's virtual environment. Virtual SANs are the hot game in town for the small-business market. Just about every SAN vendor is pushing into this space, and many new ones are popping up using ZFS as their building block. Also, apparently you had better tell iXsystems to stop uploading virtual appliances to http://download.freenas.org/9.2.1.5/RELEASE/x64/ too, if it's not supported and considered a bad idea.



Oracle also offers an image of their "storage server" system. Since they have the VirtualBox image available for download, does that imply it is meant to be virtualized as well? (Hint: it is sold as a hardware/software solution, so this is definitely not the case.)

As others have said, just because they offer the VirtualBox image doesn't mean it is suitable to deploy in production.
Before building my FreeNAS box I used a virtualized environment to learn how it works.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Honestly, cyberjock, take a chill pill... the reality is fuzzier than you'd like it to be. You're gonna have an aneurysm if you can't relax a bit.

My problem with the whole sync=disabled thing is that unless you are about to explain to a user, in extreme depth, why they should or shouldn't use it, you shouldn't be willy-nilly throwing it out there the way pbucher did. Not to mention that adding a UPS doesn't actually eliminate the risks of disabling sync: a UPS covers utility power loss, not a kernel panic, hypervisor crash, or hardware failure, any of which can still throw away writes the client was told were committed.

If someone is going to give less than the most conservative answer, they had better be willing to explain it in extreme detail so the reader can fully appreciate the risk being thrown at them.
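To spell out what is actually being toggled, here is a sketch (the dataset name tank/vmware is just an example of one backing an ESXi NFS export):

```python
import subprocess

DATASET = "tank/vmware"   # example dataset backing an ESXi NFS export

def zfs(*args: str) -> None:
    subprocess.run(["zfs", *args], check=True)

# Default is sync=standard: fsync()/O_SYNC writes (which ESXi issues over NFS)
# are not acknowledged until they are on stable storage (the ZIL, or an SLOG).
zfs("get", "sync", DATASET)

# The risky one-liner. Writes are acknowledged immediately and only reach disk
# with the next transaction group, so a kernel panic, hypervisor crash or power
# loss can throw away several seconds of "committed" data. A UPS only protects
# against one of those failure modes.
# zfs("set", "sync=disabled", DATASET)
```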

Oh, and is that any different from saying "just use iSCSI instead of NFS"? You only changed your tune on iSCSI after I pointed out that, out of the box, iSCSI was the same as NFS with sync=disabled... Your failure is in not understanding that different people need different solutions; not everyone needs maximum data protection, and not everyone can always afford the protection we think they should have. As for the guy with the $30k RAID6 array, I bet he did something he shouldn't have. I've seen plenty of experienced IT folks cook RAID arrays; just because the option is on the screen doesn't mean you should click on it. If someone's pool gets nuked by a RAID array, whatever caused it would have nuked NTFS, ext3, HFS+, UFS, etc. just the same. The bits are either still all there or they are scrambled; it's as simple as that.

Oh don't worry I'll be out of your hair soon enough...

I never said "just use iSCSI". I gave reasons why it could be superior. I made *no* mention of sync at all, because that wasn't relevant to my discussion. You'll also notice that I didn't say "iSCSI is faster because it ignores sync". You really do NOT know what you are talking about.

Oh.. of course! Of course! That guy "did something he shouldn't have". Definitely not something you'd do. Yeah.. definitely not!

And there's a major difference when someone cooks a RAID array that held NTFS, ext3, etc. Relatively inexpensive software tools exist that can and do recover data from the standard file systems, so all is not lost even if you do something that kills your RAID array. But for ZFS, even for a pool with only 450GB of data, you are talking $20,000 at minimum. No, I didn't pull that number out of a hat; that's exactly how much it cost someone who needed a 450GB pool recovered. The price difference alone is staggering. Using NTFS and the other "common" file systems presents less risk if things go horribly wrong. If things go horribly wrong with ZFS, the end result is that all your data is gone unless you are ready to shell out the kind of money people spend on a vehicle or a house.

Totally different risk, totally different consequences for "worst case", not to mention most people are familiar with the "common" file systems and are just a step above clueless when it comes to ZFS.
 

sandvika

Cadet
Joined
May 5, 2014
Messages
2
I think I would have been surprised (maybe even disappointed ;)) not to get a nip from cyberjock. Cyberjock, no, I'm not advocating anything. I just described my journey, where I came from, where I ended up with FreeNAS and where I'm heading with my virtual workstation. I'm happy to field questions about it and contribute to the pool of experience, rather than just lurk in the forum siphoning off expertise without contributing anything.

For those for whom ZFS is the prime motivation for using FreeNAS, that's fine. If your motherboard or disk controller blow up, you can rebuild your FreeNAS on a different system, bring over your disks and be back online. That's great.

My motivation is different. I have multiple PCs in my study at home, varying testing requirements that would need more, I can't stand the noise or the heat, and there are loads of other computers in the family that are not backed up, so I wanted server consolidation using ESXi, combined with a NAS solution that would not become a pain I'd regret later.

Although I've known Unix since Version 6, I'm quite happy for others to do the heavy lifting and present a nice web GUI that lets me click to configure and enable. It's the easy CIFS, AFP and NFS with 10Gbps virtual Ethernet and jumbo frames that tick my boxes. Snapshots are an excellent extra for easy backup/restore, but I wasn't banking on them. My choice of hardware was based on ESXi compatibility and cost. RAID for the hypervisor and virtual machines is a must-have, and it didn't particularly make sense for me to do something else for the FreeNAS volume (not taking anything away from the merits of ZFS); however, the point is that it's a valid choice to have hardware *xor* software (ZFS) RAID. I understand the features, risks and benefits of each. Hardware self-healing is fine too; it doesn't have to be ZFS's flavour, and ZFS can't do redundancy on a single RAIDn volume anyway, so there's no compatibility issue. Either way, I get an alert when something fails.

My journey was about the struggle of getting this to work - and, more to the point, the fact that the write performance of the ZFS RAIDZ2 I ended up with out of necessity turned out to be much better than the hardware RAID5 configuration I had originally envisaged, so I'm very happy. Also, having the ability to "rightsize" the virtual hardware will prove beneficial: it seems my pessimism about the impact of software RAID and compression on system performance was misplaced, and judging by the heaviest loads I've managed to throw at FreeNAS so far, 2 cores, rather than the 4 currently allocated, may well be more appropriate.

For the record, the ZFS volume manager did create a 2 x 4-drive RAIDZ2 volume from my 8 disks, when I had expected 1 x 8-drive RAIDZ2. I've now proven that my backup and restore works, by destroying and recreating the volume "manually", and I now have the expected capacity :)

20+ years ago I was a fan of the Andrew File System, and then the DCE Distributed File System, which was fabulous, but I only ever used it in a lab - I never saw it in production. Their brilliance was irrelevant - the tide was flowing in the opposite direction. It's obvious that there's a need for fault-tolerant, redundant, secure, compressed file systems in virtual and especially cloud environments, and that's the way the tide is flowing now. Many of the features of ZFS would appear to meet these requirements. It's an opportunity. I'm not looking for those features now, but when gigabit broadband speeds arrive it will probably be another matter. I'd be pleased if FreeNAS is heading in the same direction as me and proves to be a good companion; otherwise I'll probably be looking for an alternative. So I'm not saying virtualise or die; I'm saying that there's an opportunity for ZFS and FreeNAS if it is adapted to our virtual future, and a probability of it occupying an ever smaller niche if it clings to hardware. As George Bernard Shaw put it: "The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man." I'm not going to apologise for being unreasonable - I just want clever stuff that has merit and relevance to become cleverer and have more merit and relevance in future :D
 

9C1 Newbee

Patron
Joined
Oct 9, 2012
Messages
485
...The difference between a pbucher or jgreco and the average N00b reading the N00bs forum is pretty high...

Quoted for truth! I'm such a noob that I had to look up what FUD is. So I do what I always do when I don't understand what you guys are talking about... I google.

Fud

From Wikipedia, the free encyclopedia
Look up fud in Wiktionary, the free dictionary.
Fud or FUD may refer to:
Now I don't know if you guys are trying to make me scared, piss on me, or feed me without being detected.
 