
SOLVED FreeNAS Virtual Machine (VM) (WARNING: NOT SAFE) and Software Defined Storage (SDS)

Status
Not open for further replies.
FreeNAS Fans,

Hey guys, I built and deployed a FreeNAS Virtual Machine (VM) with multiple virtual hard drives spread across physical hard drives.
It has been quite a fascinating journey. I shot a detailed video of my build and am looking for your views:

What I am hoping for is an alternative FreeNAS build on a virtual platform, unlike the typical hardware-based build, so that it gives true flexibility and scalability while also mimicking a large storage backup solution. With this build we can easily create a ZFS RAID-Z3 pool with a minimal number of drives (or even just one physical hard drive, although that is purely for testing/experimenting purposes), or with 1:1 drive mapping too.
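For what it's worth, this kind of "many disks from one" experiment does not need a hypervisor at all: ZFS accepts plain files as vdevs, which is the usual way to practise pool layouts. A minimal sketch, assuming a Unix-like host with ZFS installed; the paths and pool name are made up, and this is strictly for throwaway testing:

```shell
# Sketch: a throwaway RAID-Z3 pool built from sparse files instead of
# hypervisor virtual disks. Testing/learning ONLY -- this has every risk a
# VM-disk build has, plus the host filesystem as one more failure point.
mkdir -p /tmp/zfstest
for i in 0 1 2 3 4; do
    truncate -s 1G "/tmp/zfstest/vdev$i.img"   # sparse 1 GiB backing files
done

# Pool creation needs root and an installed ZFS, so guard that part.
if command -v zpool >/dev/null 2>&1 && [ "$(id -u)" -eq 0 ]; then
    zpool create testpool raidz3 /tmp/zfstest/vdev*.img
    zpool status testpool
    zpool destroy testpool                     # discard the throwaway pool
fi
```

RAID-Z3 needs parity-plus-one devices at minimum, hence the five backing files here.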

There are a few considerations, though. FreeNAS (and the community) never recommend a VM-based HDD build, since it fails to alert us to individual drive SMART status and so on, unlike real drives. Having said that, we can still work around this issue with careful deployment.

cheers !
 

garm

FreeNAS Expert
No, you cannot; this is a horrible idea and you will lose data doing this.
 
Oh, I see. Can you please elaborate, garm? I am still new to FreeNAS and appreciate your views, because I am seeing possibilities such as SDS (Software Defined Storage), etc.
 

KrisBee

FreeNAS Expert
And he is using an i7 CPU with just 7GB of what must be non-ECC RAM. Good for a demo of FreeNAS, but not much else.
 

adrianwi

FreeNAS Guru
I don't understand why you would even want to do this. Having virtual drive redundancy is pointless if the drives are all running on a single physical device.
 

KrisBee

FreeNAS Expert
"Having virtual drive redundancy is pointless if they are all running on a single physical device?"
Virtually a pointless exercise?
 
To give some perspective: I am a systems and Linux kernel (mostly networking) tech architect, although I am not much into storage.
I am eager to get views from people who have worked with and touched the ZFS stack, and to hear some core technical opinions on this.

I mentioned in the video already that it is not reliable or safe, but I see some possibility of improving it.
I still love the fluidity of such a deployment over traditional bare metal.

BTW, thanks a lot, garm, for those links. I am still awaiting some ZFS-stack-level insights.
-----------
I know most users of these discussion forums are typical (maybe experienced, maybe novice) system admins.

But I need to go a level deeper than that: the code level.
 

garm

FreeNAS Expert
Okay, if you don’t want to listen to us or read the docs, then let’s keep it simple.
These are just a few good reasons to kill this thing with fire:

If one physical disk contains more virtual drives than your parity can absorb and that disk fails, you lose the whole pool. Also, resilvering in a replacement for a drive will be a nightmare.

The performance of the virtual drives will be dismal, as several virtual disks share the IO of a single disk.

The underlying filesystem can corrupt data irrecoverably, especially since some of the hardware measures deployed by OpenZFS cannot be enforced.

Data committed to disk may still be in flight when the next transaction group is being written, and can thus corrupt your pool even in the best of circumstances.

If you want to build and deploy something like this, build it on something other than ZFS and use ZFS as the storage backend for that system...

If you are selling these, do a recall... for everyone’s sake.
 

adrianwi

FreeNAS Guru
At best, lose the term Production from the title. Don't mislead people that this would be a good way to deploy FreeNAS in anything other than a test environment, as it's not.
 

Chris Moore

Super Moderator
Moderator
This is a bad idea, based on a lack of understanding. You need to study the history of ZFS. ZFS is meant to be the underlying file system that touches bare metal. It can't work properly if it is not in control of the hardware. If you want virtualization, it should be done on another layer over ZFS.
In addition to some of the resources already pointed out to you, please look at these:

Slideshow explaining VDev, zpool, ZIL and L2ARC
https://forums.freenas.org/index.php?threads/slideshow-explaining-vdev-zpool-zil-and-l2arc-for-noobs.7775/

Terminology and Abbreviations Primer
https://forums.freenas.org/index.php?threads/terminology-and-abbreviations-primer.28174/
 

kdragon75

FreeNAS Expert
Since this fails to alert us individual drive SMART status and so on unlike real ones.
It's more than that. FreeNAS needs to see the actual SCSI sense codes, etc., to know when a drive is having trouble reading, writing, and so on. Using VMware VMFS storage adds layers of abstraction that hide all of this information, not to mention the potential impact of any extra layers of caching, which may cause thrashing on the datastore.
Let's not even talk about using BETA software as the underpinning for an entire environment...
 

Thanks a lot, guys, for the info and the links. BTW, I did some trials with GlusterFS. It was fantastic, though at times there are excessive CPU consumption/overload issues; I found that is quite a common issue discussed in various developer/user forums. I am thinking of experimenting with Ceph sometime when I get some free time.

kdragon75 said: "ZFS IS software defined storage. You are taking a bunch of hardware, pooling it, and using software to define volumes and datasets."
>> KD, thanks for the clarification. I see; I never thought of it that way. As I said, I am not a storage expert; networking is my domain. I will go through the FreeBSD->ZFS source. I did explore the FreeBSD->networking subsystem, but I have yet to explore the overall FreeBSD->FS->ZFS part.
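That "software defined" point can be made concrete: with ZFS, the pool, the POSIX filesystem datasets, and even block devices (zvols) are all defined purely by commands, with no partitioning or reformatting. A hedged sketch assuming root and an installed ZFS; every name is illustrative and the sparse backing file is for testing only:

```shell
# Sketch: pool, filesystem dataset, and block device, all defined in software.
# Backing store is one sparse file (testing only); all names are made up.
truncate -s 2G /tmp/sds-backing.img

# The zpool/zfs part needs root and an installed ZFS, so guard it.
if command -v zpool >/dev/null 2>&1 && [ "$(id -u)" -eq 0 ]; then
    zpool create sdspool /tmp/sds-backing.img
    zfs create sdspool/media                   # a POSIX filesystem dataset
    zfs set compression=lz4 sdspool/media      # per-dataset policy, set in software
    zfs create -V 512M sdspool/vol0            # a zvol: software-defined block device
    zfs list -r sdspool
    zpool destroy sdspool                      # discard the test pool
fi
```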
 

HoneyBadger

Mushroom! Mushroom!
Your opening slide quote says

"Know the rules well, so that you can break them effectively."

Please review the portion before the comma, as you currently appear to know absolutely none of the "rules" of ZFS and are breaking them dangerously.

I second @garm in that I strongly hope you are not attempting to sell this or use it for anything even remotely resembling "production".
 
"Please review the portion before the comma, as you currently appear to know absolutely none of the "rules" of ZFS and are breaking them dangerously. [...]"
I am not sure why some are thinking that I am making this to sell. I am a researcher, but my domain is core networking, datacom, and systems; I am not a storage expert, so I am looking for inputs. I am glad I got valuable insights.

I don't mind if my system breaks or becomes inconsistent. I am open enough to admit my foolishness and learn the stuff.

In the video title I mentioned production, by which I meant a real use-case in my local deployment. And I mentioned in the video that even if this thing fails it does not matter, since it is just another NAS snapshot/replica; I will never make this my primary NAS. I also mentioned in my video that in the future I may build a complete hardware (bare-metal) FreeNAS and make that my primary NAS, and turn my old (existing) Netgear NAS into a snapshot backup along with this VM one.

See, there are a few things here, and the reason I shot the video, which many are not getting:
1. With a setup like this, you get the opportunity to play around with as many disks as you like. That is not always possible with a real setup, unless you are an admin working in a large DC or some firm.

2. It gives the opportunity to recreate inconsistent/failed/corrupt disks, resilver, and attempt to get back to a healthy state. Once again, we have the opportunity to run multiple scenarios, for example upgrading the entire ZFS pool capacity. That is simply not so easy to do with real physical hardware.
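The failure-and-resilver drills in point 2 are exactly what file-backed vdevs are good for, with no hypervisor involved. A hedged sketch (the pool and file names are made up; the zpool part needs root and an installed ZFS):

```shell
# Sketch: rehearse a disk failure and resilver using file-backed vdevs.
# For practice only; pool and file names are illustrative.
mkdir -p /tmp/resilver
truncate -s 1G /tmp/resilver/old0.img /tmp/resilver/old1.img /tmp/resilver/new.img

# The zpool part needs root and an installed ZFS, so guard it.
if command -v zpool >/dev/null 2>&1 && [ "$(id -u)" -eq 0 ]; then
    zpool create drillpool mirror /tmp/resilver/old0.img /tmp/resilver/old1.img
    zpool offline drillpool /tmp/resilver/old1.img     # simulate a dead disk
    zpool replace drillpool /tmp/resilver/old1.img /tmp/resilver/new.img
    zpool status drillpool                             # shows the resilver state
    zpool destroy drillpool                            # throw the drill away
fi
```

Replacing every vdev with a larger backing file in turn (with `autoexpand=on` set on the pool) rehearses the whole-pool capacity upgrade the same way.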
 

joeinaz

FreeNAS Experienced
"Hey guys, I built and deployed a FreeNAS Virtual Machine (VM) with multiple virtual harddrives within physical harddrives spread across. [...]"
I think the best way to position what you have created is to call it a FreeNAS multidisk SIMULATOR. For example, with thin provisioning, you could SIMULATE a large deployment with a relatively small amount of disk resources. This might be useful if you wanted to see what a deployment of the latest beta version of FreeNAS might look like on your current hardware. Since the simulation is just a model of the working environment, things critical to good FreeNAS deployments, like ECC memory and IT-mode HBAs, are not necessary. It would be fun to see what size system I could model using VMware and a single 480GB SSD...
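The thin-provisioning idea rests on sparse files: a file can advertise a far larger apparent size than the blocks it actually consumes, which is what lets a small SSD "hold" a simulated multi-terabyte deployment. A tiny illustration (the path is made up):

```shell
# Sketch: thin provisioning at the file level. A sparse file advertises a
# huge apparent size while consuming almost no real disk blocks.
truncate -s 1T /tmp/simdisk.img
ls -lh /tmp/simdisk.img   # apparent size: ~1T
du -h /tmp/simdisk.img    # blocks actually allocated: close to zero
```

The gap between `ls -l` (apparent size) and `du` (allocated blocks) is the thin-provisioned headroom; writes to the simulated disks gradually consume real space.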
 
"I think the best way to position what you have created is to call it a FreeNAS multidisk SIMULATOR. [...]"
Exactly.
Although I have not explicitly mentioned it, I am kind of thinking of a primary NAS built on a Dell PowerEdge T30 mini-tower server.
Here are the specs: https://www.dell.com/en-us/work/shop/povw/poweredge-t30
This may in future become my all-physical (bare-metal) Intel Xeon / ECC RAM build.

Unfortunately, computer hardware costs in India and a few other countries are quite high compared to the US, due to the roughly 30% extra government tax on such items. So one has to think twice before sticking one's fingers in.

I thought of doing a completely custom hardware build, but found that this server comes with everything packed in, and it is not that expensive for a proper FreeNAS server build.
The price of this server is around $600 (in India it is around Rs. 42,000, roughly $650 USD equivalent), which is kind of affordable, except that we may have to upgrade the RAM; it comes with a single factory-installed 8GB ECC RAM stick.

When you compare the overall costs with a budget Netgear RN300-series RN314 (Rs. 48,000 in India), the Dell is the better investment.
https://www.amazon.in/Netgear-ReadyNAS-Network-Attached-Storage/dp/B00BO0MG02/ref=sr_1_1?ie=UTF8&qid=1536998522&sr=8-1&keywords=RN314

And I am not sure who is importing this FreeNAS hardware and selling it on Amazon India. The prices are unbelievable, complete b***, as you can see.
https://www.amazon.in/FreeNAS-Mini-XL-24TB-Attached/dp/B01CN1R1V4/ref=sr_1_14?ie=UTF8&qid=1536998695&sr=8-14&keywords=freenas
https://www.amazon.in/FreeNAS-Mini-XL-48TB-Attached/dp/B00QMPM5BO/ref=sr_1_15?ie=UTF8&qid=1536998695&sr=8-15&keywords=freenas
https://www.amazon.in/FreeNAS-Mini-Network-Attached-Storage/dp/B00NT4P3I8/ref=sr_1_16?ie=UTF8&qid=1536998695&sr=8-16&keywords=freenas

Unbelievable!

-----------
In the meantime I thought it would be interesting to build a complete VM setup. I can do that on a single drive, simulate some key features, and gain knowledge. Since I have not yet explored the FreeNAS :: FreeBSD->ZFS code-base, I need to do a few top-level experiments first. Later I can custom-compile and introduce debug points to trace the inner workings.
 

anmnz

FreeNAS Guru
I am not sure why some are thinking that I am making this to sell. I am a researcher.
In the meantime I thought it will be interesting to build a complete VM setup. I can do that on a single drive and simulate some key features and gain knowledge.
Experimentation and learning is fine, but why did you write "Production Deployment" in the title of your post? I think that's why you've got so many negative reactions. What you're doing might be fine for research and learning but it would be a total disaster as an actual production deployment.
 
"Experimentation and learning is fine, but why did you write "Production Deployment" in the title of your post? [...]"
100% I agree now. Thank goodness I haven't embedded the word inside the video I posted, so it is just a matter of changing the title; I should do it right away.
But this post title, I think, is not possible to edit/modify. Which is fine :)

Thanks a lot to everyone who volunteered and gave inputs. Once I get my Dell server later on, I certainly will post an update. :p:D:eek:
 