What to do, hardware/software FreeNAS or other?

Joined
Jul 19, 2016
Messages
72
Ok, so I have been reading a lot to see what solution is best for me. I also read the FreeNAS documentation and now feel I know more about how things work. I have also been experimenting on a test machine while waiting for parts for a new system.

The idea is that I will be running a VMware machine with a dedicated SSD for the VMs, but I want a VM, preferably a NAS VM, to run a RAID on, not a RAID for VMware to run from.

I have managed to get FreeNAS running in a VM, and also managed to set up 2 older disks as physically mounted disks, so they do not show up in VMware as storage at all, but they are shown in FreeNAS. So I feel this is the way to do it. I read somewhere that RDM is bad, but this isn't directly RDM, though it is on another "level" of sorts. So I do not see why this should go down badly?

The thing that worries me most is that ZFS isn't expandable in the way I want, from what I have read: one cannot add a new disk to an existing vdev. That means I will have to invest in 3 more drives to fill up the 8 ports on the LSI HBA card I have ordered. I have looked at Rockstor, since one can add devices to a RAID when needed. That part I like, but I do not like the rest; I don't feel I have the same control over the sharing side on the network. Is there a way to not use ZFS but use something simpler that allows the kind of expansion I want?

Another solution I am considering is to drop the ordered LSI HBA card, use the HighPoint 2720SGL RAID card and run hardware RAID instead, and then use FreeNAS on a single drive. But I'm not sure that would work at all. Maybe I will just drop the whole thing, install Windows 10 in a VM, and run the HighPoint as I have done for years without much problem at all.
 

darkwarrior

Patron
Joined
Mar 29, 2015
Messages
336
Hi there,

You don't seem very sure about what you want to do, so reading up a little more on ZFS might enlighten you.
The best place to start is here:
Cyberjock's Slideshow explaining VDev, zpool, ZIL and L2ARC for noobs!
and then go here:
cyberjock's hardware recommendations

That being said:
what that guy is doing in that video is definitely mapping drives as RDM... a huge no-no.
ZFS needs direct and permanent physical access to the disks.
In the end, you can create RDMs if you don't really care about your data...

If you are considering going down the virtualization route, you'll want to read this thoroughly too, to understand the risks involved in virtualizing FreeNAS and ZFS in general.
Do not run FreeNAS in production as a virtual machine

Have fun experimenting in your lab ;-)
 
Joined
Jul 19, 2016
Messages
72
I have read all of this, and from the look of it some are against it, others for it, and it works when done properly.

I bet that for those where it ends badly, they weren't using enough memory. VMs themselves use a lot of memory. Even my test lab has 16GB, and with 2 x 320GB disks on a FreeNAS VM and two more VMs running, it uses around 13-15GB of RAM, and it only has an i5.
The new server will have 48GB of ECC RDIMM memory and two Xeon CPUs, each with 6 cores and 12 threads.

The other thing with a VM is that when created it is set to use 1 core, 1 thread and only 2GB of RAM. It will use more than 2GB when running, but if other VMs use a lot, they will eat into the NAS VM's share and crash it, I guess. So I guess it's better to set it so that it never has less than 12-16GB, and to give it more CPU power as well.

What I'm wondering is: those who do run this successfully, are they doing it with RDM or something else? For testing purposes I don't have a dedicated HBA card, just the motherboard, but I have ordered the LSI HBA card, so I'm hoping I can set this up as a straight passthrough. Information on how this is done would be very much appreciated.

Losing data is bad but not critical. The critical data I have does not take much space and is therefore stored in things like OneDrive and Dropbox. This machine will hold all the Blu-rays I have ripped, so losing them would mean taking the discs down from the attic and ripping them all over again.

Sent from my E6853 via Tapatalk
 

darkwarrior

Patron
Joined
Mar 29, 2015
Messages
336
Good to hear that you have been reading all that already :smile:

When virtualizing FreeNAS it is very important that you understand how hypervisors work, so you can gauge the additional layer of complexity it brings to your setup.
The key is NOT to use RDMs but to do things properly and use a "passed-through" HBA in IT mode, to give ZFS direct access to the drives.
When using RDMs you are adding an additional abstraction layer that might not always behave the way ZFS expects. Additionally, you will lose the ability to perform SMART monitoring on your drives.
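A quick way to check that the passthrough is actually working is to query the drives directly from the FreeNAS shell (a sketch; device names like da0 are just placeholders for whatever FreeBSD assigns on your system):

    # list the disks the OS sees directly on the HBA
    camcontrol devlist
    # with a real passthrough, SMART data comes back from the physical drive;
    # with RDM this typically fails or reports a virtual device instead
    smartctl -a /dev/da0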

I have been using a FreeNAS VM for something like 2.5 years now and everything has been working nicely without major problems, because I followed best practices.
You will need to do the research on proper resource allocation, but you seem to be on the right track with around 16GB of RAM for the FreeNAS VM.

BTW: you should completely forget about using that HighPoint controller with hardware RAID in conjunction with ZFS.
When done right, ZFS manages the RAID for you and will take good care of your data ;-)
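For reference, once the HBA is passed through, letting ZFS own the redundancy is a one-liner from the shell (a minimal sketch with placeholder pool and device names; on FreeNAS you would normally create the pool from the web GUI instead):

    # one RAIDZ1 vdev across four whole disks, no hardware RAID involved
    zpool create tank raidz1 da0 da1 da2 da3
    zpool status tank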
 
Joined
Jul 19, 2016
Messages
72
Yes. Today I run W7 with the HighPoint 2720SGL in RAID5, and this has worked for 5 years now; I have never lost any data. But I have now ordered an LSI 9207-8i that I will run in IT mode, and then sell the HighPoint.

But what really concerns me is that I cannot add disks to a vdev when I need more space. I would now have to get 3 more disks, for a total of 8, to make FreeNAS work for me. Also, I do not see the need for Z2/RAID6; Z1/RAID5 is more than enough for my use as long as I get feedback once a drive starts to fail, so I can replace it. RAID5 has worked for me for 5 years so far, so I don't see the point of being more paranoid than that.
So for me the choice now is between FreeNAS and Rockstor, and the only benefit I see in the latter is that I can add drives along the way.
 

darkwarrior

Patron
Joined
Mar 29, 2015
Messages
336
I ran a similar setup in the past as well, but after experiencing some bit rot on the "basic" hardware RAID I looked for something more resilient.

Indeed you cannot add a drive to a vdev, but you can replace the existing disks with bigger ones, thus growing the available space.
You might as well just add another vdev to your zpool config and you're done.

I chose mirrors for various reasons, but also for the ability to simply add a pair of disks as a new vdev to the pool configuration.
I currently have 3 vdevs (2x2TB + 2x3TB + 2x3TB) and will probably replace the 2TB disks to grow my available storage space.
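Both growth paths are single commands under the hood (a sketch with placeholder pool and device names; the FreeNAS GUI wraps the same operations):

    # path 1: swap each disk for a bigger one, waiting for the
    # resilver to complete between swaps
    zpool set autoexpand=on tank
    zpool replace tank da0 da8   # repeat for every disk in the vdev
    # path 2: add a whole new vdev alongside the existing one
    zpool add tank raidz1 da4 da5 da6 da7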
 
Joined
Jul 19, 2016
Messages
72
And this is the problem. I now have 5 x 3TB disks. I cannot grow this to 8 x 3TB over time; I would have to have 8 x X TB disks now. I could add 3 smaller disks now and swap them for something else later, but that would also mean the total vdev/zpool stays small until all 3 are changed to 3TB.
One idea is to get 3 x 4TB disks now to reach 8 disks in total, but then I will not get the advantage of the extra 3TB they give over plain 3TB disks until I have changed all 5 x 3TB disks for 4TB ones. And to me that is just silly.

Also, adding more mirrored vdevs to a zpool is being paranoid, because then I would have to mirror each one to have any redundancy. And that, in my mind, is both wasteful and way too paranoid.
But it is still an open idea. I could use the 5 x 3TB disks now in a RAIDZ1, like I have for the last 5 years, and then add 3 more disks later as another RAIDZ1 when I need more space. That would be almost the same as getting 8 disks now and running a RAIDZ2.

For me the ideal setup would be 8 disks in total where 1 disk is the redundancy, to be replaced when it fails. Yes, you could have 2 disks in a mirror, but I'm not that paranoid. Like I said, I have run RAID5 for 5 years now without any problems.
 

darkwarrior

Patron
Joined
Mar 29, 2015
Messages
336
As another suggestion:
You could create a vdev of 4 x 3TB disks now, and then later add another vdev of the current 5th disk plus the 3 disks you are planning to purchase.
This might avoid the hassle you think you will have organizing your vdevs.

But in the end it all comes down to risk assessment and personal preference.
I don't run mirrors because I'm paranoid, but because I'm running iSCSI block storage.
For me there's no fscking way I will ever go back to using RAID5 (or RAIDZ1 in the ZFS world, to use proper terminology; BTW, you know it's dead, right?!) or any type of hardware RAID ;-)
 
Joined
Jul 19, 2016
Messages
72
Maybe you could enlighten me a little on the iSCSI thing? A link to what it's all about?

That link about the death of RAID5 was interesting, and so was the 2013 recap. I feel lucky, then, to have my RAID5 still working after all these years. And if I were to expand, I should go for RAID6/Z2.
So many ways to go here. The problem is always that no matter how many vdevs you have, the zpool will fail if one of the vdevs fails. But if one runs 2 disks in a mirror on one vdev and does the same on the other, a rebuild would be better off, as there is less data to rebuild, right? A rebuild is done on the vdev, not on the entire zpool, right?
 

darkwarrior

Patron
Joined
Mar 29, 2015
Messages
336
At home I'm running an AIO (all-in-one) box: a server with a few VMs, including the FreeNAS VM, and I'm storing zvols (for iSCSI block storage) on the zpool that are presented to the other VMs.
This way I'm sure all the important data (the data I care about) is protected by ZFS on the disks.
In the future I plan to grow my homelab and will also create some datastores that will "live" on my zpool, so iSCSI with mirrors is an absolute must for maximum IOPS.
Why iSCSI requires more resources for the same results
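The zvol side of that looks roughly like this (a sketch; the name and size are made up, and FreeNAS exposes the same thing through its iSCSI sharing screens):

    # carve a 200G block device out of the pool ...
    zfs create -V 200G tank/vm-datastore
    # ... then export it to ESXi as an iSCSI extent in the GUI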

Concerning the rebuild:
A rebuild is done at the vdev level, so a resilver in a mirror creates significantly less stress and wear on the other disk during a rebuild (it's comparable to a drive copy).
During a resilver in a RAIDZ* vdev there are many more operations going on across ALL the drives, which is why the chance of another drive failing at the same time is significantly higher.
There are actually many real-life stories on these forums about issues and failures during resilvering operations...
Even with hardware RAID these kinds of failures were, and still are, happening. At my old job I had a colleague who ran into this problem twice in just 3 months (that was last year)...
This is why moving away from RAIDZ1 is better, IMHO. :smile:
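For what it's worth, you can watch a resilver while it runs; the output below is only illustrative:

    zpool status tank
    #  scan: resilver in progress since ...
    #        1.21T scanned out of 3.08T at 412M/s, 2h41m to go
    #        408G resilvered, 39.3% done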
 
Joined
Jul 19, 2016
Messages
72
Sounds logical, but then what about running 2 vdevs with 4 drives each, like mentioned earlier, with RAIDZ1? Is that an OK solution? Then I could get, say, 3 new 4TB disks for a vdev2 of 3 x 4TB + 1 x 3TB (I know I lose some TB), and then later change that 1 x 3TB to a 4TB, gaining another 3TB when needed and giving me 12TB on that vdev. Then I would have 9TB on the other vdev with the 3TB disks.
That sounds better to me than running 8 disks with RAIDZ2/RAID6.
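For reference, the raw usable-space arithmetic behind that plan (RAIDZ1 loses one disk to parity, and every disk in a vdev counts only as much as the smallest one):

    vdev1: 4 x 3TB RAIDZ1                  -> 3 x 3TB =  9TB usable
    vdev2: 3 x 4TB + 1 x 3TB RAIDZ1        -> 3 x 3TB =  9TB usable
    vdev2 after swapping the 3TB for a 4TB -> 3 x 4TB = 12TB usable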
 

darkwarrior

Patron
Joined
Mar 29, 2015
Messages
336
Creating two 4-disk RAIDZ1 vdevs of 3TB drives might be a solution, but with 4TB drives the data density is getting quite high.
It's just a matter of making the proper risk/value assessment and making sure you have backups.
Will you always be able to perform a disk replacement before another drive fails (when you are on vacation, for example)?
You are the one who has to make that decision for your particular use case. ;-)
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
In this thread I see some confusion, or maybe more like concern, about how to grow a pool. With ZFS that is something to understand up front and design your system for; there is no easy way around it.

Also, the OP wants to run FreeNAS in a VM. Well, if you are disciplined, you can do it. I have a fairly long thread discussing my adventures with FreeNAS and ESXi, among other things, which will give you some good insight into the good and the bad of ESXi with a FreeNAS VM. There are also two great threads HERE on virtualization by @jgreco that are worth reading. While I didn't find running FreeNAS on ESXi difficult, the one thing that gave me real trouble was making sure the system shuts down properly when the UPS loses power; ESXi doesn't natively support USB-connected UPS units.

And might I offer up a solution to your vdev/pool capacity questions...
1) Figure out what capacity you will need for the next 3 years and double it. That is what you need to buy for, or you will likely run out of storage (I've seen it many times before).
2) If you cannot purchase all the drives up front, buy what you can and build your single vdev/pool. Once you can purchase more drives, copy all your data off the current pool to other drives (yeah, not ideal), then destroy the pool, add the new drives, create the new pool, and move your data back. Sometimes that is the only option you are left with. When I start having drive failures, I will copy all my data to other media and rebuild my pool with two fewer, higher-capacity drives, then copy my data back. For me the bulk of my data is historical backups; the data I actually need to retain would fit on a single 2TB drive without issue.
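If you do end up going with option 2, ZFS replication makes the copy-off/copy-back steps less painful than a plain file copy, since it preserves datasets, snapshots, and properties (a sketch with placeholder pool names):

    # snapshot everything and stream it to a temporary pool
    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate | zfs recv -F backup/tank
    # destroy and re-create tank with the new layout, then reverse:
    zfs send -R backup/tank@migrate | zfs recv -F tank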
 
Joined
Jul 19, 2016
Messages
72
I'm thinking of maybe running vdevs of 3 disks each, with 1 disk as redundancy, RAIDZ1/RAID5, and then adding more along the way. But can I later add a second LSI HBA controller so I have a total of 16 ports? Because 3 x 3 disks is 9, and the controller only supports 8.

Sent from my E6853 via Tapatalk
 
Joined
Jul 19, 2016
Messages
72
I might have found a second solution. In my area there is a Supermicro CSE 846TQ for sale, with hardware. But is it any good? Then I could run only FreeNAS on it and have VMware on another server.

Chassis: Supermicro CSE 846TQ
Backplane: SAS846TQ rev. 3.1
Motherboard: H8DME-2 2.01A
Processor: 1X Quad-Core AMD Opteron 1.8GHz 2346 HE
RAM: 16GB 8x2GB DDR2 ECC Registered
Power Supplies: 2 x 900 W
HD Controller: 3x Supermicro SAT2-MV8
IPMI: SIM1U+ with AOC-USB2RJ45

Not sure if the HD controllers are supported by FreeNAS, but they are rated at 300MB/s per channel, it says. I am happy if I get transfer rates of 100-150MB/s when moving files over the network.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
There's a distinct disadvantage to the TQ-style chassis. As noted in this thread, you'll end up with a rat's nest of cables:

"The TQ option brings each individual bay out to an individual SAS connector. This is straightforward and nonthreatening to those who are unfamiliar with multilane. However, it is a bad idea to have twenty four individual cables to have to dig through if you suspect a bad cable, etc."
 
Joined
Jul 19, 2016
Messages
72
If that is the main problem, then there is no problem in my mind. What I need to know is whether this solution is good with FreeNAS, and whether the HD controllers are supported and work.

Sent from my E6853 via Tapatalk
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Duly noted.

Regarding the HD controller... it might work. But it's based on a Marvell host controller, and Marvell has a reputation for not working very well with FreeBSD/FreeNAS. Also, it only supports 3.0Gbps transfer rates, and it's old enough that it may not support drive sizes over 2TB. I wouldn't recommend it...

You'd be better off using an LSI 9211 / IBM M1015 / Dell PERC H200 HBA based on the LSI 2008 chip. You'd need three of them to run 24 drives in the TQ chassis, versus only one in a chassis with a multi-lane backplane. That is another disadvantage of the TQ-style chassis.
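Whichever of those you pick up, it is worth confirming the card is actually running the IT (initiator-target) firmware before trusting it with a pool; the M1015 and H200 usually need to be crossflashed first. The sas2flash tool ships with FreeNAS:

    # lists all LSI SAS2 controllers with firmware type and version
    sas2flash -listall
    # detailed info for the first controller
    sas2flash -c 0 -list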
 

Linkman

Patron
Joined
Feb 19, 2015
Messages
219
Also, if it's a concern where you live, the DDR2 memory in that box is going to be an electricity hog (probably the CPU too).
 
Joined
Jul 19, 2016
Messages
72
This is more what I am worried about. I liked the chassis because it has that many hot-swap slots; for the chassis I have now I will need to buy some backplanes. I have found these; 4 of them would fill the chassis I have now and give me 12 slots, more than enough to grow into. But then I am also back to the original plan: VMware with a FreeNAS VM. The 2TB drive-size limit is no good, because if I were to buy new disks I would get 4TB or even 6TB disks now.

Power is not such a big concern; electricity here in Norway is generally very cheap.

But back to the original idea, then. I have ordered an LSI SAS 9207-8i card, but can it be mixed with a Dell H310 card, so that I could have a total of 16 ports ready? Or should I find another LSI SAS 9207-8i to match?

Everything will be in the same zpool, but I think the best option for me is to run 3 disks per vdev, where 1 disk is the redundancy.
 