Looking for a FreeNAS 10 box

Greetings everyone
I'm keeping a close eye on FreeNAS 10 development and I'm loving it :cool:. (I'm planning this build for March, when FreeNAS 10 is finally released.)
I have built three FreeNAS boxes so far; the build I'm most proud of is an ASRock Intel Avoton C2750 in a small Fractal Design Node 304, holding 13 TB of storage and a Samsung SSD for cache.
So I decided to build a box for our food company. I want it mainly for storage, of course, but also for VMs. I want to run my FreePBX VM (that one will run on a Raspberry Pi, so no issues there) and, most importantly, my Odoo VM (I still don't know whether it's going to be an Ubuntu Server VM or a Docker application). The PostgreSQL database is the workhorse behind Odoo, and PostgreSQL needs blazing fast I/O.
My budget is kind of limited; $3,300 is my max. I was looking around and found this HP ProLiant DL120 G9: a 6-core Intel Xeon E5-2609 CPU, 4x hot-plug 3.5" LFF HDD bays, and 3x PCIe slots. Plus it's only 1U, and it's priced at my local retail shop at $1,000.

So I'm thinking maybe I should get that and add 32 GB or 64 GB of ECC RAM depending on my budget, 4x 6 TB WD Red NAS drives, and an Intel 750 SSD.
What do you think?

Also, when I built the first system I mentioned, I was not aware of ZIL, SLOG and all that; I only chose "cache" for the SSD in the GUI. What should I set the Intel 750 up as for maximum I/O performance? Should I get 2x Intel 750s?
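As a quick back-of-envelope on what those 4x 6 TB drives would actually give in usable space, here is a sketch (the pool layout isn't decided in the post; these are just the common options, and the 80% figure is only the usual ZFS rule of thumb):

# Quick capacity check for the proposed 4x 6 TB WD Red drives (a sketch; the
# pool layout isn't specified in the post -- these are just the common options).

DRIVES = 4
SIZE_TB = 6

layouts = {
    "2x 2-way mirrors": (DRIVES // 2) * SIZE_TB,  # half the raw space
    "RAIDZ1":           (DRIVES - 1) * SIZE_TB,   # one drive of parity
    "RAIDZ2":           (DRIVES - 2) * SIZE_TB,   # two drives of parity
}

for name, usable_tb in layouts.items():
    # Rule of thumb: keep ZFS pools below ~80% full for good performance.
    print(f"{name}: ~{usable_tb} TB raw usable, ~{usable_tb * 0.8:.1f} TB at 80% fill")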
 

m0nkey_

MVP
If you're planning on storing critical data, especially in a business environment, you may want to avoid FreeNAS 10 for the time being. You should consider running FreeNAS 9.10.2 in production.

For hardware recommendations, please see this guide: https://forums.freenas.org/index.php?resources/hardware-recommendations-guide.12/

It is possible to run virtual machines under 9.10 using iohyve. See my video guide here: https://forums.freenas.org/index.php?resources/iohyve-set-up-and-basic-usage.9/
 

Maybe I didn't clarify enough, or maybe I got a bit over-excited about FreeNAS 10. I'm sorry, but this is a future build. As I'm looking at it here, the FreeNAS 10 release is only two weeks from now, and I know better than to run beta software in a live environment, trust me (been there, done that). I have a few small VMs running on VirtualBox on my other boxes. I'm just planning ahead.
 

melloa

Wizard
Have you planned the migration of those? There is no VirtualBox in FreeNAS 10.
I have a few ideas. The first two of them are running Windows 2008 and three are running Elastix 4.5 (FreePBX).
I was thinking that maybe, if I extract the VHD files, I can try to boot them from the new VM system in FreeNAS 10.
Or convert the VHD files to VHDX, or whatever... If nothing works out,
I'm going to have to back up all the databases I have there and start with a fresh install. Not a pleasant thought ;/
But you reminded me I need to set up a FreeNAS 10 lab, thanks!
 

melloa

Wizard
But you reminded me I need to set up a FreeNAS 10 lab, thanks!

You got the point :)

I use FreeNAS as a NAS, so I'm not attached to any particular version as long as CIFS, NFS, etc. keep working.
 

Arwen

MVP
If you need performance from VMs (local or remote), then mirrored vdevs can be helpful.

However, just 2 mirrored vdevs (4 disks total) is a bit light on IOPS (I/O Operations Per Second). You might look into a 4-disk 1U disk expansion chassis (with a SAS expander, of course), and then mirror the 4 internal disks to the 4 external disks. This would give you both more IOPS and some redundancy.

Note that any external disk-only chassis should either use SAS disks or a SAS expander. SAS expanders allow the use of SATA disks just fine; the SATA protocol gets tunneled through the SAS link between the expander and the host port.

Using an external disk-only chassis without a SAS expander means the cable(s) between the host chassis and the disk chassis must be as short as possible, like 0.5 meter. That's because SATA uses lower voltages and less comprehensive error detection and correction on the cable. Further, you would need a host port per disk, whereas a SAS expander needs only one SAS host port (though two or more can be helpful).
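For a rough sense of the IOPS difference between 2 and 4 mirrored vdevs, here is a quick sketch (the per-disk figure is an assumption for a 5400 rpm NAS drive, not something stated in the thread):

# Back-of-envelope IOPS for pools built from 2-way mirrors. Assumptions (not
# from the thread): ~75 random IOPS per 5400 rpm NAS drive; random writes scale
# with the number of vdevs, random reads with the number of disks, since ZFS
# can service reads from either side of a mirror.

DISK_IOPS = 75  # assumed per-disk random IOPS for a 5400 rpm drive

def pool_iops(vdevs, disks_per_mirror=2):
    """Return (approx_read_iops, approx_write_iops) for a pool of mirrors."""
    reads = vdevs * disks_per_mirror * DISK_IOPS
    writes = vdevs * DISK_IOPS
    return reads, writes

for vdevs in (2, 4):  # 4 internal disks alone vs. internal disks mirrored to an external shelf
    reads, writes = pool_iops(vdevs)
    print(f"{vdevs} mirrored vdevs: ~{reads} read IOPS, ~{writes} write IOPS")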
 

Stux

MVP
FWIW, the Xeon E5-2609 v3 is the least performant Xeon of that generation, at 1.9 GHz.
 
On the IOPS part:
See, I was thinking I could avoid planning all that and just let the PCIe NVMe SSD, with IOPS of 440K for 4K random reads and 290K for 4K random writes, do all of the heavy lifting; from what I understood, it will outshine any disk combination. I'm just trying to find out the best way to set it up. Or do I need two of them, one for SLOG and one for L2ARC?
 

Arwen

MVP
@Abdullah AL Zayadi,
First off, the SLOG only helps with synchronous writes. It has nothing to do with IOPS, other than reducing latency. The data still needs to be pushed onto the data disks, so if you are doing a great many tiny writes, they still need to be pushed to the data disks, and that takes IOPS away from reads. So a SLOG does not reduce data disk writes; it simply time-shifts the actual synchronous writes (and only sync writes).

Second, until you max out your RAM, an L2ARC is generally not recommended. You see, an L2ARC uses RAM for its directories, so with too little RAM and too large an L2ARC you actually get worse performance.
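To put a rough number on that RAM trade-off, here is a back-of-envelope sketch (both the per-record header size and the average record size are assumptions; the real figures depend on the ZFS version and the workload):

# Rough RAM consumed by L2ARC headers, to show why an oversized L2ARC with
# limited RAM can hurt. HEADER_BYTES and RECORD_SIZE are assumptions; the
# actual header size varies by ZFS version and the record size by workload.

HEADER_BYTES = 70          # assumed ARC header bytes per L2ARC record
RECORD_SIZE = 16 * 1024    # assumed average cached record size (16 KiB, VM-style I/O)

def l2arc_header_ram_gib(l2arc_size_gib):
    """Approximate RAM (GiB) used by headers for an L2ARC of the given size."""
    records = (l2arc_size_gib * 1024**3) / RECORD_SIZE
    return records * HEADER_BYTES / 1024**3

for size_gib in (400, 800, 1200):  # e.g. one or more Intel 750-class SSDs
    ram = l2arc_header_ram_gib(size_gib)
    print(f"{size_gib} GiB of L2ARC -> ~{ram:.1f} GiB of RAM just for headers")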
 

Stux

MVP
I think it's worth understanding, though, that sometimes, when you have a system which can handle 1 TB+ of RAM, maxing out the RAM might not actually be the best advice... when money is an issue :)
 