FreeNAS Suggestions

Status
Not open for further replies.

Rob2

Cadet
Joined
Oct 30, 2015
Messages
3
Hello,

I am planning to migrate a small, old NT/2003 domain (5 physical servers, P4-generation Xeons: terminal, mail, web, firewall and backup server) onto XEN virtualisation. The user load on this domain is very low at the moment (2-3 users). However, the structure should stay the same and, unfortunately, the cost should be as low as possible.
We have four "new" Xeon (Core2Quad generation) machines with 8GB RAM each and a fresh XEN setup. Performance of these machines is OK.
Data connections will use the existing gigabit switch infrastructure.

The small pool should be served by a FreeNAS box.
I assume 5-6 VMs on a single FreeNAS box is not optimal, but these servers have really low disk usage, so I am relatively sure that once the VMs are up and running there will only be small disk accesses here and there.

The data amounts are around 100GB for the data files (CIFS share) and maybe 5x 100GB for the XEN server virtual disks, plus maybe 300GB for a file-based backup of the data files. The VMs should be backed up by FreeNAS snapshots or by XEN snapshots at regular intervals.

The FreeNAS snapshots should be replicated to a second storage server as a backup.
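Roughly what I have in mind, as a minimal command-line sketch (pool, dataset and host names are placeholders; on a real FreeNAS box this would be set up via periodic snapshot and replication tasks):

```
# take a recursive snapshot of the VM dataset (names are placeholders)
zfs snapshot -r tank/vms@nightly-1

# full initial send to the backup box over SSH
zfs send -R tank/vms@nightly-1 | ssh backupbox zfs receive -F backup/vms

# later runs only send the delta between two snapshots
zfs send -R -i @nightly-1 tank/vms@nightly-2 | ssh backupbox zfs receive -F backup/vms
```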

The planned hardware for the main FreeNAS server is:
-X10SL7-F
-Xeon E3-1220 v3
-16GB ECC
-6 x 1TB WD RED
-optional: 2 mirrored SSDs for the ZIL (Edit: meant SLOG) if needed performance-wise
-optional: 1 SSD for L2ARC if needed performance wise

I have a FreeNAS test server here already (however, the test server is only an i3 with 4GB of non-ECC RAM). There is no critical data on the test server.

But even after reading a lot here, in the manual and in the howto presentation, I am still not sure about some points...

1) 6x 1TB... What is best? 6 drives in RAIDZ2, 6 drives in RAID10, or just mirroring each pair of disks and putting the pairs in the main pool? (See the command-line sketch after question 6.)

2) iSCSI or NFS share for the XenServers? (I tend towards iSCSI.)

3) iSCSI backed by zvol(s) or by file extents? (I think zvols have better performance?! Also in the sketch below.)

4) One big iSCSI share for all VMs, or one iSCSI share per VM? This is hard to decide. The first option leaves less unused space; however, I can only zfs-snapshot all VMs at the same time. The second option has worse storage utilization; however, each VM can be zfs-snapshotted on its own schedule.

5) Storage-efficient provisioning... can I use thin provisioning anywhere without hurting performance too much? (Also in the sketch below.)

6) Can I use non-ECC hardware for the second-level backup server? Maybe Nas4Free with UFS. However, I think replication is not really easy in that constellation.
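To make questions 1, 3 and 5 concrete, this is how I understand the options on the command line (pool, disk and zvol names are just placeholders; on FreeNAS itself this is done through the GUI):

```
# Question 1: the candidate layouts (da0..da5 as placeholder disk names)
zpool create tank raidz2 da0 da1 da2 da3 da4 da5                # 6-disk RAIDZ2
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5  # striped mirrors
# (mirroring each pair and putting the pairs in one pool is the same
#  thing as the striped-mirror "RAID10" layout above)

# Question 3: a zvol as iSCSI backing store instead of a file extent
zfs create -V 100G tank/vm-dc1

# Question 5: thin provisioning; -s makes the zvol sparse, so blocks
# are only allocated when they are actually written
zfs create -s -V 100G tank/vm-dc1
```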


I did some performance tests with the FreeNAS test box, which has 2x 400GB striped. When using Win7 on bare hardware as the iSCSI initiator against a zvol- or file-based iSCSI target, I get around 80MB/s, which is limited by the gigabit network. However, from a Win2003 VM I only get around 30MB/s, tested with HD Tune and PV drivers. So I think the speed limiter will be XEN on the "old" Xeon hardware anyway, not FreeNAS.

It would be nice if someone could give some hints on these questions, or some feedback in case I have overlooked a big no-go. :) Thanks in advance.

BR
Rob
 
Last edited:

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
Some thoughts:

1) What is best depends on what your needs are. Best performance will be with striped mirrors (RAID10 in ZFS is just mirrored pairs in one pool). Best storage space might be RAIDZ1 or RAIDZ2, depending on your uptime/redundancy needs.

2) NFS is, in general, better in my opinion. However, iSCSI has a place depending on the load. In both cases, you'll want sync writes enabled, which means you need a ZIL (that's not optional).

3) and 4) This is part of why I lean towards NFS: you directly get some of the ZFS benefits that get obfuscated away when using iSCSI.

5) It depends on what "too much" means.

6) Sure, there's nothing really wrong with that. However, you can't replicate from ZFS to UFS - everything has to be ZFS, and so the RAM requirements are there. You could also do rsync or something like that to move data to a backup server (rough sketch below), or use a cloud solution for your backup.
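For example, something along these lines (paths and hostname are made up, and you'd want to run this from a cron job):

```
# push the CIFS dataset to a non-ZFS backup box; --delete mirrors removals
rsync -av --delete /mnt/tank/data/ backupbox:/backup/data/
```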
 

Rob2

Cadet
Joined
Oct 30, 2015
Messages
3
Hello Nick,

thank you very much for your reply.

I have investigated the NFS/iSCSI question a little more and came to the conclusion that iSCSI is only "faster" because it uses async writes. That's a point I don't like. At the same time, when I use separate shares for all 5-7 VMs (to be able to snapshot them on different schedules), I have to specify the share size with iSCSI, while I don't have to when using NFS, which is nicer.

So I think NFS is the way to go, also because the XEN iSCSI software stack is reported to be not the fastest out there. However, with NFS I only got around 5MByte/s write speed from within the VM. So I put an SSD as SLOG into the test FreeNAS box, and now I hit a wall at 50MByte/s write speed out of the VMs. I think that is OK. And I think the 50MByte/s is limited by XEN, because when accessing the box via a CIFS share I can write at 90-100MByte/s, which saturates the gigabit network here.

So I have updated my plans to use:
-4x 2TB WD RED in mirrored pairs ("RAID10"; can be extended later with 2 or 4 more drives)
-an Intel SSD DC S3510 120GB for a small SLOG partition (rough sketch below)
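Roughly, on the command line (device names are placeholders; on the real box FreeNAS would manage this via the GUI):

```
# 4x 2TB as striped mirrors
zpool create tank mirror ada0 ada1 mirror ada2 ada3

# carve a small SLOG partition out of the S3510 and attach it as log device
gpart create -s gpt ada4
gpart add -t freebsd-zfs -s 10G -l slog0 ada4
zpool add tank log gpt/slog0

# extending the pool later with another mirrored pair
zpool add tank mirror ada5 ada6
```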

Also, I have requested 32GB of ECC RAM if at all possible, but no less than 16GB ECC.

Since cost matters here, it may be that I only get one SSD at the beginning. I am taking the risk of losing the transaction log writes after a power loss if the SSD dies during the same reboot. But I think I can add a mirror for the SSD in half a year.

At the same time I am thinking about putting an L2ARC partition on the same drive, as I have seen someone on the net doing that. Maybe 10GB SLOG and 50GB L2ARC. I have to evaluate this and the proper sizes.

Replicating the shares from FreeNAS to a non-ECC Nas4Free box with UFS over rsync works. However, sometimes I wonder whether it would be OK to take the risk of using non-ECC RAM and FreeNAS on the second-level backup box. I don't know. Maybe Nas4Free with UFS is better.

On the other hand, FreeNAS ZFS snapshots of running VMs on an NFS share are not the safest thing either, since the snapshots are taken while data transactions are in flight. Hmm.
Unfortunately, I have not yet found a free, or at least relatively cheap, automated(!) backup solution that is better... still an open point...

BR
Robert
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
At the same time I am thinking about putting an L2ARC partition on the same drive, as I have seen someone on the net doing that. Maybe 10GB SLOG and 50GB L2ARC. I have to evaluate this and the proper sizes.

Hi Rob2,

Bad idea. L2ARC and SLOG have two completely different workload requirements, and sharing a device between the two makes it do a poor job of both.

L2ARC in general with 32GB of RAM is also a mixed bag at best - with NFS instead of iSCSI it might be able to provide value, but most likely not with 16GB. You might actually hurt performance there.
 

Rob2

Cadet
Joined
Oct 30, 2015
Messages
3
Hello HoneyBadger,

thanks for the hint. I know that the SLOG is a kind of write backup for the in-RAM ZIL data in case of a power loss, and that the L2ARC is a read cache. Under normal/heavy load it would not be useful to mix them.

However, my thoughts came from the performance tests with my FreeNAS test box (2x 400GB SATA striped RAID 0, connected via 1GBit Ethernet).

With CIFS I can read and write at 90MByte/s, which is limited by the workstation HD and/or the gigabit network.

However, from a Win2003 test VM I cannot get more than 40-50MByte/s write speed, no matter whether I use NFS or iSCSI. I think this is capped by the XEN hypervisor. Read speed is "normal" at around 90MByte/s.
Also, I assume the average load will be significantly below 10MByte/s with all planned VMs running.
This load should come nowhere near saturating the SSD's I/O capacity. Because of that, I thought I could put an L2ARC on it to lower read latency a little.

OK, I will start without L2ARC and monitor the final performance. I assume I can also test an L2ARC and remove it again later (sketch below).
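That is, something like this (placeholder device name):

```
# add a cache (L2ARC) device to the pool for testing...
zpool add tank cache gpt/l2arc0

# ...and remove it again if it doesn't help
zpool remove tank gpt/l2arc0
```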

Does anybody know whether 50MByte/s write speed under a XEN hypervisor on older hardware (Core2Duo-generation Xeons) is a usual value?

Br
Rob
 
Last edited:

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Hi again Rob,

Your bottleneck is probably the SLOG device. 40-50MBytes/s is still fairly good for sustained synchronous writes. What SSD are you using at the moment, the S3510?

The issue with sharing the SSD isn't bandwidth/throughput, but rather latency and queuing. You're free to test it, as adding and removing L2ARC and SLOG devices from a pool are both non-destructive operations, but I imagine you'll find it has no impact, or possibly even a negative one.
 