FreeNAS as Backup Server for Exchange Databases

Status
Not open for further replies.

Doug04

Cadet
Joined
Sep 21, 2017
Messages
6
Hello everyone,
First I need to say that I have been using FreeNAS for years, but never got any deeper into it than just setting up storage for some files... so basically I am a noob. I have read a lot of articles in the last few days, but I am still not sure what the best way is.
My primary job at the company I am working for is our Exchange organization. Right now I need to find a way to back up our databases. Currently I have 4 databases with a total of 4 TB... I expect them to grow to 5 TB in the next few months. This amount of data makes it very difficult to back up in a single night.
My current attempt is to use Windows Server Backup (Exchange 2010 is running on two Windows 2012 R2 machines, not virtualized). Both servers have a 6 TB iSCSI disk, which I set up on one FreeNAS server at our backup location. The two locations are connected over fibre links to 10G Netgear switches. Both the servers and the FreeNAS box have a Supermicro X10DRI-T motherboard that comes with two 10G Intel NICs. At the moment, this is a dedicated storage network used exclusively by those Exchange servers.
FreeNAS:
Version 9.10.2-U6 (I will try to upgrade to 11 in the next few days)
Motherboard: Supermicro X10DRI-T
RAM: 128 GB
CPU: 2 x Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz
HDD: 1 x 500 GB SATA for the FreeNAS system (I know that's way over the top, but I didn't have a smaller one or a USB stick at hand...)
HDD: zpool: RAID-Z1, 4 x 6 TB Hitachi HGST HDN726060ALE614
All hard drives are connected to the onboard SATA connectors.
I created a RAID-Z1 pool with two zvols. iSCSI is configured with one portal, one initiator, two targets and two extents. I used device extents (zvols).
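For reference, roughly what that looks like from the shell; the pool name "tank" and the zvol names are assumptions on my part, and the FreeNAS GUI does the same thing under Storage and Sharing:

```shell
# Two sparse (-s) zvols, one per iSCSI extent (names are placeholders).
zfs create -s -V 6T tank/exchange-bkp1
zfs create -s -V 6T tank/exchange-bkp2
# Each zvol is then exported as a device-type extent and attached to its
# own target in the FreeNAS GUI under Sharing -> Block (iSCSI).
```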
One word about this hardware setup: since I am still testing my options, I took this system because we had these parts left over in our office. I'm aware that I don't need 20 CPU cores and that 500 GB for the FreeNAS boot drive is wasted space. I also have an Intel SSD that I could use as ZIL/L2ARC, but it is a desktop SSD, with no power-loss protection or anything.
Apart from the HDD configuration, the Exchange servers have the same hardware.
I did some test backups. Backing up both servers at the same time takes about 26 h without the SSD as cache. With the SSD it takes a little less time. Overall I was hoping to get a little more performance out of this setup.
So what are your thoughts on this? Anyone with a similar setup? Any suggestions on how I could improve the performance? More RAM? I read a lot before writing this, and nearly every performance issue on this forum comes with the suggestion to solve it with more RAM. But I am not sure it would solve my problem in this case. Maxing out the RAM on this mainboard would mean 2 TB of RAM, and I don't think that can be the answer... storing almost the whole backup in RAM...
Checking the Reporting tab in the web GUI, I saw that the ARC hit ratio is about 50%, so far away from the suggested 90-100%... but since I only want to back up to this server, read hits are not that important. At least, that's what I think.
My second thought was to increase the number of HDDs. The way I understand RAID, the more drives I use, the faster I can write to the pool. I currently use a Supermicro chassis and could double the number of drives from 4 to 8.
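That intuition can be put into rough numbers; everything here is an assumption (mirrored pairs rather than RAID-Z1, ~170 MB/s sequential write per drive, 4 TB of data), so treat it as a sketch, not a promise:

```shell
# Striped mirrors deliver roughly one drive's write throughput per mirror
# vdev, because each mirror writes the same data to both of its disks.
data_mb=$((4 * 1024 * 1024))    # ~4 TB of databases, in MB
mirror_vdevs=4                  # 8 disks arranged as 4 mirrored pairs
drive_mbs=170                   # assumed sequential write speed per drive
throughput=$((mirror_vdevs * drive_mbs))   # pool write speed in MB/s
minutes=$((data_mb / throughput / 60))
echo "${throughput} MB/s -> about ${minutes} min for a full 4 TB pass"
```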
The third option I can think of is dropping iSCSI and using NFS or SMB instead. I'm not sure whether that would speed up the backups.

Maybe someone around here had solved a similar problem.

If you need anymore information please let me know.

Sorry for my bad English, but I think writing this in English will reach more people.

Greetings
Doug
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
HDD: 1 x 500 GB SATA for the FreeNAS system (I know that's way over the top, but I didn't have a smaller one or a USB stick at hand...)
I would suggest upgrading this to a mirror so you don't have a system crash if the boot drive fails.
HDD: zpool: RAID-Z1, 4 x 6 TB Hitachi HGST HDN726060ALE614
Edit: You are I/O bound because of the low drive count and the drive (array) configuration.
For your application, you are going to need a whole lot more drives, for speed, not capacity. At a minimum, I would say you need 10 drives in mirrored sets. That should give you enough bandwidth to the drives to fully utilize that 10Gb network.
What I would do is use 12 x 2 TB drives in six mirrored pairs, to leave a little headroom in both speed and storage capacity, because you don't want to fill the pool beyond 50% with iSCSI. The calculations I did assume fast drives at 170 MB/s.
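A quick sanity check on those numbers (the 170 MB/s per drive is the assumption stated above, and 10GbE moves at most about 1250 MB/s):

```shell
# Six mirrored pairs: aggregate write throughput is roughly one drive's
# speed per vdev.
vdevs=6
drive_mbs=170
pool_mbs=$((vdevs * drive_mbs))
line_rate_mbs=1250      # theoretical maximum for 10 Gb/s Ethernet
echo "pool ~${pool_mbs} MB/s vs 10GbE line rate ~${line_rate_mbs} MB/s"
```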
 

Doug04

Cadet
Joined
Sep 21, 2017
Messages
6
Thanks for your advice. I will try to upgrade to more drives. Since I only have 8 bays plus 2 for the system (mirrored system disks; I will do that), I first need to swap out my chassis.
 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
Are your Exchange databases also hosted on FreeNAS, or are they on DAS, with FreeNAS only used for streaming backups?
 

Doug04

Cadet
Joined
Sep 21, 2017
Messages
6
Are your Exchange databases also hosted on FreeNAS, or are they on DAS, with FreeNAS only used for streaming backups?
The Exchange databases are on the Exchange hosts only. There is no DAG at the moment; I will set up a DAG soon. We are currently evaluating the option of upgrading from 2010 to 2016. For now, I want to use FreeNAS only to stream backups to it.

Hi,

I also have 2 Exchange servers running (approx. 2 TB), both on Hyper-V. I am backing them up once a day with Veeam Backup (Hyper-V backup) via SMB to FreeNAS.

You can use Veeam on your host machine to back up to iSCSI, SMB, a local volume, etc.
https://www.veeam.com/windows-endpoint-server-backup-free.html
Thanks, I will take a look at this software. I just skimmed over it quickly, and right now I can't see any direct advantages over the MS Windows backup solution.
 

Doug04

Cadet
Joined
Sep 21, 2017
Messages
6
I put in 4 more disks for another try, and performance is increasing. So I will probably move everything to another chassis. But I have one last understanding question: I put in the SSD as ZIL and L2ARC cache.

The SSD is 100% busy (screenshot: Clipboard - 25. September 2017 22-49.png)

All other disks are not nearly as busy as the SSD (screenshot: Clipboard - 25. September 2017 22-50.png)

Does that mean the SSD may actually be slowing things down? Latency on the normal disks is about 1.5-1.8 ms, while the SSD is at 5 ms for writes and 11 ms for deletes. I'm just trying to get my head around this. Right now I have only 8 disks in, not the recommended 12, but they are in the recommended configuration: 2-disk mirrors in 4 vdevs.
 

Doug04

Cadet
Joined
Sep 21, 2017
Messages
6
UPDATE: I removed the ZIL (SLOG) device while backing up... and boom, it was like unleashing something. Performance instantly doubled from 1 Gb/s to 2 Gb/s of incoming data stream :)
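For anyone who lands here later: removing a log device is non-destructive and can be done with the pool online. The pool name and device name below are placeholders; check `zpool status` for the real ones:

```shell
# Remove the SLOG device from the pool (example names, adjust to yours).
zpool remove tank ada8
# Then verify the "logs" section is gone from the pool layout.
zpool status tank
```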
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
UPDATE: I removed the ZIL (SLOG) device while backing up... and boom, it was like unleashing something. Performance instantly doubled from 1 Gb/s to 2 Gb/s of incoming data stream :)
Yes, a regular consumer SATA SSD is going to be slower than the pool. To get the kind of IOPS you need for this from an SSD, it would need to connect directly to PCIe, like the one @Stux used in his recent build.

Right now i have only 8 Disk in, not the recommended 12 but in the recommended configuration 2 disks mirrored in 4 vdev's.
Based on what you said, either you are using the wrong terminology, or you didn't configure the disks the way I was trying to convey.
I will make a diagram to try and illustrate.
Code:
zpool status

  pool: mirropool
 state: ONLINE
 scrub: none requested
config:

        NAME           STATE     READ WRITE CKSUM
        mirropool      ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            c2t0d0p0   ONLINE       0     0     0
            c3t0d0p0   ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            c4t0d0p0   ONLINE       0     0     0
            c5t0d0p0   ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            c6t0d0p0   ONLINE       0     0     0
            c7t0d0p0   ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            c8t0d0p0   ONLINE       0     0     0
            c9t0d0p0   ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            c10t0d0p0  ONLINE       0     0     0
            c11t0d0p0  ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            c12t0d0p0  ONLINE       0     0     0
            c13t0d0p0  ONLINE       0     0     0
        spares
          c14t0d0p0    AVAIL
          c15t0d0p0    AVAIL

errors: No known data errors

In the pool illustrated, each mirror is a vdev (virtual device), and the pool is a stripe across all the mirrors. This gives the most IOPS (speed) for the available number of disks. I illustrated two hot spares; you could skip those if you are not averse to the risk, but don't skimp here: the spares are there for reliability, and you will want to keep an eye on the pool.
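The layout above could be built in a single command; the disk names (da0 through da13) are placeholders for whatever your system actually shows:

```shell
# Six mirrored pairs striped into one pool, plus two hot spares.
zpool create mirropool \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7 \
  mirror da8 da9 \
  mirror da10 da11 \
  spare da12 da13
```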
 

Doug04

Cadet
Joined
Sep 21, 2017
Messages
6
(Screenshot: upload_2017-9-26_8-17-39.png)


I think that, apart from the cache, it looks very much alike. And as I said, yes, I can see a huge performance increase. It's much better now :)
 