Veeam Backup is Bombing at 16 Terabytes

Joined: Dec 16, 2013 · Messages: 8
Howdy,

I've tried this ten times now with different raidz configs; currently I'm using a pool of six mirrors in a stripe, all 4TB drives. I'm trying to run a Veeam backup over the network and it consistently dies at 16TB. I have enabled large backup support in Veeam, and it will process up to 18TB (the full backup), but when the write hits that 16TB mark, KABOOM! It shuts down.

Can anyone help me figure out how to verify the sector/block/cluster size on a setup like this, or chime in about other possible issues? I'm running on a Dell R720XD with the high-end PERC controller in JBOD mode, 48GB of RAM, and FreeNAS installed to the system's CompactFlash card. Incidentally, I have had more than 16TB of data written to this system before (just spread across multiple files).

Thanks!
 

cyberjock (Inactive Account) · Joined: Mar 25, 2012 · Messages: 19,526
Is your Veeam license limited to 16TB of backups or anything? Seems odd to stop working at 16TB...

I could be wrong, but I don't believe any PERCs out there have a working JBOD design. What happens if you do smartctl -a /dev/(somedisk)?
 
Joined: Dec 16, 2013 · Messages: 8
No, I've talked to Veeam. By JBOD I just mean that there's no RAID setup on the controller. It is the higher-end card for that model of server, though, as I had performance issues with the lower-end card.

When I use the -a switch, I get "unable to detect device type" and am prompted to specify one with -d.
 

cyberjock (Inactive Account) · Joined: Mar 25, 2012 · Messages: 19,526
Yeah, that controller isn't compatible with SMART, so you should *not* be using it. :P
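
If you want to poke at it anyway, smartmontools can sometimes reach disks behind MegaRAID-family controllers (which the PERCs are) by passing an explicit device type, something like:

smartctl -a -d megaraid,0 /dev/(controllerdev)

The device node and the ,0 disk number are just guesses for your layout, and plenty of PERC firmware won't pass SMART through at all, so treat that as a maybe.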
 

cyberjock (Inactive Account) · Joined: Mar 25, 2012 · Messages: 19,526
I don't have a pool with 16TB free, so I can't write a 16TB test file myself.

Any chance you have a quota set for the dataset or something? ZFS can definitely handle files bigger than 16TB. :P
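
Quick way to double-check (the dataset path is a placeholder):

zfs get quota,refquota (poolname)/(dataset)

Both should come back as "none" if nothing is capping the dataset.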
 
Joined: Dec 16, 2013 · Messages: 8
No, I pretty much left the defaults. Do you happen to know how to "create" a 16TB file on the system? I recall reading about some folks using a tool for performance testing, but I don't recall what it is and can't think of any other way to generate a 16TB file.
 

cyberjock (Inactive Account) · Joined: Mar 25, 2012 · Messages: 19,526
Yeah..

disable compression on your pool or dataset (whichever you plan to store the file on)
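
e.g. something like this, with the dataset path as a placeholder (it only affects newly written data, and you can flip it back afterwards with zfs inherit compression):

zfs set compression=off (poolname)/(dataset)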

dd if=/dev/zero of=(filename) bs=1G count=16384

That should give you a 16TB file.

Now I'd recommend you store the file wherever your Veeam backups are going for simplicity, but anywhere should work.
 
Joined: Dec 16, 2013 · Messages: 8
OK, interesting thing... I did that and created what appears to be an 18TB file while SSH'd into the box, but the web GUI shows the pool still has all 21TB of free space available...

Did I actually create anything that takes up space?

zpool list shows:
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
junk 21.8T 74.7M 21.7T 0% 1.00x ONLINE /mnt
 

danb35 (Hall of Famer) · Joined: Aug 16, 2011 · Messages: 15,504
I'll bet you missed the "disable compression" step. A string of zeroes (or any of the same byte, really) compresses very well, and won't use anywhere close to the stated amount of disk space.
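
Easy to confirm, too: compare the file's apparent size with its on-disk size, and look at the compression settings (paths here assume the pool is mounted under /mnt; adjust for yours):

ls -lh /mnt/(poolname)/(filename)
du -h /mnt/(poolname)/(filename)
zfs get compression,compressratio (poolname)

ls reports the logical size, while du reports the blocks actually allocated; with compression on, a file of zeroes will show a huge ls size and a tiny du size.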
 

cyberjock (Inactive Account) · Joined: Mar 25, 2012 · Messages: 19,526
That's exactly what you did. Can't skip steps and expect a proper outcome. :)
 
Joined: Dec 16, 2013 · Messages: 8
OK, so I generated an 18TB file without issue. The problem, I suppose, is not on the FreeNAS side. I wonder if anyone else out there has had trouble writing files larger than 16TB over CIFS from a Windows server to FreeNAS, and if so, what the fix was?
 

cyberjock (Inactive Account) · Joined: Mar 25, 2012 · Messages: 19,526
Not sure, but some Googling says the limit for CIFS is 256TB minus 64KB, so I wouldn't expect that to be the limit you're hitting. ;)

I'm not sure how many people have manipulated 16TB files with CIFS, but it's possible there's a bug. Maybe put in a bug ticket?
 

danb35 (Hall of Famer) · Joined: Aug 16, 2011 · Messages: 15,504
If you have another *nix machine (or VM), you could mount your share via SMB on that machine and use the same dd command to produce an 18TB file over CIFS; it might help narrow down where the problem is. If you wanted to be pretty brutal about it, you could use if=/dev/random rather than if=/dev/zero, to make sure no compression at any point is changing the result.
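
On a Linux box, for instance, that would look roughly like this (server name, share name, and username are placeholders):

mkdir -p /mnt/test
mount -t cifs //(freenas)/(sharename) /mnt/test -o username=(user)
dd if=/dev/zero of=/mnt/test/bigtest bs=1M count=18874368

That count works out to about 18TB. One caveat: on Linux, /dev/random is far too slow for this much data, so /dev/urandom is the practical choice if you go the incompressible route.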
 

cyberjock (Inactive Account) · Joined: Mar 25, 2012 · Messages: 19,526
There's a dd for Windows out there too. You could just create the test file through your Windows server directly. This is probably better than using another OS, since there's a possibility the limitation is on the Windows side. ;)
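
Assuming whichever Windows dd port you grab supports the /dev/zero pseudo-device (the common chrysocome.net build does), it would look much like the Unix version, using the same count as the earlier test for an ~18TB file; the mapped drive letter and filename are placeholders:

dd if=/dev/zero of=Z:\bigtest.bin bs=1M count=18874368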
 

Mlovelace (Guru) · Joined: Aug 19, 2014 · Messages: 1,111
Are you backing up vCenter in the same Veeam job that makes up the 16TB capture? Veeam will freak out if vCenter is included in the datacenter backup; it needs to be its own job. I don't know whether it's in the job or not, but if it is, that could be the problem.
 