File size limit when copying files larger than 1 TB?

Status
Not open for further replies.

hdezmora

Cadet
Joined
Mar 18, 2013
Messages
4
Hello there, I'm trying to copy a 1.5 TB file from a NetApp NFS share, on a host running RHEL 6.6, into a FreeNAS NFS share, and I'm getting this error message: "cp: writing `/freenas/mountpoint/filename': File too large". The copy fails after writing a little under 1 TB of the 1.5 TB file. We are running FreeNAS-8.3.1-RELEASE-x64 (r13425) on a quad-core Intel Xeon with 32 GB of RAM. Please let me know if there is any limit I need to adjust so that files larger than 1 TB can be copied onto our FreeNAS system. Also, let me know if you need additional info to troubleshoot this issue.
Thanks,
-Hugo
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
There is no practical limit - something is up with your system.
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
Try running the command limit or ulimit -a on the client and see if there is a limit on file size.
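For example, on the RHEL client it would look something like this (a generic sketch, not output from the poster's system):

```shell
# Show all per-process resource limits for the current shell (bash/sh)
ulimit -a

# Or query just the maximum size of a file the process may write;
# "unlimited" means no per-process cap is in play
ulimit -f
```

Note that ulimit is a shell builtin, so it reports the limits of the shell you run it from; check it in the same session that runs the cp.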
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Do you have a quota set and your file is larger than the free space remaining on the dataset?

Do you have reservations set on other datasets that are making the free space on your current dataset so low the file won't fit?

I'm betting it's one of the two.
 

hdezmora

Cadet
Joined
Mar 18, 2013
Messages
4
Guys, sorry for my delay in answering: a newborn at home, holidays, family, and high-priority tasks at the office have kept me from working on this problem with my FreeNAS system.

There is no practical limit - something is up with your system.

Eric, that's the issue: I'm not able to identify the problem on my system... :(

Try running the command limit or ulimit -a on the client and see if there is a limit on file size.

rs225, file size is set to unlimited.

Do you have a quota set and your file is larger than the free space remaining on the dataset?

Do you have reservations set on other datasets that are making the free space on your current dataset so low the file won't fit?

I'm betting it's one of the two.

cyberjock, this is what I have:

Code:
[root@storage] ~# zpool list
NAME     SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
mypool  27.2T  11.1T  16.2T  40%  1.00x  ONLINE  /mnt

[root@storage] ~# zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
mypool               20.2T   232G   512K  /mnt/mypool
mypool/MySQL          500G   451G  49.1G  /mnt/mypool/MySQL
mypool/Oracle         500G   379G   121G  /mnt/mypool/Oracle
mypool/OracleApps       1T   919G   105G  /mnt/mypool/OracleApps
mypool/OracleBku       10T  3.89T  6.11T  /mnt/mypool/OracleBku
mypool/bkuMySQL       500G   500G   356K  /mnt/mypool/bkuMySQL
mypool/sandbox        500G   479G  21.3G  /mnt/mypool/sandbox
mypool/oraArchDev1     10G  6.42G  3.58G  /mnt/mypool/oraArchDev1
mypool/oraDataDev2     20G  12.7G  7.31G  /mnt/mypool/oraDataDev2
mypool/test1         7.03T  2.97T  2.03T  /mnt/mypool/test1
mypool/test2          202G   198G  1.63G  /mnt/mypool/test2

The dataset in question is OracleBku, which has 3.89 TB available, and the file I would like to transfer is 1.5 TB.

Thanks,
-Hugo
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
That output doesn't include reservations or quotas, so that output doesn't help me. :P
 

hdezmora

Cadet
Joined
Mar 18, 2013
Messages
4
That output doesn't include reservations or quotas, so that output doesn't help me. :p
Is this the output you are asking for?

Code:
[root@storage] ~# zfs get reservation,quota
NAME                PROPERTY     VALUE  SOURCE
mypool              reservation  none   default
mypool              quota        none   default
mypool/MySQL        reservation  none   default
mypool/MySQL        quota        500G   local
mypool/Oracle       reservation  none   default
mypool/Oracle       quota        500G   local
mypool/OracleApps   reservation  none   default
mypool/OracleApps   quota        1T     local
mypool/OracleBku    reservation  none   local
mypool/OracleBku    quota        10T    local
mypool/bkuMySQL     reservation  none   default
mypool/bkuMySQL     quota        500G   local
mypool/sandbox      reservation  none   default
mypool/sandbox      quota        500G   local
mypool/oraArchDev1  reservation  none   default
mypool/oraArchDev1  quota        10G    local
mypool/oraDataDev2  reservation  none   default
mypool/oraDataDev2  quota        20G    local
mypool/test1        reservation  none   default
mypool/test1        quota        none   default
mypool/test2        reservation  none   local
mypool/test2        quota        none   local

Thanks,
-Hugo
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
So if you have a 1.5 TB file, the only places you can put it are the pool itself and the test1, test2, and OracleBku datasets. Everywhere else the quota is smaller than the file, so it can't fit.
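If the file really has to land under a quota'd dataset, the quota can be checked and raised; a sketch using the dataset name from the thread (the 12T value is purely illustrative):

```shell
# Show the quota and the space still available on the target dataset
# (-H drops headers, -o picks the columns, for easy scripting)
zfs get -H -o name,property,value quota,available mypool/OracleBku

# Raise the quota if the file will not fit under the current one
zfs set quota=12T mypool/OracleBku
```

That said, given the outputs above OracleBku already has room for a 1.5 TB file, which is what makes this error odd.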
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Set 1:

Code:
# uname -r
8.3-RELEASE-p4
# zfs set compression=zle storage0/Scratch


Code:
root@ubuntu:~# mount storage0:/mnt/storage0/Scratch /tmp/foo
root@ubuntu:~# cat /dev/zero >> /tmp/foo/testing
cat: write error: File too large
root@ubuntu:~#


Code:
# ls -al /mnt/storage0/Scratch/testing
-rw-rw-rw-  1 root  wheel  1099511627775 Jan  6 09:12 /mnt/storage0/Scratch/testing


Set 2:

Code:
# uname -r
9.3-RELEASE-p5
# zfs set compression=zle storage1/Scratch


Code:
root@ubuntu:~# umount /tmp/foo; mount storage1:/mnt/storage1/Scratch /tmp/foo
root@ubuntu:~# cat /dev/zero >> /tmp/foo/testing


Code:
# ls -al /mnt/storage1/Scratch/testing
-rw-r--r--  1 root  wheel  1100836700160 Jan  6 09:28 /mnt/storage1/Scratch/testing


Suggested fix: run current FreeNAS.
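As a side note, the exact size where the 8.3 write stopped is telling. A quick check (mine, not from the outputs above) shows it is one byte short of 1 TiB, which suggests a 40-bit file-size cap somewhere in the older NFS/FreeNAS 8.x stack rather than anything in ZFS or the client:

```shell
# 1099511627775 is the size at which cat stopped on 8.3-RELEASE-p4.
# It equals 2^40 - 1, i.e. one byte short of 1 TiB.
echo $(( 1099511627775 == (1 << 40) - 1 ))   # prints 1
echo $(( (1 << 40) - 1 ))                    # prints 1099511627775
```

The 9.3 run sails past that boundary (1100836700160 bytes and still writing), which matches the suggested fix.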
 