Why is my zpool completely full?

Status
Not open for further replies.

bvasquez

Cadet
Joined
May 29, 2013
Messages
7
Good afternoon,

Current build is: FreeNAS-9.1.1-RELEASE-x64 (a752d35)
Performance has been fine.

I have 3 RAIDZ2 vdevs in my pool, totaling 28.5 TB of usable space. Everything is showing as healthy. I allocated 23 TB to an iSCSI extent and attached it to a Windows 2008 R2 server, which shows 2.2 TB free of the 23 TB. However, FreeNAS is showing that the 28.5 TB pool is 100% full. How is this possible?
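For reference, the pool- and dataset-level usage can be confirmed from the shell; a sketch, assuming a hypothetical pool named tank:

[root@freenas] /# zpool list tank     # pool-level SIZE / ALLOC / FREE / CAP
[root@freenas] /# zfs list -r tank    # per-dataset USED / AVAIL / REFER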
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Choose any one or more of the following:

1. Your iSCSI extent, when first created, uses the largest blocks (128 KB). As you use the extent, it will be rewritten with smaller blocks, which are less efficient and take up more space. For this reason you should never create extents that will use more than 50% of your space, as they *will* grow bigger and bigger as time goes on.
2. You have snapshots set up (there is a space-breakdown sketch after this list).
3. Something else.
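To see where the space is actually going, ZFS can break usage down per dataset; a sketch, again with the hypothetical pool name tank:

[root@freenas] /# zfs list -o space -r tank
# Columns: AVAIL, USED, USEDSNAP (snapshot usage), USEDDS (the dataset's own
# data), USEDREFRESERV (refreservation), USEDCHILD (child datasets)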
 

bvasquez

Cadet
Joined
May 29, 2013
Messages
7
Good evening, cyberjock,

Thanks for the reply.

1. I understand that the reporting between Windows and FreeNAS would differ, e.g. Windows showing 2.2 TB free while FreeNAS shows, say, 500 GB free. You are correct about the 128 KB block size. However, how can more data be used than what I gave the iSCSI volume?
2. I am not using snapshots:

[root@freenas] /# zfs list -t snapshot
no datasets available
[root@freenas] /#

3. Not sure what else. Still researching and learning. Thanks for the help.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Because every block has a checksum along with metadata. If your blocks get smaller, you'll have more of them and hence more checksums and metadata. Both of those grow in relation to the number of blocks you have. ;)
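As a rough back-of-the-envelope illustration (my numbers, not measured): the same logical data in smaller blocks means proportionally more blocks, each carrying its own checksum and metadata. In a Bourne-style shell:

[root@freenas] /# echo $(( 1099511627776 / 131072 ))
8388608
[root@freenas] /# echo $(( 1099511627776 / 8192 ))
134217728

That is 1 TiB written as 128 KiB blocks (~8.4 million of them) versus 8 KiB blocks (~134 million): 16x the blocks, and therefore roughly 16x the per-block overhead.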
 

bvasquez

Cadet
Joined
May 29, 2013
Messages
7
cyberjock

Understood. I'm assuming there is no real fix to this other than to create another volume, copy the data over, then destroy the original volume and recreate it with the proper block size. What would your recommendation be for the block size so that this won't happen again in the future? This server is only used for backups.
 

bvasquez

Cadet
Joined
May 29, 2013
Messages
7
The block size would be edited in the Target tab under iSCSI, correct? Mine is showing 512 as the logical block size.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
No, that 512 is the logical block size presented to the initiator; the issue here is the ZFS block size, and you don't control that directly. You set a maximum and ZFS sizes blocks down as appropriate based on the write requests. You shouldn't set an artificially low maximum either. Leave it as it is. There is nothing to be gained from destroying and recreating the pool from scratch unless you simply can't use the pool as-is, need to change the layout, or something like that.

All you need to do is delete the iSCSI extent and recreate it at a proper size.
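Going by the 50% guideline earlier in the thread, the replacement extent would be sized against usable pool capacity; a sketch with a hypothetical pool name:

[root@freenas] /# zpool list tank    # note SIZE here is raw capacity, parity included
# With ~28.5 TB usable, an extent of ~14 TB keeps the pool at or below ~50%,
# leaving headroom for block-size churn, checksums and metadata.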
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
The dataset would need to have the recordsize set to something less than 128K before the extent file is created. Another option is to turn on lz4 compression on the dataset before the extent file is created; this creates the possibility of using less space.
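For illustration, both properties can be set with standard zfs commands before the extent file is created; the dataset name tank/extents is hypothetical, and note the next reply argues against lowering recordsize at all:

[root@freenas] /# zfs set recordsize=16K tank/extents    # caps the ZFS block size for this dataset
[root@freenas] /# zfs set compression=lz4 tank/extents   # lz4 is already the default on current builds
[root@freenas] /# zfs get recordsize,compression tank/extents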
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
rs225 said:
The dataset would need to have the recordsize set to something less than 128K before the extent file is created. Another option is to turn on lz4 compression on the dataset before the extent file is created; this creates the possibility of using less space.

Not true. The default is already lz4. You don't need to set it unless you un-set it.

And as I said above, don't set it down to something smaller "just because you can". That will hurt performance and you will see NO gain and only losses.
 