New to FreeNAS - did I make a mistake with my pool?

thecoffeeguy

Dabbler
Joined
Apr 5, 2014
Messages
39
Hey folks.
I built my first FreeNAS server at my house and started down the learning path.
First, I really like FreeNAS. It just seems to work. Granted, I have some things to learn, but I will get there.
My setup is very simple.
FreeNAS box with (2) 2TB 7200 SATA drives. I set this up with iSCSI attached to an ESXi cluster. Works well.

However, as I was looking through the UI today, I realized I may have made a mistake when I created my first (and only) pool. Not sure how I did it (but I can guess), but when I created the pool, I only selected about 900 GB out of the total 1.8 TB raw. I am missing 600+ GB of usable space.

So my simple question is, is there a way to recover from this, or do I need to start over?
I will start there and happily provide more information as needed to help me out.
Much thanks

TCG
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You really do not want to use more than 50-60% of an iSCSI share, and ideally only 25-50%. Over time, fragmentation will increase and the performance will drop.


From my perspective, your setup is already pretty ideal. Other people will point out that it's possible to go beyond 50%, and yes, it's possible to hit 95% without it being catastrophic, but VM IOPS density issues combined with fragmentation rapidly put you in a terrible position for performance. As you do updates on your vmdk's, sequentiality plummets and from the VM's PoV, both sequential and random performance falls dramatically.
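To put rough numbers on the advice above, here is a minimal sketch of the fill-ratio math using the OP's figures (a ~900 GB zvol on ~1.8 TB of mirrored space). The 25-50% band is this thread's rule of thumb, not an official FreeNAS limit:

```python
# Rules of thumb from this thread, not hard ZFS limits:
# iSCSI/VM block storage should ideally keep the pool 25-50% full.
IDEAL_MIN, IDEAL_MAX = 0.25, 0.50

def pool_fill_ratio(used_gb, pool_gb):
    """Fraction of the pool consumed by the zvol/share."""
    return used_gb / pool_gb

# The OP's accidental layout: ~900 GB allocated out of ~1.8 TB usable.
ratio = pool_fill_ratio(900, 1800)
print(f"fill ratio: {ratio:.0%}")  # 50% -- right at the ceiling of the ideal band
print(IDEAL_MIN <= ratio <= IDEAL_MAX)  # True
```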
 

thecoffeeguy

Dabbler
Joined
Apr 5, 2014
Messages
39

Oh wow. That is amazing. That makes me feel so much better. I had visions of having to recreate all my work, which would take me a long time.
Really appreciate that. Made my night for sure!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
So you accidentally arrived at a great configuration? :)

For future drive-purchasing reference, you ideally want to buy drives that are substantially larger than your desired amount of data to be stored.

ZFS has two basic things it is good at:

1) Large sequential files (backups, ISOs, zips/tarballs, etc.), which ZFS can handle very efficiently with a RAIDZ pool, and here the advice is not to fill the pool with more than 80, 85, *maybe* 90%. This is general advice and there are specific exceptional cases.

2) Small random I/O (databases, VMDK, source code trees with a million files, etc.) which ZFS deals with best through mirrors. ZFS fragmentation is a performance killer on HDD. ZFS mitigates this for writes by having gobs (and I mean ridiculous gobs) of free space. If you have a pool that is 10% full, you will see ZFS write both sequential and random data to HDD at speeds that would make you think it is SSD. Seeks for reads are mitigated by ARC and L2ARC. So you can get an insanely fast VM storage engine if you play the game correctly.

The thing is, most people cannot stomach losing "all that capacity." But this is simple computer science. You don't get something for nothing. ZFS transforms large amounts of free space into high write speeds and lower fragmentation. We don't have to like it but it's an equitable tradeoff.
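A quick sketch of the drive-purchasing rule of thumb above: work backwards from the data you plan to store to the raw disk you should buy. The fill-ratio targets are this thread's guidance, not hard ZFS limits, and the function assumes a simple 2-way mirror layout:

```python
def raw_mirror_capacity_needed(data_tb, target_fill=0.5):
    """Raw capacity (TB) for a 2-way mirror, keeping the pool at target_fill.

    Usable space in a 2-way mirror is half the raw capacity, so
    raw = (data / target_fill) * 2.
    """
    usable_needed = data_tb / target_fill
    return usable_needed * 2

# To hold 2 TB of VM data at <=50% pool occupancy on mirrors:
print(raw_mirror_capacity_needed(2.0))       # 8.0 TB raw (e.g. 2 x 4 TB drives)
# A sequential-backup pool can run fuller, say 80%:
print(raw_mirror_capacity_needed(2.0, 0.8))  # roughly 5 TB raw
```

This makes the "you don't get something for nothing" tradeoff concrete: halving the target fill ratio doubles the raw disk you need to buy.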
 

thecoffeeguy

Dabbler
Joined
Apr 5, 2014
Messages
39

I know. I feel very lucky that this happened. Talk about a blessing in disguise.
Very helpful information to have and know. I was just about to test running VMs off of my FreeNAS box. It is a dev environment, so I have a lot of room to test things out.
I appreciate the help!

TCG
 