Recently I completely rebuilt my FreeNAS box with a fresh install, moving from FreeNAS 9.3 to the new FreeNAS 11. The reason was that I had run out of space and needed to add another vdev. I could have kept the old pool intact and just added the new vdev to it, but I decided to destroy the whole pool and start from scratch. Anyway, the install went well and I've since restored all my data from backup. I copied everything drive by drive, avoiding copying from multiple drives in parallel.
As the zpool output below shows, the pool is at over 30% fragmentation. Considering the pool is new and nothing has been deleted or moved around, is this normal? It seems way too high. Can someone please explain what is going on? Have I missed something?
My old pool was about 54.5 TB and had seen 2-3 years' worth of use, with data written and then deleted. It was close to, but not over, 90% full, and it showed only 13% fragmentation.
System specs:
Xeon E3-1220v2
Supermicro X9SCM-F
32 GB ECC
30 x 3 TB WD drives (mix of Reds and Greens) running off of 4 x LSI 9211-8i HBAs
Code:
[root@freenas ~]# zpool list
NAME           SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
Main          81.8T  47.6T  34.2T         -    32%    58%  1.00x  ONLINE  /mnt
freenas-boot  74.5G   996M  73.5G         -      -     1%  1.00x  ONLINE  -
Code:
[root@freenas ~]# zpool list -v Main
NAME                                            SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
Main                                           81.8T  47.6T  34.2T         -    32%    58%  1.00x  ONLINE  /mnt
  raidz2                                       27.2T  16.2T  11.0T         -    34%    59%
    gptid/8483a424-8970-11e7-9ca1-0cc47a00428b     -      -      -         -      -      -
    gptid/859fab5d-8970-11e7-9ca1-0cc47a00428b     -      -      -         -      -      -
    gptid/8695a54b-8970-11e7-9ca1-0cc47a00428b     -      -      -         -      -      -
    gptid/877852bf-8970-11e7-9ca1-0cc47a00428b     -      -      -         -      -      -
    gptid/886c1d5e-8970-11e7-9ca1-0cc47a00428b     -      -      -         -      -      -
    gptid/89520ebf-8970-11e7-9ca1-0cc47a00428b     -      -      -         -      -      -
    gptid/8a42d9ea-8970-11e7-9ca1-0cc47a00428b     -      -      -         -      -      -
    gptid/8b1d2894-8970-11e7-9ca1-0cc47a00428b     -      -      -         -      -      -
    gptid/8c049fcb-8970-11e7-9ca1-0cc47a00428b     -      -      -         -      -      -
    gptid/8cee810c-8970-11e7-9ca1-0cc47a00428b     -      -      -         -      -      -
  raidz2                                       27.2T  16.5T  10.8T         -    34%    60%
    gptid/8e0dcc15-8970-11e7-9ca1-0cc47a00428b     -      -      -         -      -      -
    gptid/8efdbefb-8970-11e7-9ca1-0cc47a00428b     -      -      -         -      -      -
    gptid/8feeaea3-8970-11e7-9ca1-0cc47a00428b     -      -      -         -      -      -
    gptid/926901a7-8970-11e7-9ca1-0cc47a00428b     -      -      -         -      -      -
    gptid/934c7ffa-8970-11e7-9ca1-0cc47a00428b     -      -      -         -      -      -
    gptid/94327c62-8970-11e7-9ca1-0cc47a00428b     -      -      -         -      -      -
    gptid/95190c9c-8970-11e7-9ca1-0cc47a00428b     -      -      -         -      -      -
    gptid/960758d3-8970-11e7-9ca1-0cc47a00428b     -      -      -         -      -      -
    gptid/97343ec5-8970-11e7-9ca1-0cc47a00428b     -      -      -         -      -      -
    gptid/98633dd2-8970-11e7-9ca1-0cc47a00428b     -      -      -         -      -      -
  raidz2                                       27.2T  14.8T  12.4T         -    30%    54%
    gptid/99a9d0fd-8970-11e7-9ca1-0cc47a00428b     -      -      -         -      -      -
    gptid/9a87942a-8970-11e7-9ca1-0cc47a00428b     -      -      -         -      -      -
    gptid/9b871d52-8970-11e7-9ca1-0cc47a00428b     -      -      -         -      -      -
    gptid/9cdba1a4-8970-11e7-9ca1-0cc47a00428b     -      -      -         -      -      -
    gptid/9e287b7c-8970-11e7-9ca1-0cc47a00428b     -      -      -         -      -      -
    gptid/9f85d867-8970-11e7-9ca1-0cc47a00428b     -      -      -         -      -      -
    gptid/a10d6f4a-8970-11e7-9ca1-0cc47a00428b     -      -      -         -      -      -
    gptid/a2769093-8970-11e7-9ca1-0cc47a00428b     -      -      -         -      -      -
    gptid/a3ceef3b-8970-11e7-9ca1-0cc47a00428b     -      -      -         -      -      -
    gptid/a4bb238a-8970-11e7-9ca1-0cc47a00428b     -      -      -         -      -      -
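For what it's worth, I did sanity-check the CAP column against ALLOC and SIZE, and the numbers are internally consistent (a quick back-of-the-envelope check, assuming zpool simply truncates ALLOC/SIZE to a whole percent), so it's really only the FRAG figure that puzzles me:

```python
def cap_pct(alloc_tib: float, size_tib: float) -> int:
    """CAP as zpool appears to report it: ALLOC/SIZE truncated to a whole percent."""
    return int(alloc_tib / size_tib * 100)

# (ALLOC, SIZE) pairs taken from the zpool list output above, in TiB.
for name, alloc, size in [
    ("Main (pool)",   47.6, 81.8),
    ("raidz2 vdev 1", 16.2, 27.2),
    ("raidz2 vdev 2", 16.5, 27.2),
    ("raidz2 vdev 3", 14.8, 27.2),
]:
    print(f"{name}: {cap_pct(alloc, size)}% full")
# Matches the reported CAP values: 58%, 59%, 60%, 54%.
```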