High frag rates with special vdevs?

jenksdrummer
Patron
Joined: Jun 7, 2011 · Messages: 250
Anyone else notice high frag rates with special vdevs?

My smaller Supermicro box config:

2 data vdevs, mirrored pairs, WD Gold 10TB
1 special vdev, mirrored pair 240GB Intel DC SATA SSD
1 NVMe M.2 500GB L2ARC
128GB RAM

Right now, I have dedup turned on, with about 5TB of data, and I am seeing about 55% frag on the special vdev. Before copying the data off and back on with dedup enabled, I recall noting about 25%, when the special vdev was holding only metadata instead of metadata + dedup tables.
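(For reference, per-vdev fragmentation is visible in the FRAG column of zpool list -v; the pool name "tank" below is just a placeholder for yours:)

# show capacity and fragmentation for the pool and each top-level vdev;
# the FRAG value on the special vdev's row is the figure in question
zpool list -v tank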

Oddly, my larger SM box doesn't seem to have this issue with the same data; frag rates are way lower...though the topology is slightly different:
6 data vdevs, mirrored pairs, WD Gold 10TB
1 special vdev, mirrored pair 240GB Intel DC SATA SSD
1 NVMe M.2 1TB L2ARC
256GB RAM
1 dedup vdev, mirrored pair NVMe M.2 1TB


Though there is one other difference: the smaller box is doing periodic snapshots.
 

sretalla
Powered by Neutrality
Moderator
Joined: Jan 1, 2016 · Messages: 9,703
Anyone else notice high frag rates with special vdevs?
I wouldn't be too worried about fragmentation on an SSD/NVMe (usually the special vdev would be one of those, no?).
 

jenksdrummer
Patron
Joined: Jun 7, 2011 · Messages: 250
I wouldn't be too worried about fragmentation on an SSD/NVMe (usually the special vdev would be one of those, no?).


FreeNAS/ZFS fragmentation isn't what we think of with Windows file fragmentation; it's fragmentation of free space - which is why I find this unusually high and strange to see on special vdevs. Dedup tables get pounded, and with all the metadata being redirected there on top of that, I get that it would compound the issue; but this seems to happen without dedup involved, and the contrasting system does not see this level of fragmentation.
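(If you want to confirm it really is the free space on that vdev that's fragmented, zdb can dump per-metaslab stats; roughly like this, where "tank" is again a placeholder and the exact output varies by ZFS version:)

# dump metaslab allocation stats for every vdev in the pool;
# check the fragmentation figures on the metaslabs under the special vdev
zdb -mm tank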
 

sretalla
Powered by Neutrality
Moderator
Joined: Jan 1, 2016 · Messages: 9,703
Fragmentation isn’t important for random access media like SSD. There’s no difference in seek time between adjacent and non-adjacent blocks.
 

jenksdrummer
Patron
Joined: Jun 7, 2011 · Messages: 250
Fragmentation isn’t important for random access media like SSD. There’s no difference in seek time between adjacent and non-adjacent blocks.
True; however, fragmentation matters when it comes to writes and finding free blocks. Granted, seek times are also not as concerning with SSD, but as contiguous free blocks become less available, that could lead to file fragmentation on writes as well, which seems like it would compound the issue and perhaps affect performance.
 

sretalla
Powered by Neutrality
Moderator
Joined: Jan 1, 2016 · Messages: 9,703
Seek times are what makes contiguous blocks important.

With no seek time problem, there's no contiguous block requirement. Blocks are just blocks, wherever they are.
 

Ericloewe
Server Wrangler
Moderator
Joined: Feb 15, 2014 · Messages: 20,194
Even SSDs benefit from sequential I/O, so while it's true that fragmentation is less of a problem on SSDs of all kinds, it's not an issue that no longer exists.

From memory, sequential scrub/resilver was about twice as fast as the legacy in-order scrub/resilver on the benchmarked SSDs (HDDs multiples of that).
 

sretalla
Powered by Neutrality
Moderator
Joined: Jan 1, 2016 · Messages: 9,703
Even SSDs benefit from sequential I/O, so while it's true that fragmentation is less of a problem on SSDs of all kinds, it's not an issue that no longer exists.

From memory, sequential scrub/resilver was about twice as fast as the legacy in-order scrub/resilver on the benchmarked SSDs (HDDs multiples of that).
I was puzzled by this at first until I did a bit of digging into what might be behind it... transaction groups at the OS level are easier to write out with contiguous blocks...

I'm not sure it will make a world of difference on SSD anyway, but indeed there is an advantage. Maybe, given the relatively small nature of metadata transactions, the situation won't have much chance to play out in any case.
 

Chris Moore
Hall of Famer
Joined: May 2, 2015 · Messages: 10,080
and the contrasting system does not see this level of fragmentation.
The larger system (according to your description) has a vdev for dedup and a separate vdev for metadata. Having two separate vdevs serving these functions is going to reduce the potential for fragmentation.
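(For anyone wanting to replicate that split, the dedup table can be given its own allocation class; roughly like this, where the pool name and device names are placeholders for your NVMe mirror members:)

# add a dedicated dedup vdev so the DDT no longer competes with metadata on the special vdev
zpool add tank dedup mirror /dev/nvd0 /dev/nvd1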
 

Chris Moore
Hall of Famer
Joined: May 2, 2015 · Messages: 10,080
Oddly, my larger SM box doesn't seem to have this issue with the same data; frag rates are way lower...though the topology is slightly different:
6 data vdevs, mirrored pairs, WD Gold 10TB
1 special vdev, mirrored pair 240GB Intel DC SATA SSD
1 NVMe M.2 1TB L2ARC
256GB RAM
1 dedup vdev, mirrored pair NVMe M.2 1TB
Were you able to measure a performance difference from now to before the addition of the special vdev?
 