Non-Native Block Size

tbutler

Cadet
Joined
May 31, 2021
Messages
5
I upgraded my RAID-Z to new drives and the resilvering process went well, but now I have a "One or more devices are configured to use a non-native block size. Expect reduced performance." warning. The pool appears to be configured for 512B blocks, while the drives' native sector size is 4096B. I saw some discussion of this on the forums but was unclear: is there a way to correct this on a per-drive basis, one drive at a time, while keeping the existing pool? Or do I need to start over with a new pool? I suspect the latter is the simplest course of action, but after successfully replacing a failed drive and enlarging the pool at the same time, I'm seeing it as a challenge: I'd like to get everything configured correctly without simply wiping and starting over.
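For reference, here is a minimal sketch of how the mismatch can be confirmed from the shell, assuming a FreeBSD-based TrueNAS Core system; the pool name "tank" and the device name are placeholders to adjust for your setup:

  # Show the pool's ashift per vdev (9 = 512B blocks, 12 = 4096B blocks)
  zdb -C tank | grep ashift

  # Show a member disk's logical sector size and stripe size (physical sector hint)
  diskinfo -v /dev/ada1 | grep -iE 'sector|stripe'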

Incidentally, is there a way to guesstimate how much of a performance hit I'm taking with the non-native block size? Maybe this is a lot of worry over nothing? I'm getting about 84MB/sec both read and write over GigE at the moment. It's been a long time since I benchmarked it, but I thought I was closer to GigE's real-world ceiling the last time I tested, years ago.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Measuring performance over the network introduces factors outside the hard disks (especially with SMB). If you want to measure the pure transfer rate of the pool, you need to do it locally. Also, make sure that the data set you use for testing is considerably larger than your ARC.
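For example, a quick way to see the current ARC size so you can size the test data well above it (a sketch, assuming a FreeBSD-based TrueNAS Core system; tooling differs slightly on SCALE):

  # Report current ARC size
  arc_summary | grep -i 'arc size'

  # or read it directly from the kernel
  sysctl kstat.zfs.misc.arcstats.size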
 

tbutler

Cadet
Joined
May 31, 2021
Messages
5
Thanks. Is there a good disk I/O test you’d recommend for testing the drive locally via shell?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
fio might be a good candidate if you work out the right settings.
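Something along these lines could be a starting sketch rather than a tuned benchmark; the dataset path and sizes are placeholders, and --size should comfortably exceed your ARC as noted above:

  # Sequential write test against a dataset on the pool
  fio --name=seq-write --directory=/mnt/tank/fio-test --rw=write --bs=1M --size=50G --ioengine=posixaio --end_fsync=1

  # Sequential read test (fio lays out the file first if it does not already exist)
  fio --name=seq-read --directory=/mnt/tank/fio-test --rw=read --bs=1M --size=50G --ioengine=posixaio

Delete the test files afterwards, and treat the first read pass with some suspicion if part of the file is still cached in ARC.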
 