Alan W. Smtih · Explorer · Joined Aug 30, 2014 · Messages: 54
The excellent [How To] Hard Drive Burn-In Testing shows two ways to run `badblocks`.
The basic way:
Code:
badblocks -ws /dev/ada#
And with `-b` to tell `badblocks` to use a specific byte size that's sometimes necessary for disks over 2TB in size:
Code:
badblocks -b 4096 -ws /dev/ada#
While that last command works fine for me, I'd like to know whether there is a way to determine in advance if it's needed (and what the number should be, in case it changes in the future).
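As I understand it (this is an assumption on my part, not something from the guide), the over-2TB issue comes from `badblocks` tracking block numbers in a 32-bit counter, so with its default 1024-byte blocks it runs out of room on large disks. If that's right, the smallest workable `-b` value can be sketched like this:

```python
def min_badblocks_block_size(disk_bytes, default=1024, max_blocks=2**32 - 1):
    """Smallest power-of-two block size whose block count fits in
    badblocks' (assumed) 32-bit block counter."""
    size = default
    # Double the block size until the number of blocks fits the counter.
    while disk_bytes // size > max_blocks:
        size *= 2
    return size

# A 3 TB disk fits within the default 1024-byte blocks:
print(min_badblocks_block_size(3 * 10**12))   # -> 1024
# A 6 TB disk needs at least 2048-byte blocks:
print(min_badblocks_block_size(6 * 10**12))   # -> 2048
```

Note that whatever value comes out should also be a multiple of the disk's sector size, which is presumably why 4096 is the number usually suggested for Advanced Format drives.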
The badblocks wiki has a section on Block Size that mentions looking up those details via:
Code:
dumpe2fs /dev/<device-PARTITION>
FreeNAS has that command. But when I run it on a test FreeNAS system (which doesn't have any vdevs or zpools set up, because I'm using it to run a badblocks test), I get the following:
Code:
dumpe2fs 1.43.3 (04-Sep-2016)
dumpe2fs: Bad magic number in super-block while trying to open /dev/ada0
Couldn't find valid filesystem superblock.
Is something else needed to make `dumpe2fs` work?
Or, is there another way to determine the block size to pass to badblocks for a given disk?
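One possibility I've been looking at (an assumption on my part, not confirmed): since FreeNAS is FreeBSD-based, `diskinfo -v` reports the disk's sector size and stripe size directly, without needing a filesystem on the disk. On 4K Advanced Format drives the stripe size seems to be the physical sector size, which would be the number to pass to `-b`. A sketch parsing sample output (the values below are hypothetical; on a real system run the command against your own `/dev/ada#` instead):

```shell
# Hypothetical output of `diskinfo -v /dev/ada0` on a 4K-sector drive.
diskinfo_output='/dev/ada0
	512             # sectorsize
	4000787030016   # mediasize in bytes (3.6T)
	7814037168      # mediasize in sectors
	4096            # stripesize'

# The stripesize line carries the physical sector size on
# Advanced Format drives; that would be the value for badblocks -b.
block_size=$(printf '%s\n' "$diskinfo_output" | awk '/# stripesize/ {print $1}')
echo "$block_size"
```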