
Hard Drive Burn-in Testing

16TB drives are fine with -b 4096.
With larger drives (18TB+), badblocks will complain with "-b 4096":
"Value too large to be stored in data type invalid end block"

This can be fixed by using a larger block size:
"badblocks -b 8192 -ws /dev/daX"
Really helpful info - thanks qwertymodo.

In case it helps anyone else, I just ran into an issue trying to run badblocks on 18TB drives - it throws an error when the number of blocks to test is greater than the max value of an unsigned 32-bit integer (4,294,967,295):

root@delta:~ # badblocks -b 4096 -wsv /dev/da16
badblocks: Value too large to be stored in data type invalid end block (4394582016): must be 32-bit value

Assuming 4K physical blocks, 16TB and lower drives should be fine, but the problem will crop up on any drive 18TB or larger.
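As a rough sketch of where the limit bites (assuming decimal vendor terabytes and 4K blocks; a given drive's exact block count will differ slightly, as the 18TB example above shows):

```shell
# Compare the 4K block count of common capacities against badblocks'
# 32-bit block-index limit (decimal vendor TB assumed).
LIMIT=4294967295   # 2^32 - 1, the largest block index badblocks accepts

for tb in 14 16 18 20; do
  blocks=$((tb * 1000000000000 / 4096))
  if [ "$blocks" -gt "$LIMIT" ]; then
    echo "${tb}TB: ${blocks} blocks - needs a larger -b or chunked runs"
  else
    echo "${tb}TB: ${blocks} blocks - fine with -b 4096"
  fi
done
```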

It appears there are two possible solutions to this:

1) Run badblocks with a larger block size (that's still a multiple of the drive's physical block size), e.g. 8192, 16384, etc. with 4K physical blocks. I did, however, read that using a non-native block size can cause false negatives, albeit anecdotally (a few forum mentions, but I can't find a primary source).
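As a sketch of option 1, this picks the smallest power-of-two multiple of the physical sector size that keeps the block count within the limit. The byte capacity here is back-computed from the 4,394,582,016-block figure above (an assumption); on FreeBSD, `diskinfo -v /dev/daX` reports the real mediasize and sectorsize.

```shell
# Find the smallest power-of-two block size (a multiple of the physical
# sector size) whose block count fits within badblocks' 32-bit limit.
DISK_BYTES=18000207937536   # assumed: 4,394,582,016 blocks x 4096 bytes
PHYS=4096                   # assumed physical sector size
LIMIT=4294967295            # 2^32 - 1

bs=$PHYS
while [ $((DISK_BYTES / bs)) -gt "$LIMIT" ]; do
  bs=$((bs * 2))
done
echo "use: badblocks -b $bs -wsv /dev/daX"
```

For this 18TB drive it lands on 8192, matching the -b 8192 suggestion above.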

2) Split the badblocks run into chunks, each run targeting only part of the disk, using badblocks' optional positional arguments: badblocks [options] device [last_block [first_block]], both inclusive. e.g. in my specific case (4,394,582,016 4K blocks, so block indices 0 through 4,394,582,015):

badblocks -b 4096 -wsv /dev/da16 2197291007 0

followed by a second run over the upper half. Note that the last_block argument is subject to the same 32-bit limit, so the upper half has to be addressed with a larger block size; with -b 8192, 8K block 1,098,645,504 starts at the same byte offset as 4K block 2,197,291,008:

badblocks -b 8192 -wsv /dev/da16 2197291007 1098645504
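The split points come from halving the drive's total 4K block count; a quick sketch of the arithmetic (badblocks treats last_block as inclusive, hence the -1, and remember any block index passed on the command line is itself capped at 2^32 - 1):

```shell
# Compute two-chunk boundaries for a drive with TOTAL 4K blocks
# (TOTAL taken from the badblocks error message above).
TOTAL=4394582016
half=$((TOTAL / 2))                    # 2197291008

echo "run 1: blocks 0 .. $((half - 1))"
echo "run 2: blocks $half .. $((TOTAL - 1))"
```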
Useful resource on HDD burning and the use of badblocks.
Some additions would be welcome, such as badblocks' estimated runtime, more detail on the badblocks process and options (see the Jon Bentley answer here: https://superuser.com/questions/153373/how-long-will-badblocks-vws-run), and disk temperature monitoring with `smartctl` and `grep`, for instance.
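On the temperature-monitoring suggestion, one minimal approach is to filter `smartctl -A` output for the temperature attribute while the burn-in runs. This is a sketch: the attribute name and column layout vary by vendor, and the sample line below is a stand-in for real smartctl output.

```shell
# Poll drive temperature during a burn-in. A real invocation might be:
#   while true; do smartctl -A /dev/da16 | grep -i temperature; sleep 60; done
# The sample line below stands in for one row of `smartctl -A` output
# (attribute 194 Temperature_Celsius is common but not universal).
sample='194 Temperature_Celsius 0x0022 064 052 000 Old_age Always - 36'
printf '%s\n' "$sample" | grep -i temperature
```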