6TB White Label drives of varying sizes

Status
Not open for further replies.

mellman

Dabbler
Joined
Sep 16, 2016
Messages
38
Hi All,

I have 12x 6TB White Label drives that I'm hoping to add to my existing pool. I ran some short/long SMART tests and, as far as I could tell, they were OK. I did not run badblocks tests.
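For reference, the tests I ran were along these lines (da0 is just an example device; I looped over all twelve):

Code:
smartctl -t short /dev/da0    # quick electrical/mechanical self-test
smartctl -t long /dev/da0     # full surface scan; takes many hours on a 6TB drive
smartctl -a /dev/da0          # review attributes and self-test results afterwards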

I attempted to add these to my pool, and it errored out stating it could not determine the size of one of the disks; the operation failed and my pool was detached. I was able to just import my pool again with no significant side effects, other than rclone no longer working, but I'm sure that's a simple fix.

I'm curious, though: some of my drives are 6.0TB and some are 6.3TB. Is there a way to tell FreeNAS to just use 6.0TB across all of them? I'd prefer to be able to replace a failed drive with another 6TB instead of slowly migrating to 8TB+ drives. Also, due to the size differences, I had to manually create the vdev to extend my pool.
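(For what it's worth, the manual extend I attempted looked roughly like this; "tank" and the gptids are placeholders for my pool name and the new disks' ZFS partitions:)

Code:
# Extend the pool with a second RAIDZ2 vdev built from the new disks
zpool add tank raidz2 gptid/aaaa-... gptid/bbbb-... gptid/cccc-...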

Questions:

1. Can I configure a drive so that it's only a 6TB drive instead of a 6.3TB drive?
2. Are these WL drives worth keeping? I have seen mixed reviews, and have been using four of them in a Synology unit for about two years without any issues.
3. Should I be running badblocks against these drives? In all my years managing storage across NetApp/3PAR/HWRAID/HP/EMC, we've never done any precheck on drives; just add them and let them go.

Thanks!
 
Joined
Feb 2, 2016
Messages
574
1. Probably, but I wouldn't bother. FreeNAS will automatically use as much space as is available on a drive without you having to specify the size. The only time mixing sizes is really a problem is when a drive fails and the replacement is smaller than the drive it replaces.

Are all the drives the same part number? I can't imagine a difference as large as 0.3TB; that's weird. Are they new drives, or have you wiped them clean?

2. Sure. For the right price, we'll throw any drive into FreeNAS. I'm not a drive snob.

3. We don't do anything to our drives before putting them in the server. FreeNAS is really good about identifying trouble and doing the right thing. Swapping a drive is little enough effort that we just roll the dice.

Of course, we also replicate our data to a secondary server and are really good about replacing drives as soon as they fail. If you have questionable backups or no backups or will ignore a drive failure for weeks or months, testing drives before use may be warranted.

Cheers,
Matt
 

mellman

Dabbler
Joined
Sep 16, 2016
Messages
38
1) That's my concern. If a 6.3TB drive fails, I'll need to buy a 6.3TB drive again, or just buy an 8TB, 10TB, or 12TB drive. I'd prefer to avoid that if I can, so if I can just shave off that extra 300GB somehow, that would be best in my opinion. The drives are 'new' white label drives with slightly different model numbers: WL6000GSA6454 and WL6000GSA6457.
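If it's useful, the exact capacity difference shows up in diskinfo (device name is just an example):

Code:
diskinfo -v /dev/da0    # prints mediasize in bytes, sector size, etc.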

I have my important data backed up to multiple locations and am good about replacing drives when they fail.

Thanks for the input!
 
Joined
Feb 2, 2016
Messages
574
Last I checked, 8TB and 10TB drives were still meaningfully more expensive per byte than 6TB drives. I'd roll the dice and stick with the drives you have.

Identify and set aside one of the 6.3TB drives as a spare? That way, if you have to replace a drive, you can. Or maybe keep an 8TB on the shelf in case one of the 6TB drives fails?

Cheers,
Matt
 
danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504

Should I be running badblocks against these drives?
Yes, you should run badblocks against any new drive before you put it into production.
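The commonly posted recipe is something like the following, run in a tmux/screen session because it takes days on a 6TB drive. The -w test is destructive, so only run it on disks with no data; da0 is a placeholder:

Code:
sysctl kern.geom.debugflags=0x10    # allow raw writes to the disk (FreeNAS/FreeBSD)
badblocks -b 4096 -ws /dev/da0      # destructive 4-pattern read/write test with progress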
Is there a way to tell FreeNAS to just use 6.0TB across all of them?
Shouldn't be any need to; the smallest disk in a vdev determines the capacity. So as long as you have at least one 6.0TB disk in the vdev, that's all the capacity it'll use on any of the disks there. If you're really concerned, you could manually partition the disks, but that really shouldn't be necessary.
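If you did want to partition manually, a rough per-drive sketch might look like this (da0 and the sizes are examples; 2GB of swap matches the FreeNAS default layout, and 5588GiB approximates the data partition of a 6.0TB drive):

Code:
gpart destroy -F da0                          # wipe any existing partition table
gpart create -s gpt da0                       # fresh GPT
gpart add -t freebsd-swap -s 2G -a 4k da0     # swap, matching FreeNAS's default
gpart add -t freebsd-zfs -s 5588G -a 4k da0   # data partition capped at ~6.0TB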
 

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626
Can I configure a drive so that it's only a 6TB drive instead of a 6.3TB drive?
If you have all of them in one vdev, FreeNAS will size it to the smallest drive. Just make sure you always keep at least one 6TB (not 6.3TB) drive in the vdev; otherwise the vdev will automatically grow.

my pool was detached
Sounds scary to me, a noob. Are you trying to add drives/vdevs to a pre-existing pool? Is the pre-existing pool OK?
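If it happened to me, I'd probably sanity-check with something like this ("tank" standing in for the actual pool name):

Code:
zpool status -v tank    # vdev layout and any read/write/checksum errors
zpool list tank         # capacity and health at a glance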

Sent from my mobile phone
 

anmnz

Patron
Joined
Feb 17, 2018
Messages
286
Just make sure you always keep at least one 6TB (not 6.3TB) drive in the vdev; otherwise the vdev will automatically grow.
There is a pool property named autoexpand that you can turn off to prevent this.
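For example, from the shell ("tank" is a placeholder for your pool name):

Code:
zpool get autoexpand tank       # check the current setting
zpool set autoexpand=off tank   # keep the vdev from growing automatically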
 

fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
Is there a way to tell FreeNAS to just use 6.0TB across all of them?
Confusingly enough, this is what the swap space allocation is meant for. I can't tell if you're replacing disks or creating a new vdev. If you're replacing, it's easy: set the swap allocation to the amount you want to shave off, add the drive, and then set it back to the original value. I'm not sure there's a way to do this for a new vdev in the GUI, but there's likely a way on the command line; you'd probably need to partition all of the drives yourself, and I'm not sure what would be involved there.
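If you go the replace route, a rough sketch of the shell side might be ("tank", da12, and the gptids are placeholders):

Code:
gpart show da12                              # inspect the swap/ZFS split FreeNAS created
zpool replace tank gptid/<old> gptid/<new>   # swap the new disk's ZFS partition in
zpool status tank                            # watch the resilver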
 

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626
Confusingly enough, this is what the swap space allocation is meant for. If you're replacing, it's easy: set the swap allocation to the amount you want to shave off, add the drive, and then set it back to the original value.
I don't think 300GB of swap space would make anyone happy, though.

And for sanity: IIRC there is some limit: if one has N disks/vdevs, then the N+1-th and further ones are not used for swap... I can't remember how big N is, though :)
 