Extend Volume from the same disk

Status
Not open for further replies.

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Come to think of it, I recall some discussion earlier that might have been relevant. It was in the context of > 2 TB disks on a controller/motherboard that only supported 2 TB, and what would happen if, after a pool was created on those disks, they were moved to a controller/motherboard that supported their full size. My memory is that, after some jiggery-pokery, the full capacity was usable, but I don't recall what jiggery-pokery was done. Some searching of the forums would probably find it, though.
Maybe if you offline the drive and replace it with itself. It might resilver and use the extra space.
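A rough sketch of what that would look like from the CLI, assuming a hypothetical pool named tank on partition da1p2; on FreeNAS you would normally do the offline/replace through the GUI, and whether an in-place replace actually picks up the extra space depends on autoexpand being set and the partition being regrown first:

Code:
zpool set autoexpand=on tank   # let the vdev grow when its underlying device grows
zpool offline tank da1p2       # take the device offline
zpool replace tank da1p2       # "replace" the device with itself and resilver onto it
zpool status tank              # watch the resilver finish
zpool list tank                # SIZE should now include the extra space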
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
@pete748 Here is the fix for your problem.

Please post the output of gpart show da1; it should look similar to this (note that I only started with a 10GB drive and increased it to 20GB for testing):

Code:
root@freenas:~ # gpart show da1
=>        40  41942967  da1  GPT  (20G)
          40        88       - free -  (44K)
         128   4194304    1  freebsd-swap  (2.0G)
     4194432  16777040    2  freebsd-zfs  (8.0G)
    20971472  20971535       - free -  (10G)


Note that the free space is at the end of the disk, directly after the freebsd-zfs partition, and that the partition we want to expand is #2.

To add the extra space at the end of the drive to the ZFS partition, follow these steps...

Step 1: Issue the command gpart resize -i 2 da1, assuming da1 partition 2 is where ZFS resides.
Step 2: Reboot FreeNAS. The remaining steps will fail if you do not reboot.
Step 3: Run zpool list to get the name of the pool. In my case it's "singledrive"; in your case it should be "Volume1".
Step 4: Grow the file system with zpool online -e poolname /dev/da1p2.
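Putting it together, an example session (assuming the device is da1, the ZFS partition is index 2, and the pool is named Volume1, as above; substitute the names reported by gpart show and zpool list):

Code:
gpart resize -i 2 da1                # grow partition 2 into the free space at the end of the disk
shutdown -r now                      # reboot FreeNAS so the new partition size is picked up

# after the reboot:
zpool list                           # confirm the pool name (Volume1 in this example)
zpool online -e Volume1 /dev/da1p2   # expand the pool onto the newly resized partition
zpool list                           # SIZE should now reflect the added space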

BAM! You should have all your storage now. Just go and verify it and enjoy it.
 
Last edited by a moderator:

pete748

Dabbler
Joined
Nov 14, 2017
Messages
10
Thank you! I'll give this a go tonight.
 
Last edited by a moderator:

KrisSpringer

Cadet
Joined
Jan 1, 2016
Messages
2
Worked perfectly! Thanks joeschmuck for the easy fix.
All the others' extended comments saying that FreeNAS shouldn't be used this way in a VM environment were not helpful in offering a solution. When seeking a solution to a simple problem, it gets pretty tiresome scrolling through everyone replying and commenting about what they would have done differently while not actually addressing the original question. Modern social media mentality, I guess.
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
All the others' extended comments saying that FreeNAS shouldn't be used this way in a VM environment were not helpful in offering a solution. When seeking a solution to a simple problem, it gets pretty tiresome scrolling through everyone replying and commenting about what they would have done differently while not actually addressing the original question. Modern social media mentality, I guess.

I think it's unfortunate that you take this perspective on the offered help. The knowledge the forum members bring here is useful not only for solving problems, but also for revealing more fundamental problems that should actually be solved. Tweaking the configuration of virtual drives under ZFS is quite like rearranging the deck chairs on the Titanic. ZFS and its benefits are mostly invalidated when used on any kind of abstracted storage. At best, that setup is a mismatch of tools, and at worst, it actually reduces performance and leads to increased risk for your data. If you must use some kind of abstracted storage, then use the appropriate tool for the job. We can help you find that tool, so you don't waste your time rearranging deck chairs on the Titanic.
 

wdeviers

Cadet
Joined
Nov 19, 2018
Messages
3
I think it's unfortunate that you take this perspective on the offered help. The knowledge the forum members bring here is useful not only for solving problems, but also for revealing more fundamental problems that should actually be solved. Tweaking the configuration of virtual drives under ZFS is quite like rearranging the deck chairs on the Titanic. ZFS and its benefits are mostly invalidated when used on any kind of abstracted storage. At best, that setup is a mismatch of tools, and at worst, it actually reduces performance and leads to increased risk for your data. If you must use some kind of abstracted storage, then use the appropriate tool for the job. We can help you find that tool, so you don't waste your time rearranging deck chairs on the Titanic.

Nick et al:

Sorry to resurrect an old thread, but this seems like a good place to start.

I'm in a similar situation. We need a reliable replacement for non-critical NFS and we're experimenting with FreeNAS. I've read the admonishments about not running ZFS on RAID, and I use ZFS on raw disks for my personal workstation. I get that it's "bad", but all I've ever found is exactly that... it's "bad". The FreeNAS hardware docs say "Genuine Very Bad Idea" with no additional follow-up as to why. I understand block storage intimately; we're using VMware against Nimble SSD storage arrays. It's "RAID", but it's not RAID in any practical sense. So you get VMDK abstraction, VMotion, and a lot of nice features in FreeNAS. I just don't happen to need ZFS redundancy.

My question, then, is what makes it so bad? It seems clear that running FreeNAS on a VM is almost universally considered an antipattern, but I cannot find an explanation that isn't somebody just repeating that somebody -else- said it was bad. It's certainly functional. Are we actually risking data by backing ZFS with a VMDK? Storage arrays have write cache, but it's not low-grade novice write cache like in a medium-grade hardware RAID controller.

I understand that you gain nothing additional from running ZFS on a storage array, but I'm skeptical of the argument that a single virtual disk in ZFS backed by an enterprise array is worse than raidz. Thoughts?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
It seems clear that running FreeNAS on a VM is almost universally considered an antipattern, but I cannot find an explanation that isn't somebody just repeating that somebody -else- said it was bad.

The blog post "FreeNAS: A Worst Practices Guide" here summarizes it fairly well:
https://www.freenas.org/blog/freenas-worst-practices/

But even if you could guarantee that the RAID controller provides the same cache-flush assurances ZFS would get from accessing the disks directly, and that the card does aggressive patrol reads and checksums on every access, ZFS would be doing all of that anyway; at best you're just adding potential performance overhead.

I understand that you gain nothing additional from running ZFS on a storage array, but I'm skeptical of the argument that a single virtual disk in ZFS backed by an enterprise array is worse than raidz. Thoughts?

My first thought here would be "why not serve the NFS export directly off the SAN?" Cut out the middleman.
 

wdeviers

Cadet
Joined
Nov 19, 2018
Messages
3
Thanks! My interpretation of the Worst Practices guide is that the largest danger is a RAID controller with a non-battery-backed write cache. I totally understand that with regard to running on hardware. But that's not really in play here, so the worst-case is that we're missing out on some of the features. I would assume that ZFS [on a single disk] is at least as resilient as XFS or ext4.

Nimbles don't have any file services. And, honestly, the file services on the old VNXes were subpar, so we decided to put NFS on VMs so at least we could migrate them around. We got burned too many times. In the future we might buy a dedicated low-end array, though that feels like a step backward.

EDIT:
Now I'm curious how this would interact with snapshots. It does seem to imply that we should expect ZFS to behave worse under a crash-consistent backup than ext4 would. But I also don't really believe that. Or maybe I don't want to believe that. I'll play around with it.
 
Last edited:

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Even with a battery or flash-backed cache on the RAID card there's still a window of opportunity for things to break.

Let's say ZFS issues a "flush cache" against a disk that's actually a RAID volume. Your controller card absorbs the 1GB or so of data into its protected RAM and starts spooling it out to disk.

Then your motherboard decides it's had enough, and goes poof. Your RAID card says "Hey, I lost power" and dumps the data to its internal NAND. That 1GB of data is trapped in your RAID controller card, and you likely won't be able to bring your pool back online without it. Hope there are no errors in the NAND flash on that card, or you'll commit corrupted data to your pool.

With an HBA, all of the data is on the disks themselves. HBA dead? Swap it out, import the pool, you're up and running. Motherboard goes up in smoke? Pull all the drives and put them in another server, import the pool ... you get the idea. Having a RAID card involved ties the drives and controller together, and you lose portability as well as risking your pool being in a nasty, inconsistent state - which ZFS by its very nature was designed to avoid.
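For illustration, the "swap it out and import" path is nothing more than the following (the pool name tank is hypothetical; -f is only needed if the pool wasn't cleanly exported from the old box):

Code:
zpool import           # scan attached disks and list pools available for import
zpool import tank      # import the pool by name on the new HBA/server
zpool import -f tank   # or force it if the pool wasn't exported cleanly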

There's plenty of other cases beyond "lost features" such as "lost performance."

Let's say your RAID controller decides that it's time for a patrol read of your array. ZFS is trying to queue up I/O to disks that should be idle and doing nothing, but in reality they're thrashing trying to keep up with the RAID controller's demands as well as their own. With an HBA, ZFS goes "time for a scrub" and has visibility into how much I/O is hitting each device, so if there's a period of low activity, it will sneak a few extra read I/Os in there for scrubbing, with thresholds based on tunables.
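On FreeBSD-based FreeNAS those thresholds are exposed as sysctls; the names below are from the legacy ZFS in FreeBSD 11 and may differ in other versions, so treat them as illustrative:

Code:
sysctl vfs.zfs.scan_idle        # how long since the last user I/O before a vdev counts as idle
sysctl vfs.zfs.scrub_delay      # ticks to delay each scrub I/O when the pool is busy
sysctl vfs.zfs.resilver_delay   # same throttle for resilver I/O
sysctl vfs.zfs.top_maxinflight  # cap on in-flight scrub I/Os per top-level vdev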

Even better, let's say you have an older RAID card like the Dell PERC H700 series with a battery-backed write cache. Everything's working fine, the write cache is able to accelerate your disks. And then one day, write performance slams face-first into the ground. You look at the OS and try to figure out what's going on. Nothing. Just seems like your disks have decided to only be able to deliver a tenth of their previous speeds. You pull logs. Analyze. Try to decide if it's worth rebooting and causing downtime. Then suddenly things are back to normal. What the hell caused it? You're never able to sort it out. And then it happens again. Oh no, you think, here we go again. You post your ventings on a forum, like this one.

And someone goes "Oh, that's the 90-day battery learning cycle. It puts your drives into write-through mode. Performance will suck until it's done. Gotta upgrade to the newer model to fix that."

Now that's for end-users with RAID cards.

You're looking at layering ZFS on top of a LUN presented from a storage array that presumably addresses these issues, obeys flush commands, etc. So what do you lose versus XFS/ext4? A little bit of performance overhead, perhaps. Some peculiarities with a copy-on-write filesystem (which might be magnified at your array if it also uses copy-on-write or redirect-on-write). If you only present a single disk, ZFS can detect corruption, but it can't correct it - so you lose that major bonus. If your array can detect and correct it, then it's fixed there.
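As a concrete way to see the "detect but not correct" case described above: on a single-disk pool (hypothetical name tank), detected corruption shows up in the CKSUM column, but there is no redundant copy to repair from:

Code:
zpool scrub tank       # read and verify every block; on a single-disk vdev errors are reported, not repaired
zpool status -v tank   # CKSUM column and the list of affected files show what was detected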

Basically, if you're willing to check and confirm that your array will fill in the gaps that ZFS won't be able to provide you - you won't lose anything. A commercial SAN array often can; a single HW RAID controller presenting virtual disks often can't.
 

wdeviers

Cadet
Joined
Nov 19, 2018
Messages
3
Thank you for a fantastic answer, I really appreciate it. That clarifies most everything; validates some assumptions and made me think a little harder about others.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
...
Basically, if you're willing to check and confirm that your array will fill in the gaps that ZFS won't be able to provide you - you won't lose anything. A commercial SAN array often can; a single HW RAID controller presenting virtual disks often can't.
Some places routinely use a SAN (like from EMC) to provide LUNs for ZFS. It generally works fine, with the caveats @HoneyBadger listed. Agreed that a local HW RAID card should not be used as a HW RAID card under ZFS.

Side note: one place I worked had a server using RAID-Z1 on EMC LUNs. Every other server I had to work on at that job used plain LUNs (or plain local disks), so this one was an aberration. It turned out that server had some application performance problems. Some were solved by the prior EMC -> newer EMC migration. When I was assigned the next EMC -> newer EMC migration, I made a special effort to "fix" the problem (create a new pool of new LUNs, copy datasets one at a time during outage windows...). That worked great. We got more performance out of the application.
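For anyone doing a similar dataset-at-a-time move, one common approach (a sketch only; the post doesn't say which tool was actually used, and oldpool/newpool are hypothetical names) is snapshot-based send/receive:

Code:
zfs snapshot oldpool/data@migrate                        # point-in-time copy of the dataset
zfs send oldpool/data@migrate | zfs recv newpool/data    # full copy to the pool on the new LUNs
# during the outage window: stop the application, send a final incremental, then cut over
zfs snapshot oldpool/data@final
zfs send -i @migrate oldpool/data@final | zfs recv -F newpool/data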
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Some places routinely use a SAN (like from EMC) to provide LUNs for ZFS. It generally works fine, with the caveats @HoneyBadger listed. Agreed that a local HW RAID card should not be used as a HW RAID card under ZFS.

That's generally the reason it's advised against - most end users are using FreeNAS as their SAN; they aren't presenting LUNs from an existing one to be used as backing for a ZFS filesystem. And in the current era of "TL;DR", people will just look at "well, someone else did it, so I can too" without digging into the nitty-gritty to understand why Person A's setup runs fine but Person B's turned into a complete disaster.
 