Do you need all drives available to create a raidz2 and have it accept data?


jdubin

Cadet
Joined
Jul 27, 2011
Messages
5
A friend is looking for a storage solution for his growing photography business. He initially needs around 4TB of space, but wants room to expand. It's come down to either a Drobo FS w/ 5x2TB drives (~$1100) or an HP Microserver running FreeNAS w/ 4x3TB drives (also ~$1100), either way giving him roughly 5.44TB of storage. In either case, he wants the ability for two drives to fail before he has to worry about losing data (e.g. raidz2).

The issue I'm currently stuck on is expandability. Assuming the 5-bay Drobo FS will accept 3TB drives, it maxes out at 8.16TB. But with the Microserver, he can add a RAID/HBA card and an external SAS cage with, say, four more drives. The problem is the classic ZFS limitation: you can't add disks to an existing raidz vdev. If he adds a second four-disk raidz2 vdev, that'll double his storage to 10.88TB. But if he rebuilds the whole thing as a single eight-drive raidz2 array, he'll get 16.32TB out of it -- quite a difference!
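To spell out the arithmetic behind those numbers (raidz2 gives you N-2 disks' worth of usable space, and a 3TB drive shows up as roughly 2.72TB to the OS):

4 x 3TB raidz2: (4 - 2) x 2.72TB = 5.44TB
two 4 x 3TB raidz2 vdevs: 2 x (4 - 2) x 2.72TB = 10.88TB
8 x 3TB raidz2: (8 - 2) x 2.72TB = 16.32TB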

So here's my imperfect, but potentially acceptable plan: when it's time to expand, pull two drives from the original raidz2 (leaving it degraded) and build the new eight-disk array, but initially supply only six drives (the four new drives plus the two pulled from the original array). Then copy the data from the original array to the new one, and once the data is on the new array, add the remaining two drives from the original array to complete the 8x3TB raidz2 array.
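In rough command terms, the copy part is the piece I'm fairly confident about (pool and device names below are just placeholders):

zpool offline oldtank ada3   # pull two members; the raidz2 should keep running degraded
zpool offline oldtank ada4
zfs snapshot -r oldtank@migrate
zfs send -R oldtank@migrate | zfs recv -F newtank   # replicate everything onto the new pool

It's creating the new pool with only six of its eight members present that I'm unsure about, hence the questions below.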

Here are my questions: 1) Will the original array still let me read data from it if two of the disks are gone, or will it insist on being repaired first? 2) Will I be able to create the new array without all eight disks physically present, perhaps using /dev/null (or something) as the two extra disks? 3) Even if I can create it, will I be able to mount it RW to copy the old data onto it? 4) Will the CPU in the Microserver (AMD NEO N36L @ 1.3 GHz) be able to handle eight 3TB drives? There will be 8GB of RAM installed.

Thanks for any suggestions!
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
That's the tough part about ZFS: expanding.

Yes, you could pull the 2 spare drives and run the array in a degraded state and copy stuff to the new array.

No, you can't add the other 2 disks later. You *might* be able to create 2 virtual/sparse files on a spare UFS disk, somehow attach them to the pool (as file vdevs or via a memory disk), and then replace them with physical disks later (one at a time). I think I actually read about someone doing that.

Yes, the Microserver should be able to handle 8 x 3TB disks.

You might want to consider more RAM later, but that's a good start.

Since a sparse file doesn't actually consume its full nominal size (say, 2TB), you could put 2 x 2TB sparse files on a 300GB disk (something reliable!).
You wouldn't want to put them on the original array, because you still need that array active while you're swapping disks, and it could be a disaster.
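Something along these lines is what I have in mind -- untested on my end, and the sizes, pool name, and device names are only an example:

truncate -s 2T /mnt/ufsdisk/sparse1   # sparse files on the separate UFS disk
truncate -s 2T /mnt/ufsdisk/sparse2
zpool create -f newtank raidz2 da1 da2 da3 da4 da5 da6 /mnt/ufsdisk/sparse1 /mnt/ufsdisk/sparse2
# later, swap each file out for a real disk, one at a time
zpool replace newtank /mnt/ufsdisk/sparse1 da7
zpool replace newtank /mnt/ufsdisk/sparse2 da8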

I'll see if I can track down that post I think I saw about doing this and update if I find it.

Update: after a quick second thought, even though you might be able to trick ZFS with a sparse-file 'disk', if those sparse files weren't able to grow enough to contain the data from the original array, you could run out of space and fail miserably. You'd have to figure out how much space you are already using and how that used space would be distributed across your 'virtual' disks.
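Checking what's actually in use is quick before you commit to anything ('tank' being whatever the original pool is called):

zpool list tank
zfs get used,available tank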

Update-2: Here's a link discussing what I was talking about:
http://opensolaris.org/jive/thread.jspa?messageID=409525
 

jdubin

Cadet
Joined
Jul 27, 2011
Messages
5
Thanks for that link -- very interesting.

So I'll create the new zpool using four new drives, the two pulled drives from the existing array, and two sparse files. I'll then immediately offline the two sparse 'drives' (I don't intend for any data to ever be written to them), which will leave the zpool degraded, but I should still be able to copy data from the old array (itself running degraded on its two remaining drives) to the new zpool with its six physical disks working. Once I verify the data was copied over correctly, I'll replace the sparse files with the two remaining drives from the old array and resilver.
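The one step I haven't seen spelled out is offlining the fake members right after creating the pool; I'd expect it to look something like this (file paths made up):

zpool offline newtank /var/tmp/sparse1
zpool offline newtank /var/tmp/sparse2
zpool status newtank   # should show the pool DEGRADED with both file vdevs OFFLINE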

I think I'm going to have to test this out on a VM using some virtual disks. If I get to it, I'll be sure to post my results.

So you don't think 8GB of RAM will be enough if we bump it up to eight disks? Right now 16GB ECC is only $260... not terrible, considering.

Thanks!
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
That sounds like it might actually work; the only thing I'm not sure about is how replacing 2 drives at the same time is going to work. Please post back and share your results!

If you have the extra $$, get the extra RAM. ZFS will be happy, and down the road, when we get a version of ZFS with the 'fancy' stuff like deduplication, it will help even more.
 

jdubin

Cadet
Joined
Jul 27, 2011
Messages
5
Hmm... this doesn't look promising. Under VMWare ESXi 4.1 (or whatever they're calling it at this point... it's the free version), I created a 64-bit FreeBSD machine w/ 2GB RAM, an 8GB HDD for the FreeNAS install disk, and eight 1GB independent, persistent disks. After installing FreeNAS, I created a couple of sparse files (dd if=/dev/null of=/var/tmp/sparse1 bs=1 seek=1073741824) and the initial four-drive zpool (zpool create -f tank4d raidz2 /dev/da1 /dev/da2 /var/tmp/sparse1 /var/tmp/sparse2).

I then went to take one of the sparse files offline (zpool offline tank4d /var/tmp/sparse1) and... CRASH. It crashed the VM, causing a reboot. Once the VM came back up, I did a zpool list, and crash. Another reboot. One more time, and crash again. I can't even destroy the pool without it rebooting (and that's even after recreating the (empty) sparse files).

I took a screen grab of the console when I did the last zpool list.

[Attached screenshot: vlcsnap-2011-08-01-11h57m44s148.jpg -- the console at the time of the last zpool list]

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
Your virtual test disks need to be larger than 2GB, more like 4 or 5GB, because FreeNAS reserves 2GB on each drive for swap space. Give that a try and I'll bet you see an improvement.
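If the pool was built through the GUI you can see that reservation with something like this (da1 being whichever data disk):

gpart show da1   # look for the ~2GB freebsd-swap partition next to the freebsd-zfs one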
 

jdubin

Cadet
Joined
Jul 27, 2011
Messages
5
I haven't had a chance to try it with larger virtual disks yet, but everything I did above was via the shell, not the GUI. Shouldn't that eliminate the swap space reserved by FreeNAS?

Also, I haven't yet been able to figure out where the config files for the ZFS pools and such are kept. Even after I zeroed out the virtual disks and removed the sparse files, doing a zpool list would still reboot the VM, which makes me think it was still trying to find that zpool I created. Any idea where this config data gets stored?
 

jdubin

Cadet
Joined
Jul 27, 2011
Messages
5
Holy cow -- that's a detailed account of his setup! Thanks, I'm going to have to go over that in more detail.

I tried it again with a fresh install of FreeNAS (v8.0-release). This time I just deleted one of the sparse files (instead of taking it offline first). I did a 'zpool list' and that didn't show me any problems. But then I did a 'zpool scrub', and, like before, the system immediately crashed (instant, spontaneous reboot). Once the system started back up, I could do a 'zpool list' and it showed the zpool as degraded. But as soon as I do a 'zpool scrub' on it, it crashes again.

So my question is... is this a VMWare thing? Or is this because I'm using sparse files? My next step is to try building this on a real machine. I don't have eight drives to use, but maybe I can use a couple of old drives and create eight partitions on one.
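If I go that route, I'm picturing something like this to carve one old disk into eight small test partitions (disk name and sizes made up):

gpart create -s gpt ada1
for i in 1 2 3 4 5 6 7 8; do gpart add -t freebsd-zfs -s 5G ada1; done
zpool create testtank raidz2 ada1p1 ada1p2 ada1p3 ada1p4 ada1p5 ada1p6 ada1p7 ada1p8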

Edit: Okay, I'm looking over the link you sent, and he's having the same issue I am with the panics when removing a sparse file. Well, glad it's not just me. I'll finish reading that in the morning, and it looks like he has a solution. Thanks again for finding that!
 