Do we have to use ZFS with 8.3 for raid10


tomavery

Cadet
Joined: Jan 15, 2013
Messages: 2
[noob]

Loaded 8.3 from a USB stick and got things going, then figured out that IE didn't support the GUI very well.

Switched to a non-IE browser for the GUI.

I have 4 x 1 TB WD Greens, plus one SSD.

System sees all of them.

I could not make a UFS RAID10 from the GUI.

So I tried a simple UFS volume on the standalone SSD: no problem. I made the volume, used the GUI to create the iSCSI target, and connected it to Windows. Yay!

Back at the command line, I made a RAID1 and then a second RAID1, both with the same name.

A gmirror status at the CLI showed everything was complete.
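
In case it helps, the commands were roughly like the following (the mirror names and disk device nodes here are just examples, not exactly what I typed):

    # two two-disk mirrors out of the four 1 TB Greens
    gmirror label -v gm0 /dev/ada0 /dev/ada1
    gmirror label -v gm1 /dev/ada2 /dev/ada3
    # confirm both mirrors show up and are COMPLETE
    gmirror status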

The GUI didn't see anything. (Did I need to run newfs here for the GUI to see them?)

Back to the command line.

Made a gstripe from the two RAID1s, no problem.

gstripe status shows all is well.

Ran newfs to make a filesystem.
Checked again, all is well.
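
Roughly, that stripe-and-filesystem step looked like this (again, the names are only examples):

    # stripe the two mirrors together into one RAID10-style device
    gstripe label -v st0 /dev/mirror/gm0 /dev/mirror/gm1
    gstripe status
    # put a UFS filesystem on the stripe
    newfs /dev/stripe/st0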

Mounted the drive. The command line sees it, but the GUI still doesn't, and it throws an error if I try to import the drives. I also can't use it as an iSCSI target. I tried making a target by modifying istgt.conf by hand, but no luck; I can't figure it out.
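
For what it's worth, the sort of thing I was adding to istgt.conf was roughly this; the target name, unit number, and storage path are my own guesses, which may well be part of the problem:

    [LogicalUnit2]
      TargetName raid10extent
      Mapping PortalGroup1 InitiatorGroup1
      UnitType Disk
      LUN0 Storage /dev/stripe/st0 Auto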

Am I doing something unsupported, or is it just noobishness stopping me? (You know, lack of experience at doing this.)

Or is it something I need to do in raw FreeBSD?

Or is it something I should get a hardware RAID controller for and just present to FreeNAS?

I would use ZFS, but others are convincing me that for large Oracle work from a Solaris box I need RAID10. Thanks for your advice.




jgreco

Resident Grinch
Joined: May 29, 2011
Messages: 18,680
I don't know the answer to your original question w.r.t. UFS; I'd have to go try it to see what happens.

However, I wanted to thank you for describing the underlying reason for your request.

"For large Oracle work from a Solaris box" is not a compelling reason not to use ZFS nor is it a compelling reason to use RAID10. That's more a result of database administrators who have learned what works for them on conventional hardware with hardware RAID controllers.

ZFS is a little different. The underlying storage principles are similar; you can, as admin says, make the equivalent of a RAID10, and many of the reasons to do so are similar between ZFS and other RAID systems. Simple mirroring and striping tends to be fastest for many workloads, especially database work.

So, if your setup is a total of four 1 TB Green drives and your intention is to mirror them, you wind up with 2 TB of space for your database. That's not a horribly big database. Do you have any idea what the working set of the database is? Because ZFS has some awesome features that can make your database go fast. For example, adding an SSD as L2ARC allows your system to build up a cache of the most frequently accessed data on the SSD, and a 240 GB SSD for $160 lets up to 1/8th of your entire storage pool be read-cached on super fast flash. There are also options available to accelerate synchronous writes with a ZIL.
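
On FreeNAS you would normally build all of this through the GUI rather than the command line, but just to make the idea concrete, at the ZFS level a RAID10-style pool with an SSD read cache amounts to something like this (pool and device names are made up):

    # two mirrored pairs; ZFS stripes across the mirrors (the RAID10 equivalent)
    zpool create tank mirror ada0 ada1 mirror ada2 ada3
    # add the SSD as an L2ARC read cache
    zpool add tank cache ada4
    # optionally, a separate fast device as a log (ZIL) to speed up sync writes
    zpool add tank log ada5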

One of the downsides to ZFS, however, is massive memory consumption. It makes good use of it all, but it's a bit frustrating to those of us who still fondly remember our first systems with more than one megabyte of memory.
 

tomavery

Cadet
Joined: Jan 15, 2013
Messages: 2
Thanks for all the info. I followed the instructions and made a striped-and-mirrored RAID of four disks using ZFS.

Also bumped the memory up to 16 GB. We did have an SSD in there, but we found we were having boot issues when it was installed, so we took it out and ran from the USB drive. All of that seems to be fine, and we could grab an extent with Windows. Now I've come to an unexpected challenge: I thought there was an iSCSI client out there last year for Solaris (SPARC) 9, and now I can't seem to find it. Anyone have a link to one?
 