JBOD vs a bunch of RAID0 disks

Status
Not open for further replies.

RthuR

Cadet
Joined
May 7, 2014
Messages
6
Hi everyone!

I'm planning on using a gen8 Microserver (which I already had), along with 4 x 2tb WD red drives as a home nas.
I understand that ideally the hardware RAID controller would be set to JBOD / AHCI mode to let FreeNAS access the drives directly; however, this isn't ideal on the MicroServer, as it leads to significantly increased fan speeds.
Is it acceptable to run each drive as a separate RAID0 array? I would still have SMART monitoring through the onboard iLO (with email notifications). Are there any other disadvantages?

Thanks a lot!

Arthur
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
First, there is no "raid0" for ZFS. Second, there is no "JBOD" for ZFS.

You can run each drive as a separate pool.

You can also run each drive as a separate vdev in the same pool.

You could even create several pools with several disks in each.

But your question is nonsensical because your perceived definition of "raid0" and "JBOD" just don't exist to ZFS.
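To make the distinction concrete, the layouts above can be sketched with `zpool` commands (device names like ada0 are placeholders for whatever your disks enumerate as):

```shell
# One pool per disk (independent pools, no redundancy):
zpool create tank1 ada0
zpool create tank2 ada1

# All disks as separate top-level vdevs in one pool
# (striped; losing any single disk loses the whole pool):
zpool create tank ada0 ada1 ada2 ada3

# One pool with ZFS handling the redundancy (RAIDZ2, four disks):
zpool create tank raidz2 ada0 ada1 ada2 ada3
```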

And if you are trying to mix hardware RAID with ZFS, you should just give up right now because you'll pay for it later with data loss. I'm dealing with a company right now trying to recover their data. It's costing them tons of money for every day of downtime, and we're already 100% sure 1/2 the data is gone for good.
 

RthuR

Cadet
Joined
May 7, 2014
Messages
6
Sorry for the confusion - I'm not talking about ZFS but rather about the hardware RAID controller on my motherboard. I can either set it to JBOD / AHCI mode, in which case the drives are presented to the OS as a plain bunch of SATA drives, or I can create a single-drive RAID0 volume out of each disk as a workaround for the fan issue.

Is it acceptable to run FreeNAS (configured with whatever combination of RAIDZ and vdevs) on drives that are actually RAID0 volumes?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You want AHCI. ZFS is supposed to handle the "RAID-ing".

hardware RAID + ZFS = fail.
 

RthuR

Cadet
Joined
May 7, 2014
Messages
6
Thanks for the reply!

I fully understand that ZFS is supposed to handle the "RAID-ing"; I'm just trying to understand what the issues are with presenting a bunch of RAID0 volumes to ZFS.
I would have a RAID0 volume for every hard drive, thus letting ZFS actually handle the drive redundancy via RAIDZ2.
 

xcom

Contributor
Joined
Mar 14, 2014
Messages
125
RthuR said:
I'm just trying to understand what the issues are with presenting a bunch of RAID0 volumes to zfs...


No issues at all.

I am doing just that: I presented single-disk RAID0 volumes to FreeNAS. Because FreeNAS has no way to pinpoint failed drives on a big array, you get the advantage of the RAID card helping you with that.
 

RthuR

Cadet
Joined
May 7, 2014
Messages
6
xcom said:
I am doing just that. I presented raid0 volumes of one disk each to freenas...

Great - thanks for the pointer!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Actually, by going with RAID0 you get all the worst case scenarios in your face.

1. You are forced to use that RAID controller for your pool forever (aka hardware lock-in).
2. You are likely no longer able to run SMART monitoring.
3. You are likely no longer able to run SMART testing.
4. You may be susceptible to the "write hole" and all it takes is one write hole in the ZFS metadata and your entire pool will be gone forever.
5. The RAID controller can mask drive errors without reporting them.
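To illustrate what points 2 and 3 refer to: with disks exposed directly over AHCI, SMART monitoring and testing are simple smartctl one-liners (device name is a placeholder), which a hardware RAID volume will typically block:

```shell
# Read SMART health status and attributes directly from the disk:
smartctl -a /dev/ada0

# Kick off a long self-test, then check its result later:
smartctl -t long /dev/ada0
smartctl -l selftest /dev/ada0
```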

Do I need to say more? There are so many reasons RAID + ZFS = FAIL that it's just not even worth it. Some people have even gone so far as to say that if you are convinced you need hardware RAID, you shouldn't even be using FreeNAS. No joke: about 75% of the people that contact me for data recovery used hardware RAID. Just like the guy I've been working with since Monday.

The manual says not to.. so why are we even having this discussion?
 

RthuR

Cadet
Joined
May 7, 2014
Messages
6
cyberjock said:
Actually, by going with RAID0 you get all the worst case scenarios in your face...

Thanks for taking the time to point me in the right direction - I'll definitely make sure my RAID controller is in JBOD mode. I certainly don't believe I need hardware RAID - I completely understand that ZFS provides its own software RAID mechanisms (RAIDZ1-3).

I really don't know much about any of this, apologies if I'm wasting your time. I'm however still a bit curious about the hypothetical issues (purely from an educational point of view).

1) Assuming the RAID controller doesn't actually do anything when every disk is its own RAID0 volume (as I've confirmed with my RAID card - it is possible to freely switch between a bunch of single-disk RAID0 volumes and JBOD without any loss of data), this issue is pretty much mitigated.
2) 3) Assuming SMART monitoring and testing can be performed using the onboard firmware, these issues are also mitigated.
4) I'm fairly certain that using a RAID0 volume per disk ensures that we are not susceptible to the write hole.
5) This issue appears to be the key one (at least to me). Is there any way to test this or to find out more?
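For what it's worth on 2) and 3): smartmontools can sometimes tunnel SMART requests through HP Smart Array controllers via the cciss pass-through. Whether this works with the MicroServer's controller is something I'd have to test; this is just the general shape of the command (the device node and drive index depend on the system):

```shell
# Ask smartctl to reach the physical disk behind an HP Smart Array
# controller; "cciss,N" selects the Nth drive on the controller.
smartctl -a -d cciss,0 /dev/ciss0
```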
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
  1. Don't assume anything when it comes to RAID controllers. If and only if you can successfully move your disks at will between the controller and a different one (like the chipset's controller) without any problems can you assume it won't be a problem.
  2. Running those automatically will probably be a pain...
  3. ...especially if you can't set up e-mail notifications of failing disks.
  4. You're probably right.
  5. Carefully read through the manufacturer's documents. If your model is a popular one, someone here might have further insights, too, so you might want to tell us what model it is.

Typically, the big barrier is the communication between the firmware and FreeNAS; its simplicity or outright absence causes most of the problems. That's why LSI's 2008/2308 controllers are so popular around here: with IT firmware, they simply expose everything and let FreeNAS do the rest.
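A quick sanity check for point 1 (a sketch; exact output depends on your hardware) is to see whether the OS can read the drives' real identities:

```shell
# On FreeBSD/FreeNAS, list the storage devices the OS actually sees.
# With true passthrough/AHCI you should see each drive's own model
# string (e.g. a "WDC WD20EFRX..." entry per disk); a hardware RAID
# setup shows the controller's logical-volume name instead.
camcontrol devlist
```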
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
RthuR:

No. Just no. This has been explained dozens of times. Sorry, but I'm not really going to hash this out again.

What you(and the casual reader) need to know is this:

1. FreeNAS devs have made it abundantly clear that RAID is bad.
2. They've engineered FreeNAS to work best with no RAID controller.
3. We are not the first file server OS to have demands such as "no RAID controllers".
4. We've seen first-hand experience around here that RAID really is bad.
5. If you think you know better than everyone else, do it. I dare you. It'll be your data that is lost, and money in my pocket when you need data recovery. The last guy that did RAID and called a data recovery expert spent $21k for 450GB of data. They charge on a per-GB basis based on the size of the pool, so imagine what a "meager" 1TB pool would cost. ;)

Right now, as I write this response I have been working with another company that has lost 4 days of work because they chose to use RAID. I'm making some good money on this too.

If someone thinks they are so smart that they know better, do it. The risk is outrageously high. The benefit is literally non-existent.

Like I said above, my brain is short circuiting while trying to figure out why someone would even *think* this is a good thing.
 

xcom

Contributor
Joined
Mar 14, 2014
Messages
125
cyberjock said:
No. Just no. This has been explained dozens of times...



I actually went back to check which of my systems is in production with RAID0... and only our developer system is set up with RAID0/JBOD. The production system is on pass-through.
 

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358
This is all about the fan speeds on the MicroServer... they aren't even that loud in AHCI mode. This should be a non-issue in the first place. Use AHCI and protect your data. Put the server somewhere the noise won't matter (though again, it isn't very loud... it's one of the quietest items in my rack).
 

RthuR

Cadet
Joined
May 7, 2014
Messages
6
Apologies, I really didn't mean to make it seem like I was planning on running FreeNAS with hardware RAID - your first posts were convincing enough, and my RAID controller is now in AHCI mode.
I was just trying to understand the reasoning / technical reasons behind this, purely from an educational point of view.

Thank you for all the great replies, and for taking the time to point me in the right direction.

Arthur
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I was just saying what I said for other readers that read this thread.
 