hardware suggestions

Status
Not open for further replies.

zio_mangrovia

Dabbler
Joined
Dec 27, 2012
Messages
20
I'm reading documentation for my future FreeNAS 8 server and I'd like your suggestions on what hardware to buy; my budget is 700-800 euro.
I was thinking of hardware RAID 1 (mirroring), but I'm unsure whether to use a motherboard with an integrated RAID controller (most AMD/Opteron systems have this feature) or a PCI-e add-on controller such as LSI, Adaptec, or MegaRAID. Do you have any experience with these?
I was thinking of buying 2 × 1 TB 10K SAS hard disks and 8 GB of RAM; what do you think about SATA disks instead?
For the mainboard I'm not sure, but I'm leaning toward a 64-bit AMD platform; what do you think? A cheaper mainboard?
I saw that Sandy Bridge and Ivy Bridge are not supported?!
My file system will be ZFS, and I want a quiet server; that is my number one requirement.

FreeNAS 8 - ZFS
RAM: 8 GB
1 × SAS PCI-e controller
2 × 1 TB 10K SAS hard disks
AMD 64-bit ...
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
You want to use ZFS, but you want hardware RAID? Those two don't go together. Just get something with a lot of ports and plug in your drives; let ZFS handle the rest.
 

zio_mangrovia

Dabbler
Joined
Dec 27, 2012
Messages
20
You want to use ZFS, but you want hardware RAID? Those two don't go together. Just get something with a lot of ports and plug in your drives; let ZFS handle the rest.

Ok, I read it.
If I would have to buy a RAID controller just to run it in JBOD mode, is it better to use the disk controller on the motherboard instead? Is that right?
I have no experience recovering data from a ZFS file system; e.g., if my NAS hardware dies, can I read the data from a Windows/Linux machine?
Otherwise I'll use UFS...
My requirement is performance, but if I use a minimal ZFS RAID layout (e.g. RAIDZ1), won't I get slow write speeds? ZFS is software RAID!
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
Why does anyone think that 'software raid' is slower than hardware raid?

You realize that the CPU and RAM on a built-out box are faster and more powerful than a RAID card, right?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'll tell you exactly why everyone thinks software raid is slower. Because in Windows, it is slow.

Back in 2005, when I was in the military, we bought (without my consultation, btw) a 300GB SAN. It had 4 100GB drives in a software RAID5. We didn't migrate any data to it. We basically turned it on and started making new shares.

Well, after we had about 200GB on this sucker we realized there was a problem. Considering the system hadn't been defragmented before and had been in use for 2 years I decided to do a defrag. Downtime was not an option so I opted to start a defrag on Friday at the end of the workday to minimize the number of files that would be open and therefore not defragmented. When I came in Monday morning the server was at about 30% complete. I was shocked. After further testing I figured out that the machine (which had a Pentium 4 Xeon I believe) couldn't even read data at 1.5MB/sec. There's no typo there.

Needless to say, we had found the cause of our performance issues. It was easy to determine that the problem was that software RAID in Windows is just absolutely horrible for performance. I knew several friends between then and 2009 who used software RAID, and it did get faster as CPUs got faster, but not enough that I'd ever consider using it for any situation. So it was pretty ignorant of me to assume all software RAID must suck; when I first read about ZFS and saw "software RAID" I thought, "OMG, the horror... not a chance I'll trust it for performance." Now I think ZFS is definitely a very good option. Of course, ZFS is actually pretty darn smart and fast. You can't throw in a 5+ year old CPU and expect amazing performance, but it definitely smokes Windows software RAID.

At the time I had a home server with less powerful hardware and a hardware RAID controller and I could saturate Gb LAN. So when I compare hardware to software I know who sucked horribly. I also think this is precisely the reason why hardware RAID companies can charge so darn much for controllers. People are convinced they are necessary. I definitely wouldn't buy stock in the RAID companies until they come up with some kind of hardware solution that rivals ZFS' reliability. I think they are in real hurt-mode if they can't figure out how to fix silent corruption.
 

zio_mangrovia

Dabbler
Joined
Dec 27, 2012
Messages
20
I'll tell you exactly why everyone thinks software raid is slower. Because in Windows, it is slow.

Back in 2005, when I was in the military, we bought (without my consultation, btw) a 300GB SAN. It had 4 100GB drives in a software RAID5. We didn't migrate any data to it. We basically turned it on and started making new shares.

Well, after we had about 200GB on this sucker we realized there was a problem. Considering the system hadn't been defragmented before and had been in use for 2 years I decided to do a defrag. Downtime was not an option so I opted to start a defrag on Friday at the end of the workday to minimize the number of files that would be open and therefore not defragmented. When I came in Monday morning the server was at about 30% complete. I was shocked. After further testing I figured out that the machine (which had a Pentium 4 Xeon I believe) couldn't even read data at 1.5MB/sec. There's no typo there.

Needless to say, we had found the cause of our performance issues. It was easy to determine that the problem was that software RAID in Windows is just absolutely horrible for performance. I knew several friends between then and 2009 who used software RAID, and it did get faster as CPUs got faster, but not enough that I'd ever consider using it for any situation. So it was pretty ignorant of me to assume all software RAID must suck; when I first read about ZFS and saw "software RAID" I thought, "OMG, the horror... not a chance I'll trust it for performance." Now I think ZFS is definitely a very good option. Of course, ZFS is actually pretty darn smart and fast. You can't throw in a 5+ year old CPU and expect amazing performance, but it definitely smokes Windows software RAID.

At the time I had a home server with less powerful hardware and a hardware RAID controller and I could saturate Gb LAN. So when I compare hardware to software I know who sucked horribly. I also think this is precisely the reason why hardware RAID companies can charge so darn much for controllers. People are convinced they are necessary. I definitely wouldn't buy stock in the RAID companies until they come up with some kind of hardware solution that rivals ZFS' reliability. I think they are in real hurt-mode if they can't figure out how to fix silent corruption.

Ok, thank you for your words.
I thought hardware RAID was faster than software RAID because of the dedicated chip on the controller card that copies the data to the disks; the OS is free to do other work because that load is on the controller.
So if I want to configure RAIDZ in ZFS with the minimal number of disks, do you suggest RAIDZ1? Is that OK?
At this point my doubt is: if I have a problem with a disk and I want to mount it on another machine (e.g. Linux), would I lose that flexibility with ZFS?
What do you think?
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
You would lose the flexibility to just mount it on another system, but why would you need to? If your FreeNAS thumb drive fails, or your hardware (not the drives) fails, you simply plug the drives into a new machine, load up a new FreeNAS USB stick, import your config, and you are set.
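From the command line, the move described above boils down to ZFS's export/import mechanism. A minimal sketch, assuming a hypothetical pool named tank (on FreeNAS you would normally do this through the web GUI's volume auto-import instead):

```shell
# On the old machine (if it still boots), cleanly release the pool:
zpool export tank

# On the new machine, list the pools visible on the attached disks:
zpool import

# Import the pool by name; -f forces it if it was not cleanly exported:
zpool import -f tank

# Verify that all disks were found and the pool is healthy:
zpool status tank
```

The pool metadata lives on the disks themselves, which is why the controller and motherboard underneath don't matter as long as the OS can see the drives.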
 

whiskeyjack

Dabbler
Joined
Feb 17, 2013
Messages
24
If your only requirement is a silent machine, you should not have too much difficulty with your budget. But with that budget I would get more capacity, skip the whole hardware RAID setup, and go for ZFS.

I have only been researching my new build for the past few weeks, but from what I've learned you don't need 10K drives performance-wise, and I've settled on the Western Digital Red line.

Also, there is silent and then there is silent. You could add an aftermarket cooler if space permits, but the machine should be fairly quiet to begin with. Or use an Atom build (although I'm not sure what performance they give). You could also make sure your case has dampening rings or similar to reduce hard-drive vibrations, or even put the drives in special noise-reducing enclosures. Also pay attention to case-fan and PSU noise.

My prospective build is here and comes to about €800 in total. It gives 6 TB of storage with RAIDZ1 (so one drive can fail without losing data). But my specific requirement was "small", so you will have a lot more options if you can go larger.
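For reference, a RAIDZ1 layout like the one above can be created by hand roughly as follows. This is only a sketch: the pool name tank and the FreeBSD device names ada0-ada3 are placeholders, and on FreeNAS you would normally create the volume from the web GUI rather than the shell:

```shell
# Create a single-parity RAIDZ1 pool from four disks;
# usable capacity is (n-1) disks, and any one disk may fail.
zpool create tank raidz1 ada0 ada1 ada2 ada3

# Confirm the layout and health of the new pool:
zpool status tank
```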
 

zio_mangrovia

Dabbler
Joined
Dec 27, 2012
Messages
20
You would lose the flexibility to just mount it on another system, but why would you need to? If your FreeNAS thumb drive fails, or your hardware (not the drives) fails, you simply plug the drives into a new machine, load up a new FreeNAS USB stick, import your config, and you are set.

I was thinking of this because in an emergency it's easy to find a Windows machine, and it's also my curiosity... Indeed, with the USB stick I can recover my FreeNAS and its configuration, but does this depend on the new hardware? If I move my hard disks into another server with a different disk controller and a different mainboard (but of course the same disk interfaces!), do I have to worry?
 

zio_mangrovia

Dabbler
Joined
Dec 27, 2012
Messages
20
If your only requirement is a silent machine, you should not have too much difficulty with your budget. But with that budget I would get more capacity, skip the whole hardware RAID setup, and go for ZFS.
Can I trust ZFS? On an Italian forum some people advised me against it.

I have only been researching my new build for the past few weeks, but from what I've learned you don't need 10K drives performance-wise, and I've settled on the Western Digital Red line.

That's good!


My prospective build is here and comes to about €800 in total. It gives 6 TB of storage with RAIDZ1 (so one drive can fail without losing data). But my specific requirement was "small", so you will have a lot more options if you can go larger.

I see an i3 processor, but it isn't listed in the BSD hardware support list! Why this choice?
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
As @pirateghost said, if your hardware fails, you can move it to another machine without worry.

Last year, my home-built rig started suffering kernel panics on startup. With my spare time being a precious commodity, I bought an HP MicroServer. I installed the latest FreeNAS OS on a thumb drive, added the hard disks, "imported the volume", did a "scrub", and restored my configuration file. The only thing I needed to fix was my NIC - it was different from the last one.
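The steps above map onto a few ZFS commands. A rough sketch, assuming a hypothetical pool named tank (the volume import and config restore are also available in the FreeNAS web GUI):

```shell
# Import the existing pool on the replacement hardware:
zpool import -f tank

# Start a scrub, which re-reads every block and verifies it
# against its checksum, repairing from parity where needed:
zpool scrub tank

# Watch scrub progress and overall pool health:
zpool status -v tank
```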





If I move my hard disks into another server with a different disk controller and a different mainboard (but of course the same disk interfaces!), do I have to worry?
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
FreeNAS, like Linux, does not care what it's connected to. As long as the hardware itself is supported by FreeNAS, you have nothing to worry about. It is not like Windows, where if you change nothing more than the motherboard, all it does is boot to a blue screen :P

You can actually take your USB thumb drive, put it in any machine, and boot it up. It doesn't care what it's attached to. ;)
 

zio_mangrovia

Dabbler
Joined
Dec 27, 2012
Messages
20
FreeNAS, like Linux, does not care what it's connected to. As long as the hardware itself is supported by FreeNAS, you have nothing to worry about. It is not like Windows, where if you change nothing more than the motherboard, all it does is boot to a blue screen :P

You can actually take your USB thumb drive, put it in any machine, and boot it up. It doesn't care what it's attached to. ;)

So you suggest ZFS RAIDZ1 with 3 disks? Or UFS?
 