PCI-E SATA card

Status
Not open for further replies.

Leothlon

Cadet
Joined
Sep 8, 2014
Messages
5
Hi.
I am looking into building a NAS with FreeNAS, but I have some questions/concerns... If I use a PCI-E card with ~20 SATA connections and the card breaks, will I lose my entire zpool, or can I just replace the card and start it up again, since the drives are still intact?

With ~20 HDDs connected, if one breaks, how do I know which one? In what way does FreeNAS show which drive crashed?

I read that FreeNAS is made to run from a USB stick? What would happen if that USB drive crashes? Do I lose everything, or can I just install FreeNAS on it again?

I was planning to run RAID-Z3 (the one with three-drive redundancy?). I know this will make writes slow, but how does it affect read speeds?

Is three-drive redundancy "enough" for 20 × 4TB drives? What is the risk of losing four drives at the same time out of 20? I know it goes up for every year that passes, but say in three years or so: is the risk of losing a fourth drive before the first one has been replaced and rebuilt very large?

Also, I have read about UREs being a problem for large drives and rebuilds, but with ZFS this should not be a problem, right?

Thankful for any answers/ideas.

Best wishes
Leo
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194

Leothlon

Cadet
Joined
Sep 8, 2014
Messages
5
Thanks, but there are still some questions.

"RAIDZ1 should have the total number of drives equal to 2^n + 1 (i.e. 3, 5, 9, etc. drives for the vdev).
RAIDZ2 should have the total number of drives equal to 2^n + 2 (i.e. 4, 6, 10, etc. drives for the vdev).
RAIDZ3 should have the total number of drives equal to 2^n + 3 (i.e. 5, 7, 11, etc. drives for the vdev)."
and
"It is not recommended that vdevs contain more than 11 disks under any circumstances."

So I guess a 20-drive RAIDZ3 is out of the picture; I'll go for 2 × 10-disk RAIDZ2 instead, used as two separate zpools. Then if three drives crashed in the same vdev, I would at least only lose half my data...
I was planning on going with the Lian Li PC-D8000 (http://www.amazon.com/Lian-Li-PC-D8000-Aluminum-Computer/dp/B009FOXOOG), so 20 drives will be the max, sadly.
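For what it's worth, the 2^n + parity sizing rule quoted above is easy to tabulate; this is just arithmetic, nothing FreeNAS-specific:

```shell
# Print the recommended vdev widths (2^n + parity) for RAIDZ1/2/3.
for p in 1 2 3; do
  for n in 1 2 3; do
    printf "%d " $((2**n + p))
  done
  echo "drives for RAIDZ$p"
done
```

This prints 3/5/9 for RAIDZ1, 4/6/10 for RAIDZ2 and 5/7/11 for RAIDZ3, matching the guide.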

Also: "The USB stick is kept in a read-only mode to maximize the USB stick lifespan." But there is no info about what happens when the USB drive dies, and even a read-only USB stick has a rather short lifespan, if I remember correctly.

If I understood that noob guide correctly, with enough RAM (that works correctly), UREs should not be a problem, right?

But I found no information about the PCI-E SATA card.
Also, with 20 drives, how can I tell which one is broken? Will FreeNAS be able to tell which SATA port the drive is on with a PCI-E card?
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
If you lose a vdev in your pool you lose it ALL. Not the half in the vdev that failed. Your data gets striped across all of the vdevs in the pool.

FreeNAS will show you the serial of the drive that failed. If your enclosure blocks the label, add your own.
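A minimal sketch of chasing a failed disk down from the shell (the pool name "tank" and the device node are placeholders; yours will differ):

```shell
# Show pool health; a failed member shows up as FAULTED or UNAVAIL
zpool status tank

# Map a device node to a physical drive via its serial number,
# then match that serial against the label on the disk itself
smartctl -i /dev/ada3 | grep -i serial
```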

If your USB stick fails, you restore your backed-up config to a fresh one and continue. Or just re-import the pool and reconfigure, if settings are simple.
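As a rough sketch: on FreeNAS 9.x the whole configuration lives in a single sqlite database, so backing it up is just copying one file (the path below is the 9.x location and may differ on other versions; "tank" is a placeholder pool name):

```shell
# Copy the FreeNAS config DB somewhere safe on the pool
# (path as of FreeNAS 9.x; the GUI's "Save Config" does the same thing)
cp /data/freenas-v1.db "/mnt/tank/backups/freenas-config-$(date +%Y%m%d).db"
```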

UREs have nothing to do with RAM; they are unrecoverable read errors on the drive. The bottom line is that modern drives are so huge that the odds of hitting a bad sector while you resilver are significant. If no parity information is available for that error, rebuilding the pool will fail. So we run Z2 or better to ensure redundancy exists even in a degraded state while we rebuild.
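"Significant" is easy to put numbers on. A back-of-the-envelope sketch, assuming the common consumer-drive spec of one URE per 1e14 bits read, and a single-parity vdev where one of ten 4TB drives died, so all nine survivors must be read during resilver with no parity left to fall back on:

```shell
awk 'BEGIN {
  bits_read = 9 * 4e12 * 8          # 9 surviving 4TB drives, fully read
  p_ure     = 1e-14                 # consumer spec: 1 URE per 1e14 bits
  p_clean   = (1 - p_ure) ^ bits_read
  printf "chance of at least one URE: %.1f%%\n", (1 - p_clean) * 100
}'
```

Under those assumptions the resilver has roughly a 94% chance of hitting at least one URE, which is exactly why double or triple parity is recommended at this scale.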

If your PCIe controller fails, you can swap in a fresh one, or move the pool to another machine. ZFS is hardware agnostic; that is one of its primary advantages over hardware RAID.
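The controller swap or machine move boils down to ZFS's export/import cycle (the pool name "tank" is a placeholder):

```shell
# On the old machine, if it still boots, release the pool cleanly:
zpool export tank

# After moving the disks (the controller and port order don't matter to ZFS):
zpool import          # scan attached disks and list importable pools
zpool import tank     # import by name; add -f if it wasn't exported cleanly
```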

Keep reading; I'm not sure you have the basics covered yet.
 

Leothlon

Cadet
Joined
Sep 8, 2014
Messages
5
If you lose a vdev in your pool you lose it ALL. Not the half in the vdev that failed. Your data gets striped across all of the vdevs in the pool.
Yes, but I meant I would run two vdevs in two separate pools (one vdev for each pool). Then if I lost one vdev, I'd only lose the pool "connected" to it, whereas if I ran two vdevs in one pool I'd lose it all. So splitting the vdevs up into separate pools would match my needs better.

FreeNAS will show you the serial of the drive that failed. If your enclosure blocks the label add your own.
Ahh, sweet. That makes things easier ^^

If your USB stick fails, you restore your backed-up config to a fresh one and continue. Or just re-import the pool and reconfigure, if settings are simple.
OK, so I set it all up, configure everything, then make a backup of the FreeNAS config, and I'm good to go.

UREs have nothing to do with RAM; they are unrecoverable read errors on the drive. The bottom line is that modern drives are so huge that the odds of hitting a bad sector while you resilver are significant. If no parity information is available for that error, rebuilding the pool will fail. So we run Z2 or better to ensure redundancy exists even in a degraded state while we rebuild.
What I meant is that ZFS helps protect from UREs, right? And the RAM I was talking about: the guide really stresses the importance of a big chunk of good RAM (so 64GB of ECC RAM should be enough, right?).

If your PCIe controller fails, you can swap in a fresh one, or move the pool to another machine. ZFS is hardware agnostic; that is one of its primary advantages over hardware RAID.
Sweet. I was worried I'd lose my pools, the controller being a single point of failure.

Keep reading; I'm not sure you have the basics covered yet.
'Course, and I'll be testing stuff back and forth, making sure I know how to set everything up correctly (with warning emails and such) before I start to store my data. I just wanted a few of my worries sorted before I decided to commit to FreeNAS.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
ZFS protects your data from standard read errors due to bit rot, etc. It finds them during a scrub and automatically repairs them on access. Excess RAM is just used by ZFS as ARC, which will increase speed on cacheable reads. For random reads and one-time reads from a media server... not so much. But it is smart and will do its best.
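If you're curious what the ARC is actually doing, the counters are exposed as sysctls on a FreeBSD-based system like FreeNAS (a sketch; the OID names are from FreeBSD's ZFS kstats):

```shell
# Current ARC size in bytes, plus cumulative hit/miss counters
sysctl kstat.zfs.misc.arcstats.size
sysctl kstat.zfs.misc.arcstats.hits
sysctl kstat.zfs.misc.arcstats.misses
```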

I'm down for "more is better," but most people have a reason for jumping to 64GB of RAM. You're going to end up on an E5 and likely double the cost of a standard E3 or Avoton build. 16GB is the sweet spot for most home-user loads. If you are serving up many VMs or something interesting, the more the merrier. But streaming and largely static data doesn't need that much firepower; 1GbE is pretty easy to saturate.

You've given no clue as to intended use besides a big case, so there is no way to judge hardware requirements.
 

Leothlon

Cadet
Joined
Sep 8, 2014
Messages
5
Oh, OK. I just read that the guide recommended 1GB of RAM per 1TB of data or so :)

"For most home users just sharing some files and perhaps some plugins/jails, 16GB of RAM is an excellent place to start. By far most home users will have a stable and fast FreeNAS server with 16GB of RAM."
I read that, but I figured most "home users" did not count 60+ TB NAS servers :P

The NAS will mainly be a media server, but I think I will also set up a bit of it for my brother to store "off-site" backups from his NAS. So, as you said, mainly random reads and one-time reads.
Write speeds are not as important as read speeds, and even those don't need to be the best of the best :P

As for my data safety-to-cost balance, I'd say my data would suck to lose, but not so much that it would kill me :) So two- or three-drive redundancy should be enough.

That's the only case I found that had plenty of drive slots :)
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
1GB per TB is a nice rule of thumb, but it doesn't scale all that well and is load dependent. Media loads for a small number of users are a case where the data is static, reads are sequential, and you are limited by your NIC. Pretty modest resources can pull that off.

I kinda like that case; it's pretty. But it has no hot-swap, and with that many drives and no backplanes, you are looking at a potential cabling mess. Even a Norco 4224 might serve you better, or a different, smaller rack-style case. But that's all personal preference.
 

Leothlon

Cadet
Joined
Sep 8, 2014
Messages
5
Well, I guess starting with 16 or 32GB of RAM could be nice too.
And seeing as I will be using two vdevs with 10 drives each, that allows me to buy everything on two occasions, so it won't be all too expensive in one go :P
I could go with 10 drives and 16/32GB of RAM, then later on add another 10 drives and, if needed, more RAM.

Yeah, the case looks nice, but I just found it by going to a price-comparison site and filtering on drive slots; this was the only one I found that had enough. I looked at rack cases now and they cost about 1500 euro here in Sweden, while the Lian Li costs around 300.
The Norco 4224 seems pretty sweet, though. I will definitely start to look for shops that deliver to Sweden without costing way too much.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Another way to go might be two Silverstone DS380 cases, if they are available. Only eight drives each... but cheap, small, and hot-swap. Pair them with an Avoton and 6TB drives and you are looking at 48TB raw per case, with space for SSDs as well. That is a TON of space to grow into. Add a second when you fill the first and you have a backup target, twice the bandwidth, RAM, etc.

I'd grab the Lian Li all day if the only alternatives were 5x the dough. I have similar trouble finding local parts (Canada)... but not quite that bad.
 