Couple Hardware questions

Status
Not open for further replies.

Stephen J

Dabbler
Joined
Feb 3, 2012
Messages
49
I am looking to put together a FreeNAS ZFS server for our company. In putting together a hardware list, I came to some questions.

The LSI SAS 9211-8i HBA looks like a great option, and LSI has a driver for FreeBSD 8.2.0, but it isn't listed at http://www.freebsd.org/releases/8.2R/hardware.html#DISK. Is it safe to assume that it will work?

Second question: I understand all the talk about needing a hoard of memory, but what about CPU? Will one 2.x GHz quad-core Intel Xeon like the E5606 do the job for a 12TB/12-disk array?

Third question: I figured I would use the Supermicro MBD-X8DTE-F-O motherboard. Anything I need to check to make sure it is compatible?

Thank you in advance,
Stephen
 

louisk

Patron
Joined
Aug 10, 2011
Messages
441
I would suggest you check out http://www.ixsystems.com/truenas if you are doing this for a company. They offer commercial support which will be invaluable when you have a problem and need to get things fixed ASAP.

That said, the answer to the CPU question will probably depend more on the activity the box will be handling. If it's a dozen people doing CIFS work, a 2GHz quad will probably suffice. If you're doing ESXi clusters, I'd probably get at least 2x 6-core. I would also suggest 24GB of RAM, and consider using an SSD for the ZIL and an SSD for the cache: a small SSD for the ZIL, mid to large for the cache. It will help performance quite a bit, especially if you are trying to push >900MB/s.

Supermicro is, in general, fine.
 

Stephen J

Dabbler
Joined
Feb 3, 2012
Messages
49
Thank you for the advice, Louis. We are going to be putting VMware VMs on it, but not heavily loaded. I was already planning on 24GB of RAM, so how about dual Xeon E5606s? I'm looking at using the X8DT3-LN4F-B, which has a built-in LSI 2008 SAS controller with 8 ports. I figured on using those 8 plus 4 from the ICH10R chipset to cover the 12 HDDs, with two left over for SSD caching. I was also looking at using an Areca ARC-1880ix-12, but I figure if I am using ZFS I shouldn't need it, and it is $$$$$.

Stephen
 

louisk

Patron
Joined
Aug 10, 2011
Messages
441
I would get a board that will take 2 CPUs, but I would probably start with only 1. Put in the second if you find that your NFS processes are consuming all the CPU.

Agreed that you don't need a fancy RAID controller, although we've had good luck with 3ware controllers, even w/o using the RAID aspects.
 

BobCochran

Contributor
Joined
Aug 5, 2011
Messages
184
I agree that if you are building a NAS for a company, you are better off getting an iXsystems product and their support. Hard drives are terrible about respecting cruise vacations or getaway trips and so forth. The moment of the disk crash is always totally wrong.

I agree with Louis as well that a dual-processor machine and 24 GB of memory are a good idea. I've built one based on the Asus KGPE-D16 board. One way to reduce costs is to stick to AMD Opteron processors.

I hope you post again with a review of how well your new system performs for you.

Bob
 

Stephen J

Dabbler
Joined
Feb 3, 2012
Messages
49
I will keep you updated. I have contacted iX Systems. We will see. They emailed me and I answered their questions. Haven't received a recommendation yet.
I am now looking at using the following
SC826E1-R800UB Supermicro case: redundant PSU, E-ATX, 12 x 3.5" drive bays, and support for 3 full-height PCI cards on edge, so I want to mount some redundant SSDs to the structure for the cache. It also has a built-in 28-port SAS expander.
MBD-X8DT3-LN4F-B Supermicro motherboard: dual CPU, 12 DIMM slots, LSI 1068E 8-port SAS controller, 4 GigE ports.
I am figuring on loading all 12 bays with 1TB Seagate Constellations (if they ever come). I haven't decided whether to use mirroring or RAIDZ2. Opinions? Do I understand correctly that you cannot mirror 2 RAIDZ arrays, i.e. two 6-disk RAIDZ1 arrays mirrored?
I will probably run at least one hot spare, either in a bay or maybe taped to the side. :)
With all those SAS ports available, I can use the Intel ICH10R for the cache disks, and have ports for a JBOD down the road if I need it.

Any problems with my train of thought up to this point?

Stephen
 

louisk

Patron
Joined
Aug 10, 2011
Messages
441
I don't believe you can make a vdev out of vdevs, so I think your mirror of RAIDZ is a no. If you want mirroring, I'd just create mirrors and add them to the pool so it stripes across them. Works fine.
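As a sketch, assuming placeholder device names (da0 through da5 are hypothetical, not from any actual build in this thread), the mirrors-in-one-pool layout looks like this:

```shell
# Sketch only; da0..da5 are placeholder device names.
# A pool built from two mirror vdevs: ZFS stripes writes
# across all top-level vdevs in the pool automatically.
zpool create tank mirror da0 da1 mirror da2 da3

# Growing the pool later just means adding another mirror vdev;
# the stripe then spans three vdevs.
zpool add tank mirror da4 da5
```

The stripe isn't something you configure separately; it falls out of having multiple top-level vdevs in the same pool.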

You could run 2x 5 spindle RAIDZ vdevs and still have a power of 2 for the data drives (conforming to best practice), this would give you the ability to have 2 hot spares. The downside is that you're losing 1/3 of the spindles to parity and spares, with 8T RAW. I'd probably opt for using 2T spindles and use mirrors, I think it will end up faster (you have more vdevs to stripe across).

Depending on how heavily you plan on hitting this box, the SSD cache may or may not make a difference. I would probably add those later if you determine that it would be effective. You should also consider adding a pair of SSDs (mirror) for the Log (ZIL). If you're doing a lot of NFS traffic (ESX), a fast ZIL will make a difference.
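Since log and cache devices can be attached to (and detached from) an existing pool, waiting to add them costs nothing. A sketch, again with placeholder device names (ada0 through ada2 are hypothetical):

```shell
# Sketch only; ada0..ada2 are placeholder SSD device names.
# Add a mirrored log (ZIL) device to an existing pool:
zpool add tank log mirror ada0 ada1

# Add an L2ARC cache device (no redundancy needed for cache):
zpool add tank cache ada2

# A cache device can simply be removed later if it proves ineffective:
zpool remove tank ada2
```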
 

BobCochran

Contributor
Joined
Aug 5, 2011
Messages
184
Thank you both for the discussion. I am new to NAS and I'm following this to add to my knowledge. Stephen, I hope you continue to post back so I can learn from your choices.

Bob
 

Stephen J

Dabbler
Joined
Feb 3, 2012
Messages
49
You could run 2x 5 spindle RAIDZ vdevs and still have a power of 2 for the data drives (conforming to best practice), this would give you the ability to have 2 hot spares. The downside is that you're losing 1/3 of the spindles to parity and spares, with 8T RAW. I'd probably opt for using 2T spindles and use mirrors, I think it will end up faster (you have more vdevs to stripe across).

Unfortunately, I did not follow you here. If I have two vdevs of 5 drives, you just said I can't mirror them. Could you please clarify? Especially what you mean by "power of 2 data drives."

I have 12 Constellations back ordered. With the magic of ZFS, would it be reasonably safe to buy some consumer drives and keep some spares? Seagate ST2000DM001s seem to be readily available. I'm not a huge Seagate fan but their availability seems better than WD.

Also, for the ZIL. It seems as though even 60Gigs of SSD is overkill but that is just about the smallest available these days. However, it seems the write speed would be a "more is better" scenario. There are some smaller SSDs out there but the smaller you go the slower the write. What is a good Write speed target?

Stephen
 

louisk

Patron
Joined
Aug 10, 2011
Messages
441
Unfortunately, I did not follow you here. If I have two vdevs of 5 drives, you just said I can't mirror them. Could you please clarify? Especially what you mean by "power of 2 data drives."

I have 12 Constellations back ordered. With the magic of ZFS, would it be reasonably safe to buy some consumer drives and keep some spares? Seagate ST2000DM001s seem to be readily available. I'm not a huge Seagate fan but their availability seems better than WD.

Also, for the ZIL. It seems as though even 60Gigs of SSD is overkill but that is just about the smallest available these days. However, it seems the write speed would be a "more is better" scenario. There are some smaller SSDs out there but the smaller you go the slower the write. What is a good Write speed target?

Stephen

If you have 2 vdevs with 5 spindles ea, you can either put them in the same pool (resulting in a stripe across both vdevs), or you can put them in separate pools.

Power of 2 data drives means that for a RAIDZ of 5 spindles, you have 4 data spindles, which is a power of 2. You could also have a 9-spindle RAIDZ, which would have 8 data spindles; again, a power of 2.
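Sketched out with placeholder device names (da0 through da11 standing in for the 12 bays), that 2x 5-spindle layout would look something like:

```shell
# Sketch only; da0..da11 are placeholder names for the 12 drive bays.
# Two 5-spindle RAIDZ1 vdevs in one pool: 4 data spindles each
# (a power of 2), striped across automatically.
zpool create tank raidz da0 da1 da2 da3 da4 raidz da5 da6 da7 da8 da9

# The two leftover drives become hot spares for the pool.
zpool add tank spare da10 da11
```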

I'm using a 32G SSD for a ZIL; it's overkill, but it was cheap and a ZeusRAM (http://www.stec-inc.com/product/zeusram.php) was not. Unless you're actually going to be pushing >1G of traffic, an SSD for a ZIL is more than sufficient.
 

b1ghen

Contributor
Joined
Oct 19, 2011
Messages
113

Also, for the ZIL. It seems as though even 60Gigs of SSD is overkill but that is just about the smallest available these days. However, it seems the write speed would be a "more is better" scenario. There are some smaller SSDs out there but the smaller you go the slower the write. What is a good Write speed target?

I have had my eyes on the Intel 311 "Larson Creek" 20GB SLC SSD for use as ZIL for a while now, haven't had the opportunity to try them out yet but on paper they look promising as ZIL drives.
 

Stephen J

Dabbler
Joined
Feb 3, 2012
Messages
49
Okay, so I have everything pretty much spec'd out, but I am stuck on the ZIL SSD. Fast SLC is very expensive (a 30GB Deneva 2 at 400+MB/s write is $350+), and then to mirror them? Ouch. There isn't much else out there in SLC that is small and fast without paying a ton. The Larson Creek only has a write speed of a little over 100MB/s, b1ghen, so it doesn't seem like a worthwhile option. If you were to use MLC flash drives and mirror them, can ZFS handle an error in that scenario? Fast is much cheaper in MLC. SanDisk just released some MLC SSDs: 120GB for $170 on Newegg, with max sequential reads up to 550 MB/s, max sequential writes up to 510 MB/s, 4KB random reads up to 23,000 IOPS, and 4KB random writes up to 83,000 IOPS.
Beyond that, once FreeNAS moves beyond ZFS v15, mirroring the ZIL won't be as necessary anymore. Whenever that happens.
I also have 2 Crucial C300 SSDs. In RAID 0 I get a read of ~700 MB/s. I am thinking about donating them to the cache, since I think I would rather have a single 500MB/s SSD in my computer and not have to be so paranoid about imaging it all the time.

I will post final specs when I have them; the cases and MBs are on order. The rest will hopefully follow suit tomorrow or early next week.

Stephen
 

louisk

Patron
Joined
Aug 10, 2011
Messages
441
1) Raw throughput is not the issue for the ZIL; IOPS is.
2) I don't see a reason why you wouldn't be able to mirror MLC devices.

What won't be necessary once FreeNAS moves beyond v15?
 

Stephen J

Dabbler
Joined
Feb 3, 2012
Messages
49
So, if IOPS are the key, how does that drive measure up? Probably the 83K is great, but with reads at 23K...

ZFS v19 introduced automatic ZIL failover to the zpool on failure of a standalone ZIL. The quote I got from iX actually had redundant OS SSDs and a single ZIL SSD: a 30GB OCZ Deneva 2 SLC SSD. It runs about $350 online, but it was still only a single unit, which means it can't be a critical component to data integrity. Still SLC, but I am guessing that is for IOPS' sake. The Deneva 2 is 55K write and 80K read IOPS, although I would guess that is in the larger capacities, maybe?

Stephen
 

Stephen J

Dabbler
Joined
Feb 3, 2012
Messages
49
Well, finding an SSD with decent read IOPS is like trying to find your wedding ring in the bottom of a murky pond. The Corsair Force series had me excited but suspicious: 85K write IOPS, but no read IOPS listed... anywhere. Finally, I found a thread on their forum where someone screamed loud enough that a tech finally posted them... after some blustering. 19K for the 120GB and 28K for the 240GB. What a disappointment. After looking at tons of different SSDs, I see why the Deneva may be the best option. No one has benchmarked the 30GB model. If I get one, I might do that for everyone's benefit.

Stephen
 

louisk

Patron
Joined
Aug 10, 2011
Messages
441
I would take a look at the wikipedia page (http://en.wikipedia.org/wiki/IOPS) and find out how SSD compares to your spinning disks. Once you understand the relative differences in performance, you can find the SSD which best fits your performance and budget.
 

Stephen J

Dabbler
Joined
Feb 3, 2012
Messages
49
So, I see that even a low-IOPS SSD clobbers HDDs. Let me see if I have the logic right. If I have a ZIL that can be read off to the zpool at 19K IOPS, and the data is being sent to an array of HDDs that can only write at ~100 IOPS per vdev... to put it another way, scrambling for extra SSD read IOPS is a total waste? Did I win?

Stephen
 

louisk

Patron
Joined
Aug 10, 2011
Messages
441
Win!

The other thing to keep in mind is that most people won't need a log device. I'd probably do some testing, and if things are slow, try adding a ZIL.
 

Stephen J

Dabbler
Joined
Feb 3, 2012
Messages
49
If you have 2 vdevs with 5 spindles ea, you can either put them in the same pool (resulting in a stripe across both vdevs), or you can put them in separate pools.

Okay, my first FreeNAS box is alive and I will post all the specs soon. I have a question. We talked about pools of vdevs, and I thought I knew how that worked, but now I am not sure. I have created 3 vdevs of 4 disks each in RAIDZ2. Now I am not seeing how the next layer works that gives the clients a single target that is a stripe of the vdevs. Can you enlighten me?

Thanks,
Stephen
 
