What to do, hardware/software FreeNAS or other?

Status
Not open for further replies.

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
But back to the original idea, then. I have ordered an LSI SAS 9207-8i card, but can this be mixed with a Dell H310 card, so that I could get a total of 16 ports ready? Or should I find another LSI SAS 9207-8i card to match?
Yes, you can run both an LSI and a Dell card in your system, provided you have enough PCIe slots and both are flashed with the appropriate IT firmware. And, since you mentioned virtualizing FreeNAS, you will need to pass the controllers through to FreeNAS via VT-d.
All will be in the same zpool, but I think the best for me would be to run 3 disks in each vdev, where 1 disk is the redundant disk.
(FWIW, it's called a 'pool', not a 'zpool': zpool is a ZFS command.) Not sure what you mean by this... do you mean a RAIDZ1 vdev? A 3-way mirror? For a VM datastore, mirrors are the best topology because they deliver the most IOPS. With as many disks as you plan on using, you could set up two pools: a smaller pool of mirror vdevs, perhaps SSD-based, to serve as a VM datastore, and a larger RAIDZ2 pool for general-purpose storage.
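For example, creating the two pools might look something like this (a sketch only, with hypothetical da0-da9 device names; substitute your actual disks):

zpool create vmpool mirror da0 da1 mirror da2 da3 # striped SSD mirrors for the VM datastore
zpool create tank raidz2 da4 da5 da6 da7 da8 da9 # RAIDZ2 pool for general-purpose storage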
 
Joined
Jul 19, 2016
Messages
72
There are enough slots, so that should be OK; the LSI will run the P20 IT firmware, so I think that should be good. But a problem might arise when, say, 1 drive is on one card and the other 2 drives are on the other card. But since both should be passed through and both should support 6 Gb/s, maybe it doesn't matter. I'm not sure there is a good driver for this Dell H310 card, though, so I will have to research that first.

I was thinking of RAIDZ1 vdevs containing 3 disks each. So each time I upgrade, I will have to add 3 disks to make a full vdev.
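Roughly like this, I imagine (made-up device names):

zpool create tank raidz1 da0 da1 da2 # the first 3-disk vdev
zpool add tank raidz1 da3 da4 da5 # each later upgrade adds another 3-disk vdev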

There will not be any VM datastore on this, as VMware will run from its own SSD that will contain all the VMs. The purpose of the FreeNAS box is only to store bulk data, not to run VMs or anything like that.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
There are enough slots, so that should be OK; the LSI will run the P20 IT firmware, so I think that should be good. But a problem might arise when, say, 1 drive is on one card and the other 2 drives are on the other card. But since both should be passed through and both should support 6 Gb/s, maybe it doesn't matter. I'm not sure there is a good driver for this Dell H310 card, though, so I will have to research that first.

I was thinking of RAIDZ1 vdevs containing 3 disks each. So each time I upgrade, I will have to add 3 disks to make a full vdev.

There will not be any VM datastore on this, as VMware will run from its own SSD that will contain all the VMs. The purpose of the FreeNAS box is only to store bulk data, not to run VMs or anything like that.
There's no problem with having drives in a pool being run by different controllers.

RAIDZ1 isn't recommended for 'large drives', i.e., 1TB or larger, so you might want to consider a different topology if you plan on using large-capacity HDDs. For example, I use 7 x 2TB HDDs in a RAIDZ2 pool on my main system.
 
Joined
Jul 19, 2016
Messages
72
There's no problem with having drives in a pool being run by different controllers.

RAIDZ1 isn't recommended for 'large drives', i.e., 1TB or larger, so you might want to consider a different topology if you plan on using large-capacity HDDs. For example, I use 7 x 2TB HDDs in a RAIDZ2 pool on my main system.

I was first thinking of RAIDZ2 with 8 x 3/4TB disks. But that isn't good, as it is also too big. So I was thinking more along the lines of 2 vdevs with 4 disks each on RAIDZ1. But now I'm thinking more like 3 disks per vdev running RAIDZ1.

3 vdevs with 9 disks would then have 3 redundancy disks, and if one fails, only 1 "minor" vdev would need a rebuild, not the entire pool, as would be the case if I had run 8 disks in 1 vdev with 2 disks of redundancy.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
I was first thinking of RAIDZ2 with 8 x 3/4TB disks. But that isn't good, as it is also too big. So I was thinking more along the lines of 2 vdevs with 4 disks each on RAIDZ1. But now I'm thinking more like 3 disks per vdev running RAIDZ1.
Too big? What do you mean? There's no problem using an 8-disk RAIDZ2 vdev to create your pool. The old "power-of-two data disks plus parity" design rules no longer apply.

3 vdevs with 9 disks would then have 3 redundancy disks, and if one fails, only 1 "minor" vdev would need a rebuild, not the entire pool, as would be the case if I had run 8 disks in 1 vdev with 2 disks of redundancy.
"A chain is only as strong as its weakest link." Your topology design sacrifices 3 of 9 drives to parity... but if you lose 2 drives in any single vdev you will lose the entire pool! In an 8 (or 9) drive RAIDZ2 pool, you can lose any 2 drives and still not lose your pool, with the added benefit of not using as many drives for parity. The RAIDZ2 design is safer. If you're really paranoid about drive failure, use a 9-drive RAIDZ3 topology, which will allow for failure of any three drives without pool loss.

I suggest you study these threads before proceeding with your design:

https://forums.freenas.org/index.php?threads/zfs-primer.38927/
https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/
https://forums.freenas.org/index.php?threads/comprehensive-diagram-of-the-zfs-structure.38865/
 
Joined
Jul 19, 2016
Messages
72
Yes, in one way it's not big, but then again it is big. After reading that "RAID 5 is dead" article, it seems that vdevs over 12.5 TB have a bigger chance of failure, since if 1 disk fails there is a much bigger chance that a second disk will fail too during the rebuild. But scaling back to several vdevs that stay under this limit gives a better chance of surviving a rebuild, and the rebuild also takes less time. And, from the look of it, more speed too.
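(The back-of-the-envelope math behind that 12.5 TB figure, assuming the commonly quoted consumer-drive spec of 1 unrecoverable read error per 10^14 bits read: 10^14 bits / 8 = 1.25 x 10^13 bytes = 12.5 TB, so a rebuild that has to read about 12.5 TB expects, on average, one unrecoverable read error somewhere along the way.)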
 
Joined
Jul 19, 2016
Messages
72
Chassis: Supermicro CSE 846TQ
Backplane: SAS846TQ rev. 3.1
Motherboard: H8DME-2 2.01A
Processor: 1X Quad-Core AMD Opteron 1.8GHz 2346 HE
RAM: 16GB 8x2GB DDR2 ECC Registered
Power Supplies: 2 x 900 W
HD Controller: 3x Supermicro SAT2-MV8
IPMI: SIM1U+ with AOC-USB2RJ45

Did some more digging on this, as price-wise this server costs about $150 more than what 4 hot-swap backplanes (3 x 4) would cost to mount in my current server. And then I would not need to run it under VMware either. And I would be able to expand more.

Yes, the HD controllers are only rated for 3 Gb/s, but that is about 300 MB/s per drive, and I will not run any SSDs, only WD Reds.
The question then is: is there any big difference in a setup between 3 Gb/s and 6 Gb/s controllers?

If there is, I can mount other 6 Gb/s controllers instead; there are 2 x PCIe x8 (1.1) slots on the motherboard, so I should be able to attach at least 16 disks, which is way more than I will need for the next 5 years anyway.

Also, PCIe 3.0 vs 2.0 vs 1.1:
Data rate (per lane): PCIe 3.0 ≈ 1000 MB/s, PCIe 2.0 = 500 MB/s, PCIe 1.1 = 250 MB/s
Total bandwidth (x16 link, both directions): PCIe 3.0 = 32 GB/s, PCIe 2.0 = 16 GB/s, PCIe 1.1 = 8 GB/s

So maybe the board is too old, too...
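Although, doing the rough math (assuming a WD Red manages around 150 MB/s sequential):

PCIe 1.1 x8 = 8 x 250 MB/s = 2 GB/s per direction per slot
8 drives x 150 MB/s = 1.2 GB/s

So the old slots themselves shouldn't be the bottleneck for spinning disks.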

My goal is transfer rates of around 100-200 MB/s, if possible.
The seller says it supports disks of up to 10 TB, so that should not be a problem either.

Second: will the CPU be enough? And 16 GB of memory doesn't seem to be enough, so I would probably need more.

Someone earlier said it would be power-hungry, but I don't care much about that.

What do people think?
 
Joined
Jul 19, 2016
Messages
72
I found another Supermicro powerhouse.

Chassis: Supermicro CSE846TQ-R900 (24 bay hotswap)
Mainboard: Supermicro X8DTE
CPU: 2x Xeon x5560
RAM: 48GB ECC Registered DDR3
HD Controller: 3 x Dell H310
Network: 2 x Intel® 82574L Gigabit Ethernet Controllers

All for just under $1,000.

From the look of it, the Dell H310 will be fast enough and will work with FreeNAS. The Intel 82574L also seems compatible.

So, is this a solution I should go for? Also, this server will then only run FreeNAS, not VMware or anything else.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
That should be adequate. I'm not a fan of the -TQ backplanes, for reasons that have already been discussed, but they certainly work. You'll want to flash the correct firmware on the H310s.
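Broadly, that's done with LSI's sas2flash utility from a DOS or EFI boot environment (a sketch only; crossflashing an H310 to the 9211-8i IT firmware normally requires wiping the Dell firmware first, 2118it.bin here stands for the 9211-8i IT firmware image, and the card's SAS address is printed on a sticker on it):

sas2flash -listall # confirm the controller is visible
sas2flash -o -f 2118it.bin # flash the LSI IT firmware image
sas2flash -o -sasadd 500605bxxxxxxxxx # restore the card's SAS address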
 
Joined
Jul 19, 2016
Messages
72
That should be adequate. I'm not a fan of the -TQ backplanes, for reasons that have already been discussed, but they certainly work. You'll want to flash the correct firmware on the H310s.

Yeah that's the idea. I will pick it up today.

I need to make 2 vdevs, as I have some data on 3 x 3TB disks. And I have a total of 8 disks now. So: 2 vdevs with 4 disks each, running RAIDZ1.
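My rough plan (made-up device names):

zpool create tank raidz1 da0 da1 da2 da3 # first vdev on 4 of the empty disks
(copy the data off the 3 old 3TB disks into the pool)
zpool add tank raidz1 da4 da5 da6 da7 # then wipe them and add the second vdev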
 
Joined
Jul 19, 2016
Messages
72
OK, so I got this setup now:

Chassis: Supermicro CSE846TQ-R900 (24 bay hotswap)
Mainboard: Supermicro X8DTE
CPU: 2x Xeon x5560
RAM: 48GB ECC Registered DDR3
HD Controller: 3 x Dell H310
Network: 2 x Intel® 82574L Gigabit Ethernet Controllers

Now I want to run some hard drive tests to be sure there are no errors on them before I start. I found Hiren's BootCD, which I can boot from a USB stick. But there are so many tools to choose from; which one should I use and trust?

http://www.hiren.info/pages/bootcd
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
OK, so I got this setup now:

Chassis: Supermicro CSE846TQ-R900 (24 bay hotswap)
Mainboard: Supermicro X8DTE
CPU: 2x Xeon x5560
RAM: 48GB ECC Registered DDR3
HD Controller: 3 x Dell H310
Network: 2 x Intel® 82574L Gigabit Ethernet Controllers

Now I want to run some hard drive tests to be sure there are no errors on them before I start. I found Hiren's BootCD, which I can boot from a USB stick. But there are so many tools to choose from; which one should I use and trust?

http://www.hiren.info/pages/bootcd
Quite a few of the forum members use the badblocks utility in combination with SMART testing, as described here:

https://forums.freenas.org/index.php?threads/how-to-hard-drive-burn-in-testing.21451/
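The short version of that procedure (hypothetical device name ada0; note that badblocks -w destroys everything on the drive, so only run it on empty disks):

smartctl -t short /dev/ada0 # quick SMART self-test first
badblocks -b 4096 -ws /dev/ada0 # destructive 4-pass write/read-back surface test
smartctl -t long /dev/ada0 # full-surface SMART self-test afterwards
smartctl -a /dev/ada0 # then review the SMART error counters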
 