BUILD My 1st Server Build Ever

Status
Not open for further replies.

Arman

Patron
Joined
Jan 28, 2016
Messages
243
Hi.
I am not far from placing my order for all the parts I need. I would like to list the main parts I am planning to purchase so that people can take a look and comment with any thoughts they have before I go ahead and order.

Motherboard: Supermicro X11SSH-LN4F
CPU: Xeon E3-1275 v5
RAM: Samsung 8GB DDR4 PC4-17000 288p UDIMM M391A1G43DB0-CPB DR480L-SL01-EU21
HDD: Eight 1TB WD Reds
Enclosure: Fractal Design Define R5
Monitor: http://www.amazon.co.uk/dp/B00LA07RH8
Power Supply: EVGA 220-G2-0850-XR

I have already checked everything, but I would really appreciate it if people could check it again just in case and tell me any thoughts they have. It's going to cost around £1000 in total, so I need some confidence...
 

KJaneway

Dabbler
Joined
Jan 15, 2016
Messages
12
Hiho,

maybe you should specify what you want to do with your new server. Is it just a box for streaming and data storage, or will there be heavy VM load? If yes, what kind of application do you want to run on the server?

Without that info I would say:
MOBO: Good choice!
CPU: If a Xeon is a must: E3-1230; if not, a Pentium G4400 or any i3 should be more than enough for any storage solution.
RAM: Why only 8 GB? Take at least 16 or, better, 32 GB of RAM (in 16 GB modules, so that you can easily put in more RAM if required).
HDD: Why only 1 TB drives? The best bang per buck is the 3 TB WD Red, which is around €110 here in the EU, whereas the 1 TB is around €60.
And why 8 drives? What RAIDZ configuration do you plan to use? The optimal number of drives depends on that.

Why do you need a monitor? Your mainboard has IPMI. That means: plug in a LAN cable, enter the assigned IP in a browser, log in to the web interface, and go to Remote Console, and you have your server's live screen on any PC or laptop. In home use, the idea behind IPMI is to get rid of the monitor and even the need to access your server physically.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
You don't need a monitor at all for a FreeNAS box (you can use IPMI for the rare (after initial install) case where you need console access), so that would save you a bit. The CPU you've chosen is almost certainly overkill for your needs, though you don't state what they are, so it's hard to evaluate for sure. The xxx5 parts generally have built-in graphics that are completely unnecessary for a server.

I'd recommend fewer and larger hard drives--buying 1 TB drives today just seems like a waste.
 

Arman

Patron
Joined
Jan 28, 2016
Messages
243
Hiho,

maybe you should specify what you want to do with your new server. Is it just a box for streaming and data storage, or will there be heavy VM load? If yes, what kind of application do you want to run on the server?

Without that info I would say:
MOBO: Good choice!
CPU: If a Xeon is a must: E3-1230; if not, a Pentium G4400 or any i3 should be more than enough for any storage solution.
RAM: Why only 8 GB? Take at least 16 or, better, 32 GB of RAM (in 16 GB modules, so that you can easily put in more RAM if required).
HDD: Why only 1 TB drives? The best bang per buck is the 3 TB WD Red, which is around €110 here in the EU, whereas the 1 TB is around €60.
And why 8 drives? What RAIDZ configuration do you plan to use? The optimal number of drives depends on that.

Why do you need a monitor? Your mainboard has IPMI. That means: plug in a LAN cable, enter the assigned IP in a browser, log in to the web interface, and go to Remote Console, and you have your server's live screen on any PC or laptop. In home use, the idea behind IPMI is to get rid of the monitor and even the need to access your server physically.
I will be using it for streaming, data storage, and sometimes as a workstation. (Please don't ask how I would also use it as a workstation; I've already had a long discussion about it in another thread. If you're interested, I can send you the link to that thread.)

Also, I'm confused about VMs on FreeNAS. How does it work? Can I, for example, install Windows as a VM on FreeNAS? Please give me an example.

8 GB of RAM because 1 GB is required per TB of storage, right?

8 drives because I will be using RAIDZ3.

You're right, I'll drop the monitor.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I'm confused about VMs on FreeNAS. How does it work?
Pretty much like anywhere else, though it's headless. Install a VirtualBox jail. Using your web browser, browse to the address of that VirtualBox jail. Create a virtual machine, and install your desired OS. There's web access to the VM's console, but I find it easier and more reliable to just use a VNC client (or RDP, in the case of a Windows VM) when I need/want to have console access to the VM.
8 GB of RAM because 1 GB is required per TB of storage, right?
No, 8 GB is the minimum required RAM. If you're going to be running Plex or VMs on the box, you should plan to bump up to at least 16 GB.
8 drives because I will be using RAIDZ3.
Why are you planning to use RAIDZ3, and why do you connect that to using 8 drives?
 

Arman

Patron
Joined
Jan 28, 2016
Messages
243
Pretty much like anywhere else, though it's headless. Install a VirtualBox jail. Using your web browser, browse to the address of that VirtualBox jail. Create a virtual machine, and install your desired OS. There's web access to the VM's console, but I find it easier and more reliable to just use a VNC client (or RDP, in the case of a Windows VM) when I need/want to have console access to the VM.

No, 8 GB is the minimum required RAM. If you're going to be running Plex or VMs on the box, you should plan to bump up to at least 16 GB.

Why are you planning to use RAIDZ3, and why do you connect that to using 8 drives?
Thanks for the info about the VMs.

Alright then, I guess I will be going for 16 GB.

I am planning to use RAIDZ3 because I want more redundancy.
What do you mean by "why do you connect that to using 8 drives?"?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
What do you mean by "why do you connect that to using 8 drives?"?
Pretty much what I said--you said "8 drives because I will be using RAIDZ3", and I don't see any causal connection between those two things.

Eight 1 TB drives in RAIDZ3 will give you ~5 TB, or 4.5 TiB, of net capacity, of which a little over 3.5 TiB will be usable when accounting for the recommendation to not fill your pool to more than 80%. Accepting @KJaneway's numbers, it will cost 480 Euros. That pool will tolerate the complete failure of up to three drives without data loss. In comparison, four 3 TB drives in RAIDZ2 will give you ~6 TB, or 5.4 TiB, of net capacity, of which about 4.5 TiB will be usable. It will cost 40 Euros less, use less power, take less space in your chassis, and still tolerate the total failure of any two drives without data loss.
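The capacity arithmetic in the paragraph above can be sketched in a few lines of Python (a rough sketch: TB here is 10^12 bytes, TiB is 2^40 bytes, the 0.8 factor is the forum's "don't fill past 80%" guideline, and ZFS metadata overhead is ignored):

```python
# Rough sketch of the capacity comparison above (ignores ZFS metadata
# overhead; the 0.8 factor is the "don't fill past 80%" guideline).
TB = 10**12        # drive makers' terabyte
TiB = 2**40        # binary tebibyte

def usable_tib(drives, drive_tb, parity, fill=0.8):
    """Approximate usable space of a single RAIDZ vdev, in TiB."""
    net_bytes = (drives - parity) * drive_tb * TB
    return net_bytes / TiB * fill

print(f"8 x 1 TB RAIDZ3: {usable_tib(8, 1, parity=3):.2f} TiB usable")  # ~3.6
print(f"4 x 3 TB RAIDZ2: {usable_tib(4, 3, parity=2):.2f} TiB usable")  # ~4.4
```

The numbers line up with the figures quoted above: a little over 3.5 TiB for the eight-drive RAIDZ3 layout versus about 4.5 TiB for the four-drive RAIDZ2 one.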

There's no question that RAIDZ3 provides more data protection than RAIDZ2, but I think there's a real question as to how useful that additional data protection is. To lose data from a four-disk RAIDZ2 pool that's properly set up and monitored (i.e., email alerts are working both for the system and for SMART monitoring, regular SMART tests are running, scrubs are running on schedule), you'd need to have half of your drives catastrophically fail within a few days, and have an unrecoverable read error on a third drive while trying to resilver the drives. Even in that case, the data loss would be confined to the extent of the URE.
 

Arman

Patron
Joined
Jan 28, 2016
Messages
243
The way that you wrote:

makes it sound as if the two are synonymous, and they are not.
Ohhh yes, I didn't mean to write it like that...
 

Arman

Patron
Joined
Jan 28, 2016
Messages
243
Pretty much what I said--you said "8 drives because I will be using RAIDZ3", and I don't see any causal connection between those two things.

Eight 1 TB drives in RAIDZ3 will give you ~5 TB, or 4.5 TiB, of net capacity, of which a little over 3.5 TiB will be usable when accounting for the recommendation to not fill your pool to more than 80%. Accepting @KJaneway's numbers, it will cost 480 Euros. That pool will tolerate the complete failure of up to three drives without data loss. In comparison, four 3 TB drives in RAIDZ2 will give you ~6 TB, or 5.4 TiB, of net capacity, of which about 4.5 TiB will be usable. It will cost 40 Euros less, use less power, take less space in your chassis, and still tolerate the total failure of any two drives without data loss.

There's no question that RAIDZ3 provides more data protection than RAIDZ2, but I think there's a real question as to how useful that additional data protection is. To lose data from a four-disk RAIDZ2 pool that's properly set up and monitored (i.e., email alerts are working both for the system and for SMART monitoring, regular SMART tests are running, scrubs are running on schedule), you'd need to have half of your drives catastrophically fail within a few days, and have an unrecoverable read error on a third drive while trying to resilver the drives. Even in that case, the data loss would be confined to the extent of the URE.
Sorry, I didn't mean to write it as if they are related...

Yes, having a smaller quantity of drives at a higher capacity will use less power and space. However, higher-capacity drives have a higher failure probability. This also includes a higher probability of "bit rot". Now, I'm not a computer scientist or an IT expert of any kind, so correct me if I'm wrong.

You mentioned a 4 disk setup in RAIDZ2. I'm assuming the pool has been split into 2 vdevs. If 2 of the drives from 1 vdev fail, wouldn't you lose the whole pool?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
You mentioned a 4 disk setup in raidz2. I'm assuming the pool has been split into 2 vdevs.
Why are you assuming this? It's not possible to make two RAIDZ2 vdevs out of four disks; four disks is the minimum in a single RAIDZ2 vdev*. Your question suggests to me that you may not understand what RAIDZ2 (and by extension, RAIDZ1 or RAIDZ3) is. How, in brief, do you understand those systems to work?

Edit: With respect to failure probability, there are two things to consider: the probability of total drive failure (i.e., the drive simply dies), and the probability of a read error (either the drive simply fails to read a requested block, or it returns incorrect data). On the former, I'm not aware of any kind of direct correlation between disk capacity and failure rate--a 3 TB disk isn't, as such, significantly more likely to fail catastrophically than a 1 TB disk. As to the latter, the published read error rates do directly track with capacity--they're in the form of errors per byte.

* Well, not without doing something crazy. I could make a RAIDZ2 vdev on a single disk by partitioning it into four slices and then doing something like 'zpool create stupidpool raidz2 ada0p0 ada0p1 ada0p2 ada0p3', but there'd be no reason to do anything like that.
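The point about read-error rates tracking capacity can be illustrated with a back-of-envelope sketch. The one-error-per-10^14-bits rate is a commonly published consumer-drive spec-sheet figure, assumed here rather than taken from this thread:

```python
# Back-of-envelope expected unrecoverable read errors (UREs) when reading
# an entire drive once, assuming the commonly published consumer-drive
# spec of at most 1 error per 1e14 bits read. The rate is an assumed
# spec-sheet figure, not a number taken from this thread.
URE_PER_BIT = 1e-14

def expected_ures(capacity_tb):
    bits_read = capacity_tb * 10**12 * 8   # whole-drive read, in bits
    return bits_read * URE_PER_BIT

for tb in (1, 3):
    print(f"{tb} TB full read: ~{expected_ures(tb):.2f} expected UREs")
```

As the sketch shows, the expected error count scales linearly with how much data you read, which is why larger drives carry a larger per-resilver URE risk even if their mechanical failure rates are similar.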
 
Last edited:

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Yeah, RAIDZ1 is out of date.
Yes, I am aware that if I use 1 vdev I would have to replace all drives. Let's say I had 7 drives: I made a vdev containing 4 drives and another vdev containing 3. Let's say all 3 drives in the second vdev die. Doesn't losing a vdev mean you've lost the whole pool? Sorry if I've misunderstood something. Can you tell me about the cons of having more than 1 vdev?
Replying to a question from another thread to keep things together...

The only possible "con" I can think of in using multiple vdevs is that you would lose some space. All things being equal, of course:
  • A single vdev of 8 drives in RaidZ3 would use 3 drives for redundancy; resulting in a net of 5 drives worth of space
  • Two vdevs of 4 drives in RaidZ2 would use 2 drives from each vdev for redundancy; resulting in a net of 4 drives worth of space

While you mentioned 7 drives in your reply, you stated 8 drives in the original post. IMO, the better scenario would be to have two vdevs consisting of four drives in each vdev (if you truly want to fill all available bays)... Of course you could do as others have stated and simply get fewer, larger drives from the start and go from there.

Yes, losing any vdev that is part of a pool would result in total loss of the pool.

There is always a balance between Space, Redundancy and Speed that you need to decide upon when setting up your system. It does get confusing and it is actually a good thing that you are giving this thought now as opposed to it being an afterthought. If you haven't already, check out "Slideshow explaining VDev, zpool, ZIL and L2ARC for noobs!", it will help a lot in getting you familiarized.
 

Arman

Patron
Joined
Jan 28, 2016
Messages
243
Why are you assuming this? It's not possible to make two RAIDZ2 vdevs out of four disks; four disks is the minimum in a single RAIDZ2 vdev*. Your question suggests to me that you may not understand what RAIDZ2 (and by extension, RAIDZ1 or RAIDZ3) is. How, in brief, do you understand those systems to work?

Edit: With respect to failure probability, there are two things to consider: the probability of total drive failure (i.e., the drive simply dies), and the probability of a read error (either the drive simply fails to read a requested block, or it returns incorrect data). On the former, I'm not aware of any kind of direct correlation between disk capacity and failure rate--a 3 TB disk isn't, as such, significantly more likely to fail catastrophically than a 1 TB disk. As to the latter, the published read error rates do directly track with capacity--they're in the form of errors per byte.

* Well, not without doing something crazy. I could make a RAIDZ2 vdev on a single disk by partitioning it into four slices and then doing something like 'zpool create stupidpool raidz2 ada0p0 ada0p1 ada0p2 ada0p3', but there'd be no reason to do anything like that.
In brief, I just know that RAIDZ2 allows up to 2 disks to fail before losing the whole pool and RAIDZ3 allows 3. I went back and recapped vdevs again.
Just to check: each vdev has its own RAID configuration. For example, if I have 10 drives I can split them into 2 vdevs of 5 drives each, with each vdev configured as RAIDZ3 for maximum parity/redundancy. Does that sound good? I could lose up to 3 drives in each vdev (6 drives in total) and still recover all my data, right? However, if I want to go with 8 drives, it would be best to create 2 vdevs of 4 drives each, configured as RAIDZ2. I could lose up to 2 drives from each vdev (4 in total) and still recover all data, right?

I went on Newegg and saw that I can get five 2 TB Reds for £314.95, or five 3 TB Reds for £379.95.
Screen Shot 2016-05-06 at 6.12.02 pm.png
Screen Shot 2016-05-06 at 6.12.15 pm.png

If I need more storage I can make another vdev consisting of 5 drives, right?
@Mirfster @danb35
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
In brief, I just know that RAIDZ2 allows up to 2 disks to fail before losing the whole pool and RAIDZ3 allows 3. I went back and recapped vdevs again.
  • Correct
Just to check: each vdev has its own RAID configuration.
  • Close enough
For example, if I have 10 drives I can split them into 2 vdevs of 5 drives each, with each vdev configured as RAIDZ3 for maximum parity/redundancy. Does that sound good? I could lose up to 3 drives in each vdev (6 drives in total) and still recover all my data, right? However, if I want to go with 8 drives, it would be best to create 2 vdevs of 4 drives each, configured as RAIDZ2. I could lose up to 2 drives from each vdev (4 in total) and still recover all data, right?
  • I think you have the idea in your head that a certain number of drives means you have to use a particular RAIDZ level.
    • That is technically not the case (though it has been advised in some cases).
    • There is a "rule of thumb" about not going above 11 drives in a single vdev, but even that has been done by others without issue.
  • There is nothing really wrong with other configurations (using 10 drives as an example); it would mostly depend on your use case...
    • 1 x 10-drive RAIDZ3 vdev
    • 1 x 10-drive RAIDZ2 vdev
    • 2 x 5-drive RAIDZ2 vdevs
  • Is there a particular reason you deem RAIDZ3 necessary?
    • Not that I am knocking it, but consider what I mentioned in my first bullet.
If I need more storage I can make another vdev consisting of 5 drives, right?
  • Correct
  • Just some additional "food for thought"...
    • Data at rest will stay at rest. By this I mean: if you start off with 1 vdev of 5 drives and later add another vdev to the pool/volume...
      • Any data/file that is not being modified will still reside only on the first vdev. It is not rewritten to span both vdevs unless it is changed/modified.
    • So, what does this really mean?
      • It means you do not truly get the increased IOPS for "stagnant" data after you add another vdev.
      • You will get those benefits for newly added data, though.
Not trying to drag you into the weeds, just providing my opinions. Of course, I may be incorrect in my assumptions and if so I hope others correct me.
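The fault-tolerance rule being worked out above (a pool survives only if every vdev stays within its parity budget, and losing any one vdev loses the pool) can be sanity-checked with a short sketch:

```python
# Each RAIDZ vdev tolerates at most `parity` failed drives, and losing
# any single vdev loses the entire pool.
def pool_survives(failures_per_vdev, parity):
    return all(f <= parity for f in failures_per_vdev)

# Two 5-drive RAIDZ3 vdevs: 3 failures in each (6 total) is survivable.
print(pool_survives([3, 3], parity=3))   # True
# Two 4-drive RAIDZ2 vdevs: 2 failures in each (4 total) is survivable,
print(pool_survives([2, 2], parity=2))   # True
# but 3 failures landing in one vdev is not, even though only 4 drives died.
print(pool_survives([3, 1], parity=2))   # False
```

The last case is the catch with multiple vdevs: what matters is not the total number of failed drives but where the failures land.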
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
For example if I have 10 drives I can split them into 2 sets of 5 drives in each VDEV which is configured to RAIDZ3 for maximum parity/redundancy.
Sure, you could. That would use, effectively, four drives for data and six for parity. That's an extremely conservative configuration. Taking the results of the calculator you posted, that gives you a mean time to data loss of 114 billion years, several orders of magnitude more than is necessary IMO. Cut this in half to account for the fact that failure of either vdev results in the loss of your pool, and you get only 57 billion years.

I understand wanting to be conservative with your data. I've lost enough data to hard drive crashes that I'm working hard to see that it never happens again. For me, six-disk RAIDZ2 vdevs are plenty safe to do that.
 

Arman

Patron
Joined
Jan 28, 2016
Messages
243
I think you have the idea in your head that a certain number of drives means you have to use a particular RAIDZ level.
Oh no, I think you're misunderstanding. I just meant that a higher number of drives allows me to choose a configuration with more parity.

There is nothing really wrong with other configurations
Nope, I don't think there is anything wrong with other configurations. However, the reason I keep insisting on RAIDZ3 is that it is more redundant. There isn't anything problematic about that, is there?

Any data/file that is not being modified will still reside only on the first vdev
Thanks for telling me about that; I didn't know. Is there not some sort of "refresh" option that would automatically rearrange and spread the data across the other vdev when you add it? If not, would iXsystems be able to add such a feature in the next update if I let them know about it?

Not trying to drag you into the weeds
Of course not! :) I appreciate all thoughts and comments provided that they are accurate and not misleading.
 

Arman

Patron
Joined
Jan 28, 2016
Messages
243
Sure, you could. That would use, effectively, four drives for data and six for parity. That's an extremely conservative configuration. Taking the results of the calculator you posted, that gives you a mean time to data loss of 114 billion years, several orders of magnitude more than is necessary IMO. Cut this in half to account for the fact that failure of either vdev results in the loss of your pool, and you get only 57 billion years.

I understand wanting to be conservative with your data. I've lost enough data to hard drive crashes that I'm working hard to see that it never happens again. For me, six-disk RAIDZ2 vdevs are plenty safe to do that.
I'm quite confused :( What do you mean? All those billions of years for what? What is it? Can you give me a comparison between the setup I suggested and the setup you have? What is a mean time to data loss? What is IMO? Sorry if I seem annoying... I'm just new to this stuff, so I'm trying to absorb as much as I can...
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
However, the reason I keep insisting on RAIDZ3 is that it is more redundant. There isn't anything problematic about that, is there?
RAIDZ3 is slower than RAIDZ2, which is itself slower than RAIDZ1, and uses more of your disk space for parity. In return, it offers greater data protection. It's up to you whether the greater data protection is worth it, but I don't believe it generally is. I believe that most of the time, for the home user, RAIDZ3 is a waste of CPU cycles, disk space, energy, and consequently money. The money would be better spent on (1) a lower-spec machine to act as a backup server, ideally stored offsite; or (2) some form of cloud backup solution.
Is there not some sort of "refresh" option that would automatically rearrange and spread the data across the other vdev when you add it? If not, would iXsystems be able to add such a feature in the next update if I let them know about it?
No, there is no such feature, and it's unlikely iXsystems will be able to add one. This is baked into the core of ZFS.
All those billions of years for what? What is it?
The screenshots you posted indicate, based on the assumptions you entered (which include an MTBF of 500,000 hours for the drives, and that you'd replace a failed drive within 72 hours), that a RAIDZ3 vdev consisting of five 2 TB drives will, on average, go 114 billion years (that's what 1 x 10^15 hours means) before you'll lose data to disk failures.
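For the curious, the order of magnitude can be reproduced with a common textbook MTTDL approximation. This formula is an assumption on my part (the calculator referenced in the thread isn't shown, and may differ in detail), but it lands in the same ballpark as the ~10^15 hours quoted above:

```python
import math

# Textbook MTTDL approximation for one N-drive vdev with p parity drives:
#   MTTDL ~= MTBF^(p+1) / (N*(N-1)*...*(N-p) * MTTR^p)
# Assumed formula -- the calculator referenced in the thread may differ.
HOURS_PER_YEAR = 24 * 365.25

def mttdl_hours(n, parity, mtbf_h, mttr_h):
    ways = math.prod(n - k for k in range(parity + 1))  # N*(N-1)*...*(N-p)
    return mtbf_h ** (parity + 1) / (ways * mttr_h ** parity)

# Five-drive RAIDZ3, 500,000 h MTBF, 72 h replacement window:
h = mttdl_hours(n=5, parity=3, mtbf_h=500_000, mttr_h=72)
print(f"5-drive RAIDZ3: ~{h:.1e} hours, ~{h / HOURS_PER_YEAR:.1e} years")
```

With those inputs the sketch gives roughly 10^15 hours, i.e. on the order of a hundred billion years, matching the scale of the calculator's output.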
Can you give me a comparison between the setup I suggested and the setup you have?
No, because I'm not able to reach @Bidule0hm's calculator from here. But if you look at the numbers for a 6 x 4 TB RAIDZ2 vdev with a 500,000-hour MTBF (mine are WD Red drives, so it's actually 1,000,000 hours, but I doubt it will make a difference), you'll see that the MTTDL line is still a very large number.
What is IMO?
In My Opinion.
 

maglin

Patron
Joined
Jun 20, 2015
Messages
299
I'm wondering if vdev and zpool are being used to mean the same thing in some of these posts. Isn't a vdev part of a zpool, and as such spanned across all drives in said zpool? Or did I miss some context in earlier posts?
Also, bit rot is not much of a factor if you have disk scrubs scheduled at least once a month.
But the discussion of disk failure in regard to RAIDZx has been spot on, as far as I understand it.


Sent from my iPhone using Tapatalk
 