Your Opinions

Status
Not open for further replies.

EQNish

Dabbler
Joined
May 18, 2017
Messages
22
I was able to get a good deal on some 4TB 7200RPM SAS drives and bought enough to fill my chassis! Since I'm replacing all my 1TB drives and have already offloaded my data to temporary storage, I have the opportunity to move things around or change my build a bit.
My current build is:
i5-3350
16GB RAM (DDR3 ECC) (I know I should upgrade the memory; a move to 32GB is in the works)
x2 LSI 9210-8i SAS controllers
x10 WD Red 1TB 5400RPM
1 dual-port 10G NIC
1 quad-port 10/100/1000 NIC
* This box is on its own UPS and is set to gracefully shut down 5 minutes after power loss.

1 10G switch (for the storage network between servers)
1 FE switch for work (normal/basic traffic)

The new build swaps the hard drives out for the HGST 4TB SAS drives.
So, my questions are these:
I recently upgraded my workstation to a 500GB SSD, which leaves a spare 120GB SSD. I was going to put this in as a SLOG drive...thoughts?


I had the 1TB drives in one RAIDZ2 volume, with 3 iSCSI LUNs and 2 SMB shares; two iSCSI LUNs were 2TB each and one was 2GB (quorum), and the SMB shares were 1TB each.
Since I'm adding 6 additional drives, would it be better to split my disks into more volumes? What would be the benefit?

I mainly use this as a host for the VMs in my lab: 6 servers (3 VMware, 3 Hyper-V) running as many as 100 VMs.

Anyway, what do you think about the disk setup/options?
 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
Well, off hand and quickly, as I am watching the NBA finals and forum surfing during the commercials: I don't know of any i5s that support ECC. It's not the biggest deal to run non-ECC, but if you already have ECC RAM....


Sent from my iPhone using Tapatalk
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080

which leaves a 120G SSD, I was going to put this in as a SLOG drive...thoughts?
It might help you with sync write speed, but you really need something better than a SATA SSD, because that limits your throughput to 2Gb/s at most.
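To put rough numbers on that throughput concern, here is a back-of-the-envelope sketch. The ~250MB/s sustained-write figure for a desktop-class 120GB SATA SSD is an assumption for illustration, not a measured value:

```python
# Rough SLOG throughput sanity check (all figures are estimates).
GBIT = 1e9 / 8  # bytes per second in one gigabit

# Assumed sustained sync-write speed of an older desktop-class
# 120GB SATA SSD (hypothetical ballpark for illustration).
ssd_sustained_mb_s = 250

# A single 10GbE link can deliver roughly 1.25 GB/s of writes.
nic_mb_s = 10 * GBIT / 1e6  # = 1250 MB/s

ssd_gbit = ssd_sustained_mb_s * 1e6 / GBIT
print(f"SSD can absorb ~{ssd_gbit:.0f} Gb/s of sync writes")   # ~2 Gb/s
print(f"10GbE can deliver ~{nic_mb_s / ssd_sustained_mb_s:.0f}x that")  # ~5x
```

Under those assumptions a desktop SATA SSD saturates at roughly a fifth of what one 10GbE port can push at it, which is where the "2Gb/s at most" ceiling comes from.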
 

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626
You need a Xeon.
Some motherboards support Core i3 (some i3s support ECC) and/or some Pentium Gold CPUs, but I haven't checked whether a motherboard that supports a gen-3 i5 may support them too.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Some motherboards support Core i3 (some i3s support ECC) and/or some Pentium Gold CPUs, but I haven't checked whether a motherboard that supports a gen-3 i5 may support them too.
The OP didn't say what the board is, which is why I asked, because it matters.
What system board is it?
It all comes down to which chipset is on the board.
 

EQNish

Dabbler
Joined
May 18, 2017
Messages
22
Thanks guys, I was mistaken about the RAM; what is in the system is non-ECC (I have tons of ECC 8GB sticks and thought I had used those). Anyway, I appreciate the information and guidance, but I think you are all missing the questions.

I recently upgraded my workstation to a 500GB SSD, which leaves a spare 120GB SSD. I was going to put this in as a SLOG drive...thoughts?


I had the 1TB drives in one RAIDZ2 volume, with 3 iSCSI LUNs and 2 SMB shares; two iSCSI LUNs were 2TB each and one was 2GB (quorum), and the SMB shares were 1TB each.
Since I'm adding 6 additional drives, would it be better to split my disks into more volumes? What would be the benefit?

I mainly use this as a host for the VMs in my lab: 6 servers (3 VMware, 3 Hyper-V) running as many as 100 VMs.

All of the other info is there so you have a better understanding of what I have and what I am using it for!

In the not too distant future I plan on upgrading the MB/CPU/RAM, but not now...the new drives blew my yearly IT budget!!!!

Thanks!
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
but I think you all are missing the questions.
Mostly, that is because you didn't give enough information to really answer the questions.
Since I'm adding 6 additional drives, would it be better to split my Disks into more volumes? what would be the benefit?
I mainly use this as a host for VMs in my lab, 6 servers (3 vmware, 3 Hyper-V) running up to as many as 100VMs
You have a pool of drives that you use for iSCSI and you run as many as 100 VMs... That (to me) means you need a lot of IOPS and that there will be a lot of sync-write activity. If I were setting up a pool for that, I would use mirror vdevs instead of RAIDZ2. More vdevs generally translates into more IOPS.
4TB 7200RPM SAS drives and bought enough to fill my Chassis!
x10 WD red 1TB 5400rpm
x2 LSI 9210 -8i SAS controllers
Since I'm adding 6 additional drives
Putting all these things together (if you had followed the forum rules, it would have been presented together to begin with), I can determine that you have a 16-bay chassis and will have these drives connected to 2 different SAS controllers, so not a chassis with an expander backplane.
Performance-wise, you would do better to get a SAS expander and run all the drives from one SAS controller. I have tried it both ways; voice of experience. Something like this:
https://www.ebay.com/itm/Intel-RES2...der-Card-SAS-SATA-PCI-Express-x4/141788425753
Don't get the HP ones; they are only 3Gb/s, where this one is 6Gb/s.
Then you should set your drives up as 8 mirrored pairs. That will give you around 22TB of practical, usable capacity, and the IOPS should be (guessing at the speed of the drives) around 2720 IO/s... That is much better than you will ever get with RAIDZ2...
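As a rough sketch of where figures like those come from (the ~170 IOPS per 7200RPM spindle and the 70% fill target for a block-storage pool are commonly assumed ballparks, not specs):

```python
# Back-of-the-envelope pool math for 16x 4TB drives as 8 mirrored pairs.
# All figures are rough estimates for illustration.

drives = 16
drive_tb = 4
iops_per_drive = 170       # assumed ballpark for a 7200RPM spindle

vdevs = drives // 2        # 8 two-way mirrors
raw_tb = vdevs * drive_tb  # 32TB of mirrored capacity
usable_tb = raw_tb * 0.7   # keep iSCSI/block pools well under full

# Mirrors can read from both sides, so read IOPS scale with drive count;
# write IOPS scale with the number of vdevs.
read_iops = drives * iops_per_drive   # ~2720
write_iops = vdevs * iops_per_drive   # ~1360

print(vdevs, raw_tb, round(usable_tb), read_iops, write_iops)
```

With those assumptions the numbers land right on the figures above: about 22TB of practical capacity and about 2720 read IO/s from 8 mirror vdevs.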
which leaves a 120G SSD, I was going to put this in as a SLOG drive...thoughts?
As for the SLOG: yes, you need one; no, the 120GB SSD that came out of a desktop computer is not going to be a good one. I commented on that above, but you appear to have ignored that along with the questions that were asked.
Would you like a suggestion about what would make a good SLOG?
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
Chris is spot on here. Just having 100 VMs in and of itself presents a load. You don't mention what kind of OS the VMs are running, but Unix/Linux kernels generally call some kind of system-wide fflush() every 30 seconds or so. So at 100 VMs, you could be catching 3 of those per second...

You need IOPS, mirrored stripes, and probably an intent log (note: the nomenclature has changed on me; I'm using the old Solaris nomenclature here). RAIDZ reduces your write throughput because it requires a number of drives to complete writes before a call can return complete. The worst-case rule of thumb for RAID5-ish kinds of arrays: you get some read performance improvement, but the write speed of a single disk spindle. RAIDZ does a bit better, but not much. Stripes scale. Mirroring adds redundancy to stripes and scales read performance even further. You can think of it as hyper-threading for I/O. The more unblocked threads, the better.
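To make the flush-rate arithmetic above concrete (a sketch; the 30-second interval is the rough figure mentioned, not a measured one, and the vdev counts are illustrative):

```python
# Rough rate of system-wide flushes hitting the pool, and how vdev
# layout spreads that sync-write load (all figures illustrative).
vms = 100
flush_interval_s = 30            # assumed periodic sync interval per guest

flushes_per_s = vms / flush_interval_s   # ~3.3 flushes/second

# One wide RAIDZ2 vdev handles sync writes roughly like a single
# spindle; 8 mirror vdevs can service 8 write streams in parallel.
raidz2_write_streams = 1
mirror_write_streams = 8

print(f"~{flushes_per_s:.1f} flushes/s across {mirror_write_streams} "
      f"mirror vdevs vs {raidz2_write_streams} wide RAIDZ2 vdev")
```

Under those assumptions the pool is absorbing a steady trickle of forced syncs around the clock, which is exactly the workload where mirror vdevs (and a proper SLOG) pull ahead of RAIDZ2.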
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
In the not too distant future I plan on upgrading the MB/CPU/RAM, but not now...the new drives blew my yearly IT budget!!!!
Before you blew up the budget, you should have asked what was a good way to spend the money.
You could have had a lot more bang for your buck.

Sent from my SAMSUNG-SGH-I537 using Tapatalk
 