new system for use in schools

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
We ran into the same issue when we recently looked at adding another pair of Nimbles; after HPE bought them, the price per array roughly doubled.

We are looking into IaaS (Infrastructure as a Service) to pay OpEx rather than CapEx.
https://www.hpe.com/us/en/services/flexible-capacity.html
 

damienginty

Dabbler
Joined
Jan 23, 2019
Messages
15
Hi, thanks for the reply.

I am happy to look at rebuilding the appliance; I’ve found it all rather interesting.

I’ve attached a screenshot of netdata; it’s a typical view when I have a few classes saving video projects to the Windows file servers hosted on FreeNAS.

Our network:
Each distribution switch is connected to the core with a 2 x 10 Gbps LAG. The core switch is made up of 3 x 48-port HP 5900s in an IRF, with 4 x 40 Gbps connections between each. In turn, our virtual servers have 4 x 10 Gbps connections to each side of the core. The iSCSI side of things is based on 2 x HP 5920AF in an IRF; all virtual hosts have 4 x 10 Gbps connections to this in an MPIO arrangement.

I understand that throughput to this appliance is limited by the number of 10 Gbps connections to it (two at the moment, in an MPIO config), but when writes get close to the network limit the CPU usage is close to 100%.
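As a sanity check, here is a rough Python sketch of the ceiling those two links impose (line rates only, assuming MPIO spreads the load evenly and ignoring iSCSI/TCP overhead, so real throughput will sit somewhat below this):

```python
# Back-of-the-envelope ceiling for the current 2 x 10 Gbps MPIO setup.
# Assumes roughly even MPIO distribution and ignores protocol overhead.

LINKS = 2          # 10 Gbps paths to the FreeNAS box
LINK_GBPS = 10     # per-link line rate, gigabits per second

total_gbps = LINKS * LINK_GBPS
total_gbytes_per_s = total_gbps / 8   # bits -> bytes

print(f"Theoretical ceiling: {total_gbps} Gbps ≈ {total_gbytes_per_s:.1f} GB/s")
# Theoretical ceiling: 20 Gbps ≈ 2.5 GB/s
```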

The past several months have given me enough trust in FreeNAS to consider replacing all of our existing storage with 2 x FreeNAS servers; that said, I would need to be confident that they could handle all the IO.

At the moment we use a mix of EqualLogic, Nimble and Jetstore iSCSI.

I am hoping to have access to 2 x 18-bay Dell R720s that we could repurpose, if they would be suitable. If so, the question is what hardware to put in them…
 

Attachments

  • FN1.jpg

damienginty

Dabbler
Joined
Jan 23, 2019
Messages
15
So, since this thread started we have demoted a Dell PowerEdge R720 and swapped over some components from the R620 that we had been using with FreeNAS. The server specs are as follows:


2 x E5-2695 v2
384 GB RAM
2 x 12DNW HBAs (flashed with IT firmware to connect to the MD 1220s)
1 x LSI720 card (for system drives and L2ARC)
1 x 57840S quad-port SFP+ network daughter card
1 x Intel 375 GB NVMe PCIe card (for use as ZIL)
1 x Intel dual-port X540 network card

All firmware is as up to date as can be and the system is running FreeNAS-11.2-U7.

The pool is encrypted and set up with 47 mirrored drives, all 600 GB 10k SAS (see attached ZFS2.jpg for more info). In addition, it has 14 Intel SSDs for L2ARC and an Intel Optane PCIe NVMe drive for ZIL.
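For context, a rough capacity sketch; the vdev count here is an assumption about how "47 mirrored drives" is laid out, and the ~50% fill target is just the commonly quoted guidance for iSCSI block storage on ZFS, not a figure from ZFS2.jpg:

```python
# Rough usable-capacity estimate for a pool of two-way mirror vdevs.
# MIRROR_VDEVS = 47 is an assumption; adjust to match the layout in ZFS2.jpg.

MIRROR_VDEVS = 47          # number of two-way mirrors (assumed)
DRIVE_TB = 0.6             # 600 GB SAS drives
ISCSI_FILL_TARGET = 0.5    # common guidance: keep block-storage pools ~50% full

raw_tb = MIRROR_VDEVS * 2 * DRIVE_TB
usable_tb = MIRROR_VDEVS * DRIVE_TB             # mirrors halve raw capacity
comfortable_tb = usable_tb * ISCSI_FILL_TARGET  # headroom for iSCSI fragmentation

print(f"raw {raw_tb:.1f} TB, usable {usable_tb:.1f} TB, "
      f"comfortable for iSCSI ~{comfortable_tb:.1f} TB")
```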

It is hoped that this kind of setup would be a realistic option to replace the Nimble, Jetstore and EqualLogic units; as such, the majority of its workload will be iSCSI. Ideally I would like 2 x 10 Gbps connections to each side of the iSCSI core and to use MPIO to balance the load.

So, the problems.

Writing data to the pool is much slower than I would have expected (from within a VM); reading data is okay once it is cached.


My main issue arises when data is moved to the pool in bulk: the system appears to hang for 10 seconds and the web console becomes slow and unresponsive. Netdata, on the other hand, keeps working and doesn’t show anything alarming.

What are the best ways to go about troubleshooting what is going on?
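One idea I have been considering is a small latency probe along these lines, so the ~10-second stalls show up as timestamped outliers; the target path is a placeholder for a file on the affected storage:

```python
# Minimal latency probe: times a small fsync'd write once a second and logs
# anything that takes longer than a second. Run it against a file on the
# iSCSI-backed datastore (or SMB share) that is exhibiting the hangs.
# Stop it with Ctrl+C.

import os, time

TARGET = "/mnt/iscsi-test/probe.bin"   # placeholder path on the affected storage
BLOCK = b"\0" * 4096

while True:
    start = time.monotonic()
    fd = os.open(TARGET, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        os.write(fd, BLOCK)
        os.fsync(fd)                   # force the write to stable storage
    finally:
        os.close(fd)
    elapsed = time.monotonic() - start
    if elapsed > 1.0:                  # flag anything slower than a second
        print(time.strftime("%H:%M:%S"), f"write+fsync took {elapsed:.1f}s")
    time.sleep(1)
```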

Physical test.jpg shows results from CrystalDiskMark on a physical computer connected to FreeNAS with sync set to standard. Physical test Sync always.jpg shows the same computer running the same test, but with sync set to always. The final test was run from a VM running on the computer the other two tests came from.
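For a crude point of comparison outside CrystalDiskMark, something like the sketch below (the test path is a placeholder) illustrates why forcing a flush per block, roughly what sync=always does, drops throughput so much and why the Optane SLOG matters for that case:

```python
# Crude illustration of async-style vs sync-style writes: the same amount of
# data is written once with a single fsync at the end, and once with an fsync
# after every block. The path is a placeholder for the share being tested.

import os, time

TEST_FILE = "/mnt/iscsi-test/sync_test.bin"   # placeholder path
BLOCK = b"\0" * (1 << 20)                     # 1 MiB blocks
BLOCKS = 256                                  # 256 MiB total

def run(fsync_every_block: bool) -> float:
    start = time.monotonic()
    fd = os.open(TEST_FILE, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        for _ in range(BLOCKS):
            os.write(fd, BLOCK)
            if fsync_every_block:
                os.fsync(fd)          # flush after every block
        os.fsync(fd)                  # final flush in both cases
    finally:
        os.close(fd)
    mib = BLOCKS * len(BLOCK) / (1 << 20)
    return mib / (time.monotonic() - start)

print(f"async-style: {run(False):.0f} MiB/s")
print(f"sync-style:  {run(True):.0f} MiB/s")
```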

Are these results in line with what is expected?

Any help or advice would be very much welcome.

Sincerely
 

Attachments

  • ZFS2.jpg
  • Physical test.jpg
  • Physical test Sync always.jpg
  • test within vm.jpg