ESXi bare-metal FreeNAS Expansion Build

Joined
Oct 26, 2019
Messages
8
Cheers All! First Post, but long-time lurker; have posted on FB page a few times…

Appreciate the community and all the knowledge, especially from the long-time members! Sometimes your patience is awe-inspiring lol… Hopefully I don't test it here, but probably will.

I've been running a system and learning along the way for some time, but still feel ignorant compared to some here.
My current system is a Family/Work/Pleasure ESXi bare metal setup:
  • MB: X11SPM-TPF
  • CPU: Xeon Silver 4114
  • RAM: 192GB (6x 32GB)
  • Drives:
    • 4x 4TB WD Red Pro (Pool-1, RAIDZ1)
    • 4x 6TB WD Red Pro (Pool-2, RAIDZ1)
    • 1x 4TB WD Purple
    • 1x 512GB Samsung Pro NVMe
    • 2x 128GB Supermicro SATADOMs
  • Networking: 1x 10Gb fiber with 1x 1Gb fail-over
    • Going to add a larger 10Gb switch: Ubiquiti US-16-XG
  • All in a Fractal Node 804

This is my first fully operational system that is on 24/7 and utilized for very mixed use:
  • VM’s:
    • NextCloud (internal and about 20 external users)
      • 7.5TB of current data and growing
      • NFS & SMB
      • Also serves as customer product information and instructional video access (intermittent use)
      • Family Lineage vault for external family sharing and data uploads (very intermittent use, but when accessed, transfers are large, i.e. thousands of photos and multiple videos each way)
  • Obviously FreeNAS, with the Intel 8-port AHCI controller passed through
    • Two pools, because I had 4x 4TB from a WD PR4100 setup, then purchased the 4x 6TB when I wanted more storage.
  • Windows 10
    • Plex (about 5 intermittent external users)
      • 15TB of current data
    • Central Windows Machine Back-up and file history
      • Five machines
    • Central File server
      • 2TB
    • ESXi sSATA controller serving the 4TB Purple to Windows
      • Used for the HDHomeRun DVR, with the original goal of hosting the security system as well
    • HDHomeRun
      • Heavy use, and a mother-in-law that keeps her TV on 24/7
        • She's definitely tested my 24/7 wireless data reliability
    • Syncthing
      • Primarily my brother and I sharing very large files (don't ask what)
  • NFS disk and multiple SMB shares
  • Ubuntu, but more as a test VM for learning and breaking things
  • Plan for more VMs for other work-related usage; more on that another time
Have learned a lot since the build, primarily that my poor NFS performance with ESXi was due to having no SLOG, but I will be fixing that soon.
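For anyone hitting the same wall: ESXi issues its NFS writes synchronously, so without a SLOG every write waits on the pool's ZIL. A quick way to confirm that sync writes are the bottleneck is a temporary sync toggle; this is a sketch only, the dataset name is a placeholder, and disabling sync is unsafe outside of testing:

```shell
# Run on the FreeNAS box; "tank/vmware" is a placeholder for the dataset
# exported to ESXi over NFS.
zfs get sync tank/vmware            # "standard" honors sync requests (default)
zfs set sync=disabled tank/vmware   # TEST ONLY: if NFS write speed jumps,
                                    #   a SLOG will help
zfs set sync=standard tank/vmware   # restore safe behavior when done
```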

Here is where I'm at: My NextCloud Family Lineage site has been more popular than I anticipated; I now have documents and photos that go back to the early 1800's and early 1900's. I'm adding more members pretty regularly and the data is growing pretty fast, about 3TB since the initial load this year. This is the critical data I want to ensure is secure, with long-term degradation minimized.

Work data is important, and speed of access is critical, but this is replicated data that can be recovered easily.

Next is the pleasure part: as I travel all the time, data access and ease of access are important. This has been great, but honestly the heavy hitter on resources is really just Plex transcoding when I'm away. The other hit is when I'm moving large amounts of data for use, and I'm highly impatient, so speed is very important. My workstation is already connected to my 10Gb switch and speed is great there, but FreeNAS NFS performance through ESXi to Windows was poor, i.e. no SLOG.

I've learned a lot and know that the current system is compromised on pool setup and other areas, but here is my plan to resolve that:
  • Move the current system to a 1U for the vSphere Management console (I have an Essentials license) and possibly noncritical operations
  • Add a 2U storage server, ESXi bare metal
  • Improve pool set-up and increase failure tolerance from Z1 to Z2
  • Fix the NFS/ESXi issues with a proper SLOG
  • Add the ability for hardware fail-over
New Equipment (all items purchased unless noted):

1U Chassis: Supermicro BPN-SAS3-815TQ
  • Move/add the 6TB drives and other hardware from above
  • Reduce to 64GB RAM (2x 32GB)
  • Move to a Supermicro passive cooler: SNK-P0067PS
  • The 4TB drives and the WD PR4100 go to my brother
  • Noctua cooler, PSU and case to be sold

2U Chassis: Supermicro SC826BE2C_R920LPB:
Chose the SC826BE2C_R920LPB due to the dual 12Gb/s SAS3 backplane, for future data expansion ability
  • Motherboard: X11SPH_nCTPF
  • CPU: Silver 4214
    • Been happy with the 4114, and the 4214 just adds some capability at close to the cost of the original 4114
    • Supermicro 2U active cooler: SNK-P0068APS4
  • RAM: 192GB (6x 32GB) DDR4 ECC Registered @ 2666; will operate at 2400
    • Only had to purchase 2 sticks
  • Drives: 10x 12TB Seagate Exos X12 nearline SAS 12Gb/s: ST12000NM0027
    • SAS3008 passed directly to FreeNAS
    • These will be used for a single Z2 pool
    • $38.90/TB
  • NVMe: 1TB Samsung Pro; primarily for the Windows VM
  • SATADOMs: boot devices for ESXi, FreeNAS, and other OS VMs that are not data-intensive
  • SLOG: Haven't purchased yet - this is where I need help. Looked at the 900P, but I want PLP, so I'm looking at the DC P4800X (the 900P is much cheaper, but seems pointless with no end-to-end data protection or PLP). Other suggestions?
  • L2ARC: Don't believe I need one for my use, but the 900P seems OK here, as losing it is not detrimental, from my understanding, since it's a read cache
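For reference, once a device is chosen, attaching a SLOG and L2ARC to an existing pool is a one-liner each. This is a sketch; the pool and device names are placeholders, and FreeNAS would normally do this through the GUI:

```shell
# Attach a SLOG (and optionally L2ARC) to an existing pool.
# "tank" and the nvd* device names are placeholders; check yours first.
zpool add tank log nvd0                  # single NVMe device as SLOG
# zpool add tank log mirror nvd0 nvd1    # mirrored SLOG, if you have two
zpool add tank cache nvd1                # L2ARC; losing it only costs cached reads
zpool status tank                        # confirm the "logs"/"cache" vdevs appear
```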
The goal will be to bring the new system up and stable, then start migrating data onto the new pool. As for offsite back-up: I know it's not the best system, but the PR4100 is going to be used as the offsite back-up solution.

Obviously open to suggestions and comments and I thank all those that have spent so much time helping others!

Cheers
 
Ok, I know my post was just a big info dump and not really any direct questions... apologies for that, as I was all excited about my first post...

So a couple updates and some real questions:
- Decided to run with the 900P for the SLOG for now, as I will have a rack-mount APC UPS with the NC card... it doesn't really meet the end-to-end or PLP requirements, but it's lower cost than the other Optane card, to start with and ensure the SLOG works as planned.

- After thinking about it more: should I just set up the 2U system as FreeNAS only and serve iSCSI to ESXi on the 1U system? I.e. the 1U system would run ESXi and the VMs, direct-connected via an SFP+ 10Gb/s fiber cable to the 2U FreeNAS system.
- My original thought was to have the 2U as ESXi with FreeNAS as a VM, with the LSI-3008 and SLOG passed directly to FreeNAS.
I know each will work, and if I do it as planned I would use NFS, as I'm not sure iSCSI would work if everything is on the same system? I'm sure it can be done, as it's just virtual NICs at that point, but I have not done it as of yet.

- If I do the first method of separating ESXi and FreeNAS, then the current hardware is utilized, but I'm not sure it is truly beneficial?
- Secondly, I did not realize ESXi Essentials does not have HA. I thought it did, as the current system was asking to set it up, but it's only a single box so it can't; then I reviewed the license again, and there's no way I'm paying a $5K upgrade to get HA... maybe just do the 2U and use the 1U as a means to learn KVM and abandon the VMware ***** in the future...
Again, thank you all, as I would never have gotten this far without all the knowledge here.
Cheers
 
Christmas came early:
[Image: Capture.JPG]

Rack delivers tomorrow; supposedly 80% sound reduction. It should be, as it weighs 269 lbs:
[Image: Capture2.JPG]
 

droeders

Contributor
Joined
Mar 21, 2016
Messages
179
Never heard of XRackPro until you posted this. Very curious to hear your thoughts on it, especially with respect to noise and cooling.
 
Never heard of XRackPro until you posted this. Very curious to hear your thoughts on it, especially with respect to noise and cooling.
Will let you know; delivery is delayed a day, as I had it shipped to a friend's shop since shipping to a residential address was insane.

I looked for some time under the so-called "soundproof rack" category, and this is the only one I could find that looked to have decent build quality and a price that wasn't too crazy, though this was way more than I wanted to spend on a rack. Figured if it is high quality then I'll have it for years; helps justify it in my mind anyhow ;)

I looked at many others, but prices were either double or more, and the total weight of the units was questionable per the descriptions.

What I like is that all the panels and doors fully seal, and there are two filters for the bottom air intake. What I don't like is that the rear fans run at 100% and have no temperature control. I currently have a 4-zone AC-Infinity controller in my homemade rack and will be looking to do some retrofitting.

Will let you know and will be providing some before and after rack setup pics.

Cheers...
 
Ok, so I was a little busy since the last post. All components are moved from the home-made rack to the new one. Not completely done yet, as I still have wire clean-up to do and the rack UPS to install (just a small unit being used for now). I was able to find a reconditioned APC SRT2200RMXL-NC on Amazon at half cost; it seems to be in as-new condition. The Fractal components will move to a 1U chassis once the new 2U has been running stable for a few weeks.

Ran stress tests for the last week, plus full disk tests that took 19hrs to complete, so about 9.5min/TB. All is well here: at full system load (366W per IPMI), the highest temp was 51C at the CPU. At normal load I'm sitting at about 32-37C.

My current drive availability and planned use:
  • 2x 128GB SM SATADOMs
    • Not sure what I'm going to do with these now, as I've decided to add two of the 240GB SSDs to the SAS backplane for direct access by FreeNAS. Was going to use them for FreeNAS boot.
  • 2x Intel S4610 960GB SSDs
    • VMs and other storage for ESXi
  • 2x Intel S4610 240GB SSDs
    • Thinking of using just one of these for FreeNAS boot, as they are high-reliability drives. I think a mirror is probably overkill here, but I'm open to mirroring as well???
  • 10x Seagate 12TB
    • FreeNAS pool
  • 1x 900P
    • SLOG if needed, but 95% sure I will need it based on my planned ESXi and NFS usage
  • 1x Samsung Pro 512GB
    • Windows VM
Please chime in on other thoughts and/or recommendations.
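For context on the planned layout, a minimal sketch of the 10-wide Z2 pool creation follows. The device names (da0..da9, nvd0) are placeholders, and in practice FreeNAS builds this from the GUI using gptid labels rather than raw device names:

```shell
# Single RAIDZ2 vdev across the ten 12TB Exos drives (placeholder names).
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9
zpool add tank log nvd0   # the 900P as SLOG, if sync-write testing shows the need
zpool status tank
```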

droeders: The rack is fabulous, though I did change out the rear fans for 5V fans that I can speed-control based on temperature from my AC-Infinity four-zone controller. Even with the 2U server at full fan speed it is not that loud; you can hear it, but it does not drown out the room. Even during the system stress test the temps never went above the point where full fan speed was required, and it was still quiet; at normal loads it is whisper quiet.

Based on feedback on the HDD set-up, ESXi and FreeNAS will be installed this weekend; I've already done test loads and all is good so far.


Cheers all...
 

Attachments

  • Photo 19-11-04 20-25-00 1555.jpg
  • Photo 19-11-08 11-14-29 1573.jpg
  • Photo 19-11-09 21-54-09 1587.jpg
  • Photo 19-11-09 22-08-12 1588.jpg
I love this person:

Updated by Sisyphe - over 1 year ago 24116

I found a simple fix for this issue by adding the Optane 900P device ID to passthru.map :)
- ssh to ESXi
- edit /etc/vmware/passthru.map
- add following lines at the end of the file:
# Intel Optane 900P
8086 2700 d3d0 false
- restart hypervisor
I can now pass through the 900P to Freenas 11.1-U5 without issue:
[Image: optane900-freenas.png]

Enjoy!


Spent the afternoon dealing with the 900P pass-thru issue.

I used the ESXi Shell via Alt-F1, replacing edit with vi, and edited the VM's .vmx:
cd /vmfs/volumes/virtual_machine_datastore/virtual_machine_folder/
vi virtual_machine.vmx
pciPassthru0.msiEnabled = "FALSE"

Note: virtual_machine_datastore & virtual_machine_folder need to be changed if you've used custom names.

Also, ensure your 900P is in position 0 in your VM set-up...

It was Sisyphe's post that resolved my issue, as the pciPassthru0.msiEnabled = "FALSE" setting did not work on its own.
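Putting the two pieces of the workaround together as one sequence (the datastore path and VM name are placeholders, and appending with echo is just an alternative to editing passthru.map by hand):

```shell
# On the ESXi host, via SSH or the Alt-F1 shell.
# 1) Tell ESXi not to apply d3d0 power-state handling to the Optane 900P:
echo '# Intel Optane 900P'   >> /etc/vmware/passthru.map
echo '8086 2700 d3d0 false'  >> /etc/vmware/passthru.map
# 2) In the VM's .vmx (placeholder path), disable MSI for the passthrough slot:
#      pciPassthru0.msiEnabled = "FALSE"
vi /vmfs/volumes/datastore1/freenas/freenas.vmx
# 3) Reboot the hypervisor, then power the VM back on.
reboot
```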

Building pools now :)

cheers,
 

Attachments

  • Capture.JPG
Ok, finally had time to start back in on the system.

Have the pool built and am testing now with the current pool. Will keep changing it until the performance is to my liking / I receive feedback on performance.
Pool:
[Image: pool.JPG]


This is the largest system I've ever run, so any feedback on these initial basic performance numbers would be great:
[Image: Speed_test_2.JPG]


Plan to use the Intel NAS tests unless there are other suggestions?
 

Rand

Guru
Joined
Dec 30, 2013
Messages
906
These perf values don't tell much if you don't specify what your pool layout / dataset settings are...;
to me it looks like you have compression enabled (the default), so all the zeroes from /dev/zero get compressed nicely ;)
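Rand's point about /dev/zero is easy to demonstrate locally: zeroes compress to almost nothing, so writing them to an lz4-compressed dataset inflates the reported throughput. A quick illustration using gzip as a stand-in for the pool's compressor (temporary files only, nothing touches a pool):

```shell
# Show why /dev/zero is a poor benchmark source on a compressed dataset:
# 16 MB of zeroes shrinks to a few KB, 16 MB of random data barely shrinks.
tmp=$(mktemp -d)
dd if=/dev/zero    of="$tmp/zero.bin" bs=1M count=16 2>/dev/null
dd if=/dev/urandom of="$tmp/rand.bin" bs=1M count=16 2>/dev/null
gzip -k "$tmp/zero.bin" "$tmp/rand.bin"
ls -l "$tmp"/*.gz    # zero.bin.gz is tiny; rand.bin.gz stays close to 16 MB
# For a realistic pool benchmark, write incompressible data (fio, /dev/urandom),
# or temporarily disable compression on a scratch dataset.
rm -r "$tmp"
```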
 