So I recently decided it was time to retire the Dell C2100/FS12-TY build and make a new one from scratch. This is an HCI build (https://en.wikipedia.org/wiki/Hyper-converged_infrastructure). I've been doing burn-in for over a month and it has been rock solid. Before I move it into production, I wanted to ask here if anyone has anything they'd like tested on a setup like this before they potentially make a similar investment/build. I can't promise I will do everything asked, but I think some of you might have some fun stuff I can demonstrate.
Software:
- ESXi 6.5.0 (5310538)
- FreeNAS-11.0-RELEASE (a2dc21583) - virtual machine
- Windows 10 Creators Update - virtual machine
- Ubuntu 16.04 LTS - virtual machine hosting Plex w/ HW transcoding
- pfSense 2.3.4 - virtual machine
Hardware:
- Supermicro X11SSH-CTF w/ BIOS 2.0a
- 64GB Kingston DDR4 2133MHz ECC Unbuffered RAM (4x16GB)
- Intel Xeon E3-1275 v6 (Kaby Lake)
- 8x Western Digital Red 8TB (WD80EFZX)
- 400GB Intel DC P3700 NVMe SSD
- 512GB Samsung 850 Pro SATA SSD
- 40GB Intel DC S3700 SATA SSD
ESXi host:
- Intel DC P3700 has PCI passthrough enabled and is attached to the FreeNAS VM
- Intel DC S3700 has been turned into an RDM via the ESXi command line and attached to the FreeNAS VM (see the sketch after this list)
- Intel HD Graphics P630 iGPU has PCI passthrough enabled and is attached to the Ubuntu VM for Plex hardware transcoding (once it's fixed; still in beta)
- LSI 3008 HBA has PCI passthrough enabled and is attached to the FreeNAS VM
- Booting from the Samsung 850 Pro
- The rest of the Samsung 850 Pro is used as a datastore
- 2 vSwitches configured (storage & LAN)
- An NFS v3 datastore is configured and hosted by FreeNAS via the storage network
- An iSCSI LUN backed datastore is configured and hosted by FreeNAS via the storage network
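For anyone replicating the ESXi side, here's a minimal sketch of the RDM creation, the jumbo-frame vSwitch setting, and the NFS datastore mount from the ESXi shell. The device path, vSwitch name, IP, and share path below are placeholders, not my exact values:

```
# Virtual-mode RDM for the S3700 (virtual mode is why SMART isn't visible).
# Find your device path with: ls /vmfs/devices/disks/
vmkfstools -r /vmfs/devices/disks/t10.ATA_____INTEL_SSDSC2BA400G3____example \
    /vmfs/volumes/datastore1/freenas/s3700-rdm.vmdk

# Jumbo frames on the dedicated storage vSwitch
esxcli network vswitch standard set -v vSwitch1 -m 9000

# Mount the FreeNAS-hosted NFS v3 datastore over the storage network
esxcli storage nfs add --host=10.0.0.2 --share=/mnt/tank/vmware --volume-name=freenas-nfs
```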
FreeNAS VM:
- Booting from a vmdk on the Samsung 850 Pro datastore
- LSI 3008 (via passthrough) as the HBA for the WD Reds
- Intel P3700 (via passthrough) for SLOG
- Intel S3700 as RDM to the VM for L2ARC
- RAID-Z2 configured on the 8 WD Reds with "sync=always" (pool sketch after this list)
- 32GB RAM provisioned
- 2 vCPUs
- vNIC for dedicated HCI storage network (no egress) with MTU 9000
- vNIC for on LAN protocol access with MTU 1500
- iSCSI, SMB, NFS v3 enabled
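I built the pool through the FreeNAS UI, but a rough command-line equivalent of the layout would look like this (pool and device names are illustrative; on FreeBSD the passthrough NVMe shows up as nvd0 and the disks as da*):

```
# 8-wide RAID-Z2 on the WD Reds
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7

# P3700 (PCI passthrough) as SLOG, S3700 (RDM) as L2ARC
zpool add tank log nvd0
zpool add tank cache da8

# Force synchronous writes for everything, including the NFS/iSCSI VM traffic
zfs set sync=always tank
```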
pfSense VM:
- 1 vCPU
- 512MB RAM
- 2 vNICs (LAN, WAN)
Ubuntu VM:
- 4 vCPUs
- 8GB RAM
- Intel iGPU drivers installed from 01.org (hardware transcoding tested and working)
- NFS mounting the "media" dataset for Plex from FreeNAS (mount sketch below)
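For reference, the Plex media mount on the Ubuntu VM amounts to something like this (IP and paths are placeholders):

```
# One-off mount of the FreeNAS "media" dataset over NFS v3
sudo mkdir -p /mnt/media
sudo mount -t nfs -o vers=3 10.0.0.2:/mnt/tank/media /mnt/media

# Or persist it in /etc/fstab:
# 10.0.0.2:/mnt/tank/media  /mnt/media  nfs  vers=3,noatime  0  0
```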
Windows 10 VM:
- 4 vCPUs
- 8GB RAM
- VMDK on ESXi NFS datastore attached
- VMDK on ESXi iSCSI datastore attached
- iSCSI LUN attached directly from FreeNAS via the storage network
- SMB v3 share mounted directly from FreeNAS via the storage network
Notes:
- Making the Intel DC S3700 into an RDM avoids passing through the entire SATA controller on the motherboard and removes the overhead of VMFS from the SSD. Since an L2ARC device can be lost with no data loss, using it this way is an accepted risk: the pool's data is safe even if the SSD fails without warning (the RDM limits SMART visibility).
- I realize the Intel DC P3700 is being used as a SLOG without a mirror. That will be fixed, since I am paranoid in general about data integrity (see the attach sketch after these notes). Right now the only risk is if the SLOG fails and the system crashes at nearly the same time. As I understand it, in the event of a SLOG failure the system should simply revert to the on-disk ZIL for subsequent write flushes. This is the smallest size the P3700 came in, and even so it's heavily over-provisioned, which should only help it be more reliable in the long run.
- I have passed the iGPU through to both Windows and Linux separately without any obvious issues. I tested hardware transcoding in Plex on both setups to compare speeds and loads, but the current Plex beta that supports the feature has some showstopper bugs that prevent me from recommending it to anyone yet. When it works, it works well, and the drop in CPU overhead is dramatic, especially with HEVC/H.265 content. (Verification sketch after these notes.)
- The Samsung 850 Pro SSD is being used to boot ESXi and to host the FreeNAS boot vmdk right now. I have also configured it for VM swap should the need ever arise but I could have certainly gone a different route for that piece of the system. I just happened to have it available on the test bench so I put it to use.
- The Windows VM has been my primary benchmarking rig. That's why it has so many types of storage attached.
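When the second P3700 arrives, converting the lone SLOG into a mirror should be a one-liner. A sketch with illustrative device names (nvd0 is the current SLOG, nvd1 the new drive):

```
# Attach a second device to the existing log vdev, turning it into a mirror
zpool attach tank nvd0 nvd1
```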
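For the iGPU passthrough, this is roughly how I'd sanity-check the Ubuntu VM before pointing Plex at it (vainfo comes from the vainfo package on 16.04):

```
# The passthrough GPU should enumerate, and a render node should exist
lspci | grep -i vga
ls -l /dev/dri/

# VA-API should report decode/encode entrypoints for the P630
vainfo
```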
Benchmarks:
- Write speeds with various benchmarks are showing 250-600MB/sec depending on the I/O pattern and protocol
- Read speeds vary from 450-1200MB/sec depending on the I/O pattern and protocol
- iSCSI vmdks have been impressive on this rig. Passmark 9.0 Disk Mark "very long" tests (2 iterations) are showing results of over 5300 (compared to 4400 on NFS vmdks). For comparison, a Samsung 840 EVO direct attached gets ~4400 and a Samsung 950 Pro direct attached gets ~7500.
- The WD Reds are coasting. Most of the testing I have done has not pushed any individual drive past 55MB/sec. I'm glad I went with the cooler/quieter 5400 RPM models.
- I have tested datasets twice the size of the FreeNAS VM's RAM (to push past the ARC) and the results were still similar to what is stated above (fio sketch below).
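If anyone wants to reproduce the larger-than-ARC test, something like this fio run from a VM against FreeNAS-backed storage is one way to do it (paths and sizes are illustrative; 64GiB is double the FreeNAS VM's 32GB):

```
# Sequential write, then read, of a dataset twice the ARC ceiling
fio --name=arc-buster --directory=/mnt/media/bench --size=64g \
    --rw=write --bs=1m --ioengine=libaio --direct=1
fio --name=arc-buster --directory=/mnt/media/bench --size=64g \
    --rw=read --bs=1m --ioengine=libaio --direct=1
```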
Is there anything you all want to know or want me to try?