BUILD X10SRH-CLN4F

Status
Not open for further replies.

datarimlens

Dabbler
Joined
May 3, 2014
Messages
12
Start of the new year, using a new Supermicro build (kind of a big brother to the X10SL7F):
  • X10SRH-CLN4F in SSG-5028R-E1CR12L Chassis.
  • 64GB Samsung DDR4 ECC Memory, the 32GB sticks recommended by SMicro.
  • E5-1650 v3, a hexacore CPU, affordable but swift
  • LSI 3008 flashed to IT mode from the EFI shell, using the P6 firmware from the Supermicro site.
  • IPMI interface for management and installations, using remote mounts.
  • ESXi 6 beta; should move to the ESXi 6 release as it becomes available.
  • Currently using cheap disks for testing; eventually I intend to go with Hitachi 7K6000 disks for permanent use as they become more broadly available. One can find failure statistics for Hitachi/WD/Seagate drives; they are interesting, although they need interpretation. Then there is of course the SAS/SATA debate, channel error rates, etc., not to forget T10 protection. The 3008 plus the right SAS disk could be helpful.
  • Testing includes: networking, logging, recovery (as in with and without virtualization, importing pools, etc.), power saving features, etc.
What for?
  • Intended use, given sufficient testing: scalable backend archive / AllInOne. I do quite a bit of searching/indexing/processing on my stored data and intend to do more, so the AllInOne side promised better performance than a networked/multi-machine setup, at lower cost and with more flexibility. Yes, I am aware of the "do not virtualize FreeNAS" mantra, therefore more testing ... and then some more testing ... and some more ...
How is the setup doing so far?
  • Likes: Most of the setup is pretty solid, including software, even the ESXi 6 RC beta.
  • Dislikes: Power supply sensor data over IPMI do not appear accurate enough to be useful up to at least 50% load. Some Supermicro parts do not fit as intended; the rear 2.5" disk holder, for example, prevents the air shroud from going back in.
A quick search did not turn up a post for this setup yet, except for a bug fix for a monitoring daemon, so I started this new thread. Feedback and discussion, especially from the X10SL7F AllInOne crowd (Robin&Co), are of course appreciated.
 

RXWatcher

Cadet
Joined
Nov 26, 2014
Messages
8
I am running a very similar build, except for the following: E5-1620 v3 (almost went with your 1650, but it was too expensive, and I figure I can replace it later as prices drop), 32GB of RAM, ESXi 5.5 with the latest LSI drivers incorporated, and an SC836E26-R1200 chassis.
I'm running into slow network performance on Ubuntu VMs... CentOS and FreeNAS VMs seem OK. I switched from the 'recommended' open-source tools to the VMware-provided tools, but that hasn't helped.

I'm about to go bare metal on an Ubuntu system to confirm it's ESXi, as I'm out of ideas for correcting this inside the VM.

Did you have to incorporate the LSI drivers into the ESXi 6 beta? I could give that a go to see if it fixes my issues. It would be awesome if it did.

-Jim
 

datarimlens

Dabbler
Joined
May 3, 2014
Messages
12
- What do you consider "slow"? "80+ MByte/s on a Gb network", or ...? On FreeNAS I see a significant difference between small files (a few KB) and large files (video), at least a factor of 10.
- The ESXi 6 did not get any additional driver install. It runs the 3008 out of the box.
- However, as recommended, I am running the 3008 as passthrough to the FreeNAS VM. How do you integrate the 3008 controller? Did you flash it to IT mode?
- The 64GB let me go with LRDIMMs vs. RDIMMs.
- The processors are almost the same.
- The SC836E26-R1200 is gen2 SAS, probably including the backplane, and 3U, vs. the SSG-5028R-E1CR12L, which is all gen3 SAS and includes the new-style SAS cabling and backplane. But you should not see a difference with a few disks.

Certainly I am curious what your bare metal tests will show.
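A quick way to put a number on "slow", independent of iperf or distro packaging, is a raw TCP throughput check. The sketch below is my own illustration, not something from the original posts: it measures loopback TCP throughput inside a guest, which exercises the guest's network stack without the vNIC; pointing the client at another machine's address instead would include the vNIC and host path.

```python
import socket
import threading
import time

def _sink(server_sock, counter):
    """Accept one connection and discard everything sent to it."""
    conn, _ = server_sock.accept()
    with conn:
        while True:
            data = conn.recv(1 << 16)
            if not data:
                break
            counter[0] += len(data)

def measure_loopback_throughput(total_bytes=32 * 1024 * 1024):
    """Return loopback TCP throughput in MB/s for total_bytes of traffic."""
    counter = [0]
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))  # let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]
    t = threading.Thread(target=_sink, args=(srv, counter))
    t.start()

    payload = b"\0" * (1 << 16)
    sent = 0
    start = time.monotonic()
    with socket.create_connection(("127.0.0.1", port)) as cli:
        while sent < total_bytes:
            cli.sendall(payload)
            sent += len(payload)
    t.join()          # wait until the sink has drained everything
    srv.close()
    elapsed = time.monotonic() - start
    return counter[0] / elapsed / 1e6  # MB/s

if __name__ == "__main__":
    print(f"{measure_loopback_throughput():.1f} MB/s")
```

If the loopback number is healthy inside a VM that is otherwise slow on the network, the problem is more likely in the vNIC/driver/tools layer than in the guest's TCP stack.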
 

RXWatcher

Cadet
Joined
Nov 26, 2014
Messages
8
Bare metal is/was fast. I had no issues pushing 100 Mbit/s.
On an Ubuntu VM with all the memory/CPU reserved for it and no other VMs running, I get maybe 10 Mbit/s, and I get timeouts on speed tests and even when fetching patches. I have recreated this many times over, always with fresh VMs.
A CentOS 7 VM pushes 100 Mbit/s and has no issues with anything.

It can't be the host config, because the CentOS and FreeNAS VMs do not have issues.

I am passing the LSI through to FreeNAS as well, with ESXi on USB and the VM stored on the Intel SATA controller. Lesson learned on that one: you can't use the BIOS RAID in ESXi.

It's kind of disappointing, because my plan was to have FreeNAS for the file shares and Ubuntu for things like SickBeard, etc. I wanted more control than running them on bare-metal FreeNAS.

I guess I'll dig deeper.

Would you be willing to fire up an Ubuntu VM and try even a simple speed test? I've tried both open-vm-tools and the VMware tools... no difference.

thanks!
 

RXWatcher

Cadet
Joined
Nov 26, 2014
Messages
8
Never mind... it seems Ubuntu 14.04.1 doesn't give me any issues, and that's without the tools being installed. Thanks anyway.
 

datarimlens

Dabbler
Joined
May 3, 2014
Messages
12
Glad to hear it. It sounded more like a network setup issue, maybe jumbo frames disabled or similar.
 

datarimlens

Dabbler
Joined
May 3, 2014
Messages
12
Follow-Up: Looking at the Hitachi 7K6000 drives, T10/T13 setups, etc. Those drives come in different interface flavors and formattings.
Here is an overview of the model number, HUS7260xxALyy1z:
xx is the capacity, e.g., 40 for 4TB.
AL is the generation code and drive height, L for 26.1mm.
yy is the interface, e.g., 42 for 4K-native and 52 for 512e at SAS 12Gb/s; there are codes for 6Gb/s as well.
1 is for the 128MB buffer.
z is for firmware features:
0 = instant secure erase
4 = secure erase (overwrite only)
1 = bulk data encryption / TCG SED for SATA/SAS
5 = TCG encryption with FIPS

Questions: Do 4K-native sectors work easily on FreeNAS 9.3? I did not see much posted in the forums. Most drives with 4K sectors are still sold as 512e. Are there performance/power differences between 4Kn and 512e?
Anyone using those 7K6000 drives yet? Which interface, and why?
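The decode table above can be captured in a tiny helper. This is just a sketch based on the fields listed in this post; interface and feature codes not mentioned above are deliberately left unmapped:

```python
def decode_hgst_model(model):
    """Decode a 7K6000 model string of the form HUS7260xxALyy1z.

    Only the codes listed in the post above are mapped; anything
    else comes back as 'unmapped'.
    """
    if not model.startswith("HUS7260") or len(model) != 15:
        raise ValueError("expected a 15-character HUS7260xxALyy1z string")
    capacity_tb = int(model[7:9]) / 10  # e.g. '40' -> 4.0 TB
    interface = {"42": "SAS 12Gb/s, 4Kn",
                 "52": "SAS 12Gb/s, 512e"}.get(model[11:13], "unmapped")
    features = {"0": "instant secure erase",
                "4": "secure erase (overwrite only)",
                "1": "bulk data encryption / TCG SED",
                "5": "TCG encryption with FIPS"}.get(model[14], "unmapped")
    return {"capacity_tb": capacity_tb,
            "interface": interface,
            "features": features}

# Example: a 4TB, 512e SAS12 drive with instant secure erase
print(decode_hgst_model("HUS726040AL5210"))
```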
 

Scampicfx

Contributor
Joined
Jul 4, 2016
Messages
125
Anyone using those 7K6000 drives yet? Which interface, and why?
Hey,
just wanted to ask if there are any updates on this question? Is anyone using HGST drives? I'm wondering whether the instant secure erase (ISE) technology might get you into trouble when used with ZFS. As far as I know, ISE encrypts all your content by default with a key that gets generated when you start the drive for the first time. The raw data is only "readable" with this key...
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Hey,
just wanted to ask if there are any updates on this question? Is anyone using HGST drives? I'm wondering whether the instant secure erase (ISE) technology might get you into trouble when used with ZFS. As far as I know, ISE encrypts all your content by default with a key that gets generated when you start the drive for the first time. The raw data is only "readable" with this key...
Why would that be a problem with ZFS specifically?

I mean, all the drive has to do is store the key somewhere on the platters and then erase it very well when asked to do so. Not much to go wrong. All data is scrambled to hell and back on HDDs anyway, to cram all those bits onto the disks.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Follow-Up: Looking at the Hitachi 7K6000 drives, T10/T13 setups, etc. Those drives come in different interface flavors and formattings.
Here is an overview of the model number, HUS7260xxALyy1z:
xx is the capacity, e.g., 40 for 4TB.
AL is the generation code and drive height, L for 26.1mm.
yy is the interface, e.g., 42 for 4K-native and 52 for 512e at SAS 12Gb/s; there are codes for 6Gb/s as well.
1 is for the 128MB buffer.
z is for firmware features:
0 = instant secure erase
4 = secure erase (overwrite only)
1 = bulk data encryption / TCG SED for SATA/SAS
5 = TCG encryption with FIPS

Questions: Do 4K-native sectors work easily on FreeNAS 9.3? I did not see much posted in the forums. Most drives with 4K sectors are still sold as 512e. Are there performance/power differences between 4Kn and 512e?
Anyone using those 7K6000 drives yet? Which interface, and why?

Yes, ZFS supports a property called ashift (alignment shift): 9 means 512-byte sectors, 12 means 4 KiB.

When creating a pool, ZFS/FreeNAS should detect the right ashift, but it's worth checking the pool after creation to make sure.
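The relationship is simply that ashift is log2 of the sector size, as the sketch below illustrates. Note that 512e drives are physically 4K, so ashift=12 is what you want for them too even though they report 512-byte logical sectors; after pool creation, something like `zdb -C <pool> | grep ashift` can confirm what was actually used.

```python
def ashift_for(sector_bytes):
    """ashift is log2(sector size): the pool's minimum allocation alignment."""
    shift = sector_bytes.bit_length() - 1
    if 1 << shift != sector_bytes:
        raise ValueError("sector size must be a power of two")
    return shift

# 512-byte sectors -> ashift 9; 4 KiB native (4Kn) -> ashift 12
print(ashift_for(512), ashift_for(4096))  # prints: 9 12
```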
 

TXAG26

Patron
Joined
Sep 20, 2013
Messages
310
OT: Supermicro posted new firmware on June 18, 2016 for the Avago/LSI 3008 SAS chip found on the X10SRH-CLN4F.
This version FINALLY pairs up with the driver version contained in the current version of FreeNAS 9.10.xxxx.
The P12 firmware is currently the correct match for the V13 driver in FreeNAS 9.10.

3008_FW_PH12.00.02.00.zip

ftp://ftp.supermicro.com/Driver/SAS/LSI/3008/Firmware/
 

TXAG26

Patron
Joined
Sep 20, 2013
Messages
310
I have not yet re-cabled my drives from the PCIe LSI 2308 SAS card to the onboard LSI 3008, but I did just finish UEFI-flashing the LSI 3008 to 3008_FW_PH12.00.02.00 (from Supermicro's FTP site) and am NOT seeing the firmware mismatch error when I log into FreeNAS!!!

I have two Adaptec 2280000-R cables in-bound (right angle mini-SAS to x4 straight SATA) and hope to get everything switched over to the LSI 3008 this weekend.

FYI - Adaptec also makes "straight mini-SAS to x4 straight SATA" and "straight mini-SAS to x4 right-angle SATA" versions of this cable.

The Adaptec I-rA-HDmSAS-4SATA-SB-.8M is an internal "right angle" Mini-SAS HD x4 (SFF-8643) to (4) x1 Serial ATA (adapter based) fan-out cable with sideband signals. It measures 0.8 meters and is used for connecting a Series 7/7Q adapter to SATA disks, or a SAS/SATA backplane.
 