TrueNAS 12.0 Features

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,737
I imagine once we get TrueNAS Core 12 with KVM those small shops could just migrate to KVM and be done with VMware.
How do you imagine this to come about? KVM is strictly Linux-based, while TrueNAS 12 will continue to be based on FreeBSD. We could see bhyve improvements, though.

Kind regards,
Patrick
 

JoeAtWork

Contributor
Joined
Aug 20, 2018
Messages
165
How do you imagine this to come about?

I assume we will have the TrueNAS Core skin on top of a Linux-based OS, and then we can run KVM...

If VMware ran md or ZFS, I think that might be enough for some users. Too bad VMware did not have loadable modules like old NetWare had. We could have FreeNAS as an NLM. LOL
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
I assume we will have the TrueNAS Core skin

Yup, TrueNAS Scale. Which is, reading the tea leaves, more than just TrueNAS on Linux - it's TrueNAS on Linux with Gluster for scale-out storage. Scale-up is something TrueNAS/ZFS already does great; scale-out is something it doesn't do at all.

I am wondering whether TrueNAS Scale could be a hyperconverged solution. Have storage nodes, compute nodes, and it all comes together as one big TrueNAS cluster. Very keen to see this in action.
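
For a flavour of what scale-out means mechanically, here is a rough sketch of how a replicated Gluster volume is typically assembled from per-node bricks. The hostnames, brick paths and volume name are made up, and whether TrueNAS Scale will expose any of this directly is pure speculation on my part:

# Rough sketch only: the usual Gluster CLI steps for a three-node replicated
# volume, driven from Python. Hostnames, brick paths and the volume name are
# placeholders; this says nothing about how TrueNAS Scale will actually wrap Gluster.
import subprocess

NODES = ["node1", "node2", "node3"]   # hypothetical storage nodes
BRICK = "/data/brick1"                # hypothetical brick path on each node
VOLUME = "tank-scaleout"              # hypothetical volume name

def run(*cmd: str) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Join the nodes into one trusted storage pool (run from node1).
for peer in NODES[1:]:
    run("gluster", "peer", "probe", peer)

# Create a volume that keeps one replica of the data on each node, then start it.
bricks = [f"{node}:{BRICK}" for node in NODES]
run("gluster", "volume", "create", VOLUME, "replica", "3", *bricks)
run("gluster", "volume", "start", VOLUME)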
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,691
@morganL I too would find it very interesting if there were a TrueNAS license that permitted the advanced features on my own hardware. Getting hold of iXsystems gear in Germany is not easy, between stocking spare parts and everything else. So if there were an "approved" list of e.g. Supermicro systems and the option to buy a license from you, that would simplify things a lot, since we could use our established distribution channels.

I see the added time and effort to support externally supplied hardware, of course. So possibly it is simply out of the question due to capacity constraints on iX's part.

Kind regards
Patrick

We currently haven't run TrueNAS on Supermicro hardware with HA. If it's not HA, are there any specific features you need?
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,691
Yup, TrueNAS Scale. Which is, reading the tea leaves, more than just TrueNAS on Linux - it's TrueNAS on Linux with Gluster for scale-out storage. Scale-up is something TrueNAS/ZFS already does great; scale-out is something it doesn't do at all.

I am wondering whether TrueNAS Scale could be a hyperconverged solution. Have storage nodes, compute nodes, and it all comes together as one big TrueNAS cluster. Very keen to see this in action.

What a great idea, perhaps we should whip something up :smile: Do you want to contribute to the TrueNAS Suggestions page? https://www.ixsystems.com/blog/truenas-bugs-and-suggestions/
 

Rand

Guru
Joined
Dec 30, 2013
Messages
906
Probably an edge use case, but according to the feature matrix NVDIMM support is an Enterprise feature, while it happily runs on my current FreeNAS box.

Will that continue to be the case (available by simply loading the proper module), or will this change? Or maybe you could elaborate on the NVDIMM functionality that is (or will be) in Enterprise but not in Core?
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,691
Probably an edge use case, but according to the feature matrix NVDIMM support is an Enterprise feature, while it happily runs on my current FreeNAS box.

Will that continue to be the case (available by simply loading the proper module), or will this change? Or maybe you could elaborate on the NVDIMM functionality that is (or will be) in Enterprise but not in Core?

Good to know it works. Anything that appears as a drive in FreeBSD 11.3 should work. In the Enterprise version, the NVDIMMs can be managed and paired across two HA nodes.
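
Just to illustrate what "appears as a drive" means in practice (placeholder names, nothing official): if the NVDIMM shows up as an ordinary block device, attaching it as a SLOG is plain zpool handling.

# Rough sketch with assumed names, not a supported procedure: a device-backed
# NVDIMM can be added as a dedicated log (SLOG) vdev like any other device.
import subprocess

NVDIMM_DEV = "/dev/pmem0"   # placeholder for whatever device node the NVDIMM driver exposes
POOL = "tank"               # placeholder pool name

# Add the device as a SLOG vdev to the pool.
subprocess.run(["zpool", "add", POOL, "log", NVDIMM_DEV], check=True)

# Confirm the log vdev now shows up in the pool layout.
subprocess.run(["zpool", "status", POOL], check=True)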
 

Rand

Guru
Joined
Dec 30, 2013
Messages
906
Ok, good to know, thanks :)

Of course I wouldn't mind having some of the manageability (monitoring) capabilities from Enterprise - software only of course, and no support :)
 

Rand

Guru
Joined
Dec 30, 2013
Messages
906
Also, I created a clone of my RDMA support for FreeNAS/TrueNAS ticket from earlier this year - the old one was not votable because it had been closed.
If you are interested in this functionality, please express your thoughts there.


Sorry if this is not the appropriate place, but I figured it would fit here since this will land in 12 at the earliest, I assume.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,691
Also, I created a clone of my RDMA support for FreeNAS/TrueNAS ticket from earlier this year - the old one was not votable because it had been closed.
If you are interested in this functionality, please express your thoughts there.


Sorry if this is not the appropriate place, but I figured it would fit here since this will land in 12 at the earliest, I assume.

Rand, I'd suggest adding to any suggestion a brief write-up of why you want the specific feature. I assume you need RDMA to solve a performance issue? It's useful for us to know what the specific issue is, so we can determine whether that issue is common and whether we think RDMA is the best/easiest solution. Implementing iSER/RDMA may be very NIC-dependent, so understanding the hardware setup you have is also useful. TrueNAS 12.0 is already feature-complete, so most major future changes will only happen in later releases.
 

Rand

Guru
Joined
Dec 30, 2013
Messages
906
Sure, will do, although it's not something that only I personally have an issue with. The point is that with today's super-fast pools (dozens of SAS3 or NVMe drives) and the cheap availability of 40G networking (old Mellanox hardware), the gap between local pool performance and remotely accessible speed keeps widening because of network-induced losses (mostly added latency).
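
To put a rough number on it, a back-of-the-envelope illustration (all figures below are assumptions, not measurements): at queue depth 1, every added round trip of network latency directly caps achievable IOPS, which is exactly the gap RDMA is supposed to shrink.

# Back-of-the-envelope illustration of how added network latency caps IOPS.
# All numbers are assumptions chosen for illustration, not measurements.
local_latency_s = 100e-6   # assumed local NVMe/SAS3 service time: 100 microseconds
network_rtt_s = 200e-6     # assumed extra round trip over a TCP/IP SAN: 200 microseconds
queue_depth = 1            # single outstanding I/O (worst case, e.g. sync writes)

def iops(latency_s: float, qd: int) -> float:
    """At a fixed queue depth, IOPS is bounded by qd / per-I/O latency."""
    return qd / latency_s

local = iops(local_latency_s, queue_depth)
remote = iops(local_latency_s + network_rtt_s, queue_depth)

print(f"local:  {local:,.0f} IOPS")
print(f"remote: {remote:,.0f} IOPS ({remote / local:.0%} of local)")
# With these assumed numbers the remote path delivers only about a third of the
# local IOPS at queue depth 1 - the loss that RDMA/iSER aims to reduce.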
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
I think RDMA is only part of the puzzle. More broadly, it’s about supporting NVMe over Fabrics. vSphere 7 just introduced support for this, with two fabrics initially supported: FC and RoCE. The latter is “RDMA over Converged Ethernet v2”; that’s the part being asked for here.

The idea is to avoid the latency-inducing NVMe to SCSI translations.

I do not see how this is relevant to SAS3 arrays.

An article about configuring this functionality in vSphere 7:


An older article speaking about “why RDMA”: http://storagegaga.com/the-rise-of-rdma/

The use case, then: all-flash NVMe arrays backing a vSphere 7 server with low latency, using NVMe-oF over an underlying FC or RoCE transport.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,691
Yorick, do you see NVMe-oF being useful where the backend storage is actually on ZFS? It's easier to understand iSER, but it's not clear how much faster it would be.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
RDMA over Converged Ethernet
Remote DMA over Converged Ethernet
Remote Direct Memory Access over Converged Ethernet

Still doesn't beat eSATAp:
eSATA powered
external SATA powered
external Serial ATA powered
external Serial AT Attachment powered
And who knows what the hell AT is supposed to stand for.

I do not see how this is relevant to SAS3 arrays.
I'm not in the high-performance storage networking loop, but I'm wondering to what extent a ZFS pool, even if backed by "slow" storage, might benefit from being accessible using NVMe instead of SCSI as the abstraction.
 

Alecmascot

Guru
Joined
Mar 18, 2014
Messages
1,175

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
That's one option, but the party line apparently was that it didn't mean anything.
 

Alecmascot

Guru
Joined
Mar 18, 2014
Messages
1,175
https://www.intel.com/content/www/us/en/io/serial-ata/serial-ata-developer.html

"The "AT Attachment" (ATA) name originated after the 1984 release of the IBM Personal Computer AT, more commonly known as the IBM AT. The IBM AT's controller interface became a de facto industry interface for the inclusion of hard disks. "AT" was IBM's abbreviation for "Advanced Technology"; thus, many companies and organizations indicate SATA is an abbreviation of "Serial Advanced Technology Attachment". However, the ATA specifications simply use the name "AT Attachment", to avoid possible trademark issues with IBM."
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Yorick, Do you see NVME-oF being useful where the backend storage is actually on ZFS?

I don’t have the expertise to answer that question, sorry.

From what I’ve read, the idea of NVMe-oF is to avoid the translation layer between SCSI and NVMe commands, thus reducing latency when the backend storage is addressed using NVMe.

iSER similarly seeks to reduce latency, also uses RoCE, and is targeted towards backend storage that’s addressed with SCSI, so that’s your SAS3 use case.

If run over Ethernet, both would benefit from “Converged Ethernet” (DCB, Data Center Bridging), so that the overhead of TCP isn’t needed. That requires switches that support DCB.

vSphere can do either: iSER since 6.7, and NVMe-oF since 7.0.

I’m sure your performance-minded customers will let you know which they prefer: iSER towards SCSI storage, or NVMe-oF towards NVMe storage.

I’m just googling things over here.
 