
TrueNAS 12.0 Features

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
6,301
FYI to our loyal alpha testers. Bugs ahoy. I'm merging in code to add a kqueue-based libtevent backend to Samba.

vfs_aio_fbsd is being changed from having its own dedicated kqueue to submitting AIO kevents directly using tevent_add_fd(). This will reduce some memory allocations and locking. The change will only impact shares with the aio_fbsd vfs object (check "testparm -s" output). I will do more thorough testing in the coming days. If you run into an issue with it, feel free to ping me.
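For anyone who wants to check which of their shares currently load aio_fbsd, a quick filter over `testparm -s` output does the trick. The sketch below runs against an embedded sample config so it's self-contained; in practice you'd pipe the real `testparm -s 2>/dev/null` into the same awk filter (the share names here are made up):

```shell
# Illustrative stand-in for `testparm -s 2>/dev/null` output.
sample_config='[global]
	server min protocol = SMB2_02

[tank]
	path = /mnt/tank
	vfs objects = zfs_space zfsacl aio_fbsd streams_xattr

[scratch]
	path = /mnt/scratch
	vfs objects = zfsacl streams_xattr'

# Print the section header of every share whose "vfs objects" line
# mentions aio_fbsd; pipe real testparm output in instead of the sample.
printf '%s\n' "$sample_config" | awk '
	/^\[/ { share = $0 }
	/vfs objects/ && /aio_fbsd/ { print share }
'
```

For the sample above this prints `[tank]`, the only share with aio_fbsd in its VFS stack.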
 

VladTepes

Member
Joined
May 18, 2016
Messages
244
I've read in the timeline that the TrueNAS 12.0 software release won't be until October-ish, but I was wondering if the TrueNAS 12.0 manual exists in some form so I can read up on it?
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
6,301
FYI to our loyal alpha testers. Bugs ahoy. I'm merging in code to add a kqueue-based libtevent backend to samba.

vfs_aio_fbsd is being changed from having its own dedicated kqueue to submitting AIO kevents directly using tevent_add_fd(). This will reduce some memory allocations and locking. The change will only impact shares with the aio_fbsd vfs object (check "testparm -s" output). I will do more thorough testing in the coming days. If you run into an issue with it, feel free to ping me.
It appears there may be an issue with aio_fsync() in Samba right now, which will impact macOS clients' ability to write over SMB. If you encounter an issue with Macs on 12.0, either override the vfs objects with an auxiliary parameter (to remove aio_fbsd), set "strict sync = no" on the share, or roll back to the previous nightly.
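For anyone who wants to apply either workaround by hand, the share-level parameters would look roughly like this. The share name and VFS object list are illustrative; copy your share's actual "vfs objects" line from `testparm -s` and just drop aio_fbsd, and use only one of the two workarounds:

```
[data]
    path = /mnt/tank/data
    # Workaround 1: override the VFS stack without aio_fbsd
    vfs objects = zfs_space zfsacl streams_xattr
    # Workaround 2 (alternative): leave the VFS stack alone
    # and disable strict syncing instead
    strict sync = no
```

On FreeNAS/TrueNAS these lines belong in the share's Auxiliary Parameters field rather than directly in smb.conf, since the middleware regenerates the config file.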
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
54
I've read in the timeline that the TrueNAS 12.0 software release won't be until October-ish, but I was wondering if the TrueNAS 12.0 manual exists in some form so I can read up on it?
The FreeNAS 11.3 User Guide is very similar. The easiest solution would be to run the nightly release in a VM and explore the new capabilities. New docs will be coming with the BETA release on June 30.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
54
I wonder if we could please get a clarification about HA.

It'll be easier to just paste here what I just posted on Reddit,
thanks in advance:
-------------------------
Hi,
I guess so far we can't "officially" enjoy HA (high availability) unless we buy some TrueNAS hardware from iXsystems.
I wonder whether this is going to change with the release of version 12.

Please don't get me wrong: I'll be more than happy to pay for an HA feature/service, even on a subscription model, knowing that I'm helping iXsystems keep improving their open NAS solution.

My only concern is that I can't afford a TrueNAS system but I still consider my hardware sufficient for implementing an HA solution (and I can't create/hack a solution of my own because of my very limited skills, so here's revealed the reason behind this post :)

I truly understand that we already have an army of people running FreeNAS on unsupported/less-than-ideal/discouraged platforms who then demand help/support, which most of the time ends with them blaming the whole community for being toxic if they don't get the answer they wanted. I also understand that HA principles/requirements must be very tight/strict to avoid reliability issues with all the layers/mechanisms involved.

But from my little research I think there are plenty of people who have requested it in the past, often quoting the competing Synology HA offer (which requires only two identical "HA-supporting" models).
I know we can already solidly/reliably replicate our NAS to a 2nd/3rd box (rsync & ZFS send), but I also know iXsystems must already have an HA solution in place that more users could enjoy ;)

I'm pretty sure that addressing my itch will shift solution/design overheads from users to iXsystems engineers but I'm hoping to not be alone in willing to pay for it.
That also sparks the question: why aren't we buying a TrueNAS solution instead of whinging?
In my case it's unaffordable, but maybe that's really what iXsystems wants, to keep average/regular users from demanding time-consuming support for unrealistic hardware :)

In the end I guess I'm probably happier paying a subscription to you guys rather than paying for/investing in Synology products.

What do you guys think?
Thanks

PS: I'm not just interested in HA for shortening downtime on services, but mainly for creating an integrated (or maybe the word is hyperconverged) and fault-tolerant system.

PPS: I've also posted something related on the Proxmox sub:
Horizonbrave ... you anticipated the TrueNAS SCALE project by a few weeks. Check it out. https://www.ixsystems.com/community/threads/truenas-scale-project-start.85211/
 

cookiesowns

Member
Joined
Jun 8, 2014
Messages
31
@morganL

Can you comment on standards-compliant NVDIMM (ACPI 6.0 NVDIMM-N) support in TrueNAS CORE? NVDIMM is listed as a TrueNAS-only feature, but, as seconded by many here, it'd be great to have an NFR/lab license or subscription for non-production workloads in the lab.

I personally have an R740xd loaded with NVDIMMs that are ready to go, but it will be a bummer if I have to switch to ZoL+Debian to use NVDIMM DAX as SLOG instead of sticking with TrueNAS CORE.


Yorick, do you see NVMe-oF being useful where the backend storage is actually on ZFS? It's easier to understand iSER, but it's not clear how much faster it would be.
Absolutely, granted you'd want to do tuning, but the benefit of RDMA is reducing the overhead of memory copies when doing storage I/O, along with being able to use DCB/RoCE to help control the network portion of your environment.

All in all, NVMe-oF is the way to go; while the ZFS overhead will reduce its pure benefits, reducing overhead in the end-to-end stack is always beneficial.

However, SLOG enhancements or async CoW need to hit mainline first, as well as polling support, before NVMe-oF will really shine, IMHO.

I've experimented with a pure NVMe array on ZFS and the results were interesting. I'm sure with NVMe-oF I'd see better results, but it's not bad by any means. Just the enhancements to ZFS CoW in 12 already gave me quite a bit more when comparing to 11.3.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
54
@morganL

Can you comment on standards-compliant NVDIMM (ACPI 6.0 NVDIMM-N) support in TrueNAS CORE? NVDIMM is listed as a TrueNAS-only feature, but, as seconded by many here, it'd be great to have an NFR/lab license or subscription for non-production workloads in the lab.

I personally have an R740xd loaded with NVDIMMs that are ready to go, but it will be a bummer if I have to switch to ZoL+Debian to use NVDIMM DAX as SLOG instead of sticking with TrueNAS CORE.

Absolutely, granted you'd want to do tuning, but the benefit of RDMA is reducing the overhead of memory copies when doing storage I/O, along with being able to use DCB/RoCE to help control the network portion of your environment.

All in all, NVMe-oF is the way to go; while the ZFS overhead will reduce its pure benefits, reducing overhead in the end-to-end stack is always beneficial.

However, SLOG enhancements or async CoW need to hit mainline first, as well as polling support, before NVMe-oF will really shine, IMHO.

I've experimented with a pure NVMe array on ZFS and the results were interesting. I'm sure with NVMe-oF I'd see better results, but it's not bad by any means. Just the enhancements to ZFS CoW in 12 already gave me quite a bit more when comparing to 11.3.
If you have a system, please test it out. It's really a question of whether FreeBSD makes the NVDIMM available as a drive to TrueNAS CORE. There is nothing that disables the standard FreeBSD capabilities, so I don't know of a reason it would not work.

For TrueNAS Enterprise, there is support for specific NVDIMMs and for mirroring them between two controllers via PCIe. This is needed for HA and high performance.
 

Ericloewe

Not-very-passive-but-aggressive
Moderator
Joined
Feb 15, 2014
Messages
16,248
NVDIMMs show up in FreeBSD with a device handle for the DIMM and one for the GEOM provider. The latter should just work like any other mass storage device.
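If anyone wants to try it, the manual route would look roughly like the sketch below. The device and pool names are hypothetical, and the exact provider name the nvdimm driver exposes may differ by FreeBSD version, so treat this as something to verify against `geom disk list` on your own box, not a recipe:

```sh
# List GEOM providers and find the NVDIMM-backed one
# (the name below is illustrative).
geom disk list

# Attach the provider as a dedicated log (SLOG) device on an
# existing pool -- ideally mirrored if you have two NVDIMMs:
zpool add tank log /dev/nvdimm_spa0
```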
 

cookiesowns

Member
Joined
Jun 8, 2014
Messages
31
NVDIMMs show up in FreeBSD with a device handle for the DIMM and one for the GEOM provider. The latter should just work like any other mass storage device.
The problem is that the GUI doesn't enumerate the NVDIMM devices for use.


If you have a system, please test it out. It's really a question of whether FreeBSD makes the NVDIMM available as a drive to TrueNAS CORE. There is nothing that disables the standard FreeBSD capabilities, so I don't know of a reason it would not work.

For TrueNAS Enterprise, there is support for specific NVDIMMs and for mirroring them between two controllers via PCIe. This is needed for HA and high performance.
Ah, gotcha! I figure it might just be a matter of adding UI support for the NVDIMM devices, then, once they're enumerated by FreeBSD 12.0.
 

swk

Newbie
Joined
Jun 12, 2020
Messages
2
It appears like there may be an issue with aio_fsync() in samba right now, which will impact MacOS clients ability to write over SMB. If you encounter an issue with Macs on 12.0, either override the vfs objects with an auxiliary parameter (to remove aio_fbsd), set "strict sync=no" on the share, or roll back to the previous nightly.
I am on Win10 build 2004 and using the SMB3_11 protocol. With aio_fbsd enabled, general copy, read, and write operations using File Explorer work well, but simple git operations (pull, clone, fetch) fail; worse, Win10 File Explorer hangs until git exits.

So I had to remove aio_fbsd or set strict sync = no.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
6,301
I am on Win10 build 2004 and using the SMB3_11 protocol. With aio_fbsd enabled, general copy, read, and write operations using File Explorer work well, but simple git operations (pull, clone, fetch) fail; worse, Win10 File Explorer hangs until git exits.

So I had to remove aio_fbsd or set strict sync = no.
In this case Windows is sending an SMB2_FLUSH, which causes an aio_fsync() call. I'm still working on the libtevent_kqueue changes at the moment (I will probably merge on Monday).
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
6,301
I am on Win10 build 2004 and using the SMB3_11 protocol. With aio_fbsd enabled, general copy, read, and write operations using File Explorer work well, but simple git operations (pull, clone, fetch) fail; worse, Win10 File Explorer hangs until git exits.

So I had to remove aio_fbsd or set strict sync = no.
I fixed the issue with fsync in the latest nightly TrueNAS-12.0-MASTER-202006170424 (3e0fd0860).
 

bmh.01

Member
Joined
Oct 4, 2013
Messages
50
I get that, and that is also what I said, but there is a difference between support and "locking out" a feature.

I mean most would be Qlogic QLE24xx (QLE25xx unless you want to keep that out as it's 8GB) and similar as well as Brocade. Many are sold dirt cheap as data centers moved to 16G+

One advantage for home or lab use is that you can get cheap QLE24xx cards and replace any local storage with that, having the full power of FreeNAS protection with snapshots and everything. You can't really boot Windows from iSCSI in a simple way.
You are going to have drivers for the QLE24xx and QLE25xx anyway, since some customers are going to use them.
@no_connection FYI, FC works perfectly well with FreeNAS, so I'd expect it to continue with CORE as well. You just have to manage the ctl configuration and load the isp driver yourself; I've written some startup/shutdown scripts to manage this and have been running FC in my home lab for a couple of years now. It's been rock solid. The only thing it really misses is LUN masking, but that's missing from ctl rather than FreeNAS/TrueNAS itself, and I've managed to achieve something similar with a mixture of NPIV and zoning.
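For anyone curious what "manage the ctl configuration and load the isp driver yourself" involves, a rough sketch is below. These are not bmh.01's actual scripts: the module names are just the standard isp(4)/ispfw(4) ones, and the ctladm invocations follow the ctladm(8) man page with made-up zvol and port values, so verify against your own hardware:

```sh
# /boot/loader.conf -- load the QLogic driver and firmware at boot:
#   isp_load="YES"
#   ispfw_load="YES"

# Create a CTL LUN backed by a zvol (path illustrative):
ctladm create -b block -o file=/dev/zvol/tank/fc-lun0

# Enable target mode on the FC port (port number illustrative;
# list ports first with `ctladm portlist`):
ctladm port -o on -p 5
```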
 

RegularJoe

Member
Joined
Aug 19, 2013
Messages
234
BMH,

With VMware we can do round robin so that all 4 FC ports get used to query the storage subsystem. Are you using VMware as the FC initiator? Are you seeing all your ports busy? I am working on a project right now at work to see if all 4 ports on FreeNAS and VMware get loaded like I can do with iSCSI and NFS 4.1.

Thanks,
Joe
 

bmh.01

Member
Joined
Oct 4, 2013
Messages
50
BMH,

With VMware we can do round robin so that all 4 FC ports get used to query the storage subsystem. Are you using VMware as the FC initiator? Are you seeing all your ports busy? I am working on a project right now at work to see if all 4 ports on FreeNAS and VMware get loaded like I can do with iSCSI and NFS 4.1.

Thanks,
Joe
Yes and yes. All paths are active with I/O over dual fabrics. I've had much better performance per cost with 4Gb and 8Gb FC than with iSCSI, due to the switching requirements for good iSCSI.
 

JoeAtWork

Member
Joined
Aug 20, 2018
Messages
53
Yes and yes. All paths are active with I/O over dual fabrics. I've had much better performance per cost with 4Gb and 8Gb FC than with iSCSI, due to the switching requirements for good iSCSI.
I hated EqualLogic iSCSI, as they and other vendors want to put all the targets in the same L2 subnet. I have seen some FreeNAS/FreeBSD users complain that on FC they only get 200 megabytes per second on each path. I also seem to remember there is an edge case where, if you are using a thin-provisioned VMware guest and the zvol is sparse, unmap is not automatic.
 

bmh.01

Member
Joined
Oct 4, 2013
Messages
50
I hated EqualLogic iSCSI, as they and other vendors want to put all the targets in the same L2 subnet. I have seen some FreeNAS/FreeBSD users complain that on FC they only get 200 megabytes per second on each path. I also seem to remember there is an edge case where, if you are using a thin-provisioned VMware guest and the zvol is sparse, unmap is not automatic.
Microbursting (and having switches with big enough buffers) has been the biggest issue I've had with iSCSI; anything this side of a Cisco 4948 has given me underwhelming performance. FC, not so.

As for the non-automatic unmap, I presume you're referring to the VMware limitation in vSphere < 6.5 where you can only unmap from esxcli; they fixed that (IIRC) from 6.7 onwards. There's a page in the datastore configuration that lets you enable automatic unmap and set its priority, and it works for me.
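For reference, on the older releases where it's manual, the space reclaim has to be kicked off per datastore from the host CLI. It looks something like this (the datastore name is made up, and options vary by vSphere version, so check `esxcli storage vmfs unmap --help` on your host):

```sh
# Manually reclaim deleted blocks on a VMFS datastore (vSphere 5.5-6.0 era)
esxcli storage vmfs unmap -l Datastore01
```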
 