We Want to Hear Your Ideas

Status
Not open for further replies.

Vazzer

Dabbler
Joined
Aug 23, 2013
Messages
10
Idea 6: Given dedupe requires generous resources (that I don't have), it would be quite interesting to be able to run some sort of report on duplicate data to give me an idea about how much space is wasted due to duplication.
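For reference, ZFS can already estimate this without enabling dedup: zdb can simulate a dedup table for an existing pool and report the projected ratio (the pool name below is a placeholder):

```shell
# Simulate dedup on an existing pool and print a block histogram plus the
# estimated dedup ratio - no dedup needs to be enabled. "tank" is a
# placeholder pool name; this can take a long time on large pools.
zdb -S tank
```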
 

Stilez

Guru
Joined
Apr 8, 2016
Messages
529
I would like to see a virtual interface composed of a 1Gb and a 10Gb interface. When the 10Gb interface is up, use it; if it's down, use the 1Gb.
That's built-in. Perhaps the better request would be to make it easier and more intuitive, where a system has multiple NICs, to specify how they should work together.

Maybe an easy way would be like this (see mockup):
  1. Allow the user to create named "interface groups" - pretty much a field to enter a group name, and a description of how the group works (failover/parallel, and an ordered list of priority for the NICs in it). The user can see which NICs are in the group, and can order them to set priority (for failover) or specify they work in parallel.
  2. Within the config for each interface, add a field to specify whether the interface is independent, or part of some NIC group.
  3. Done.
 

Attachments

  • grp.png

Stilez

Guru
Joined
Apr 8, 2016
Messages
529
Different datasets have different snapshot and replication requirements. Some snapshots only exist for a week, others for three months; some are taken every 15 minutes, some once a day. Some snapshots we want replicated immediately; others can wait until after normal business hours.

(This question really goes to one of my strongest FreeNAS concerns... is the FreeNAS future a consumer product or an enterprise product? The pre-11 GUI was clearly enterprise in nature: no frills and lean. It was designed for functionality and allowed those with multiple datasets, volumes, interfaces, etc. to see those things easily and from one screen. The post-11 interface is flashy and consumer-oriented; it will look good on the side of a box and in product reviews, but it doesn't have the information density or clarity necessary for those of us with installations beyond a single volume, single dataset, single interface, single replication task. I fear the utility of FreeNAS at the SMB level is being hurt by implementing the consumer-grade GUI. Yes, I know the core is mostly the same. But management is done through the GUI, and the new GUI is horrible for larger installations.)

Cheers,
Matt
I would urge you to take a good look at the pfSense GUI as a live install/VM (and especially its dashboard, system graph reports, and general UI controls layout). In WebUI terms, it does a lot of what I think *NAS is (or should be) striving for with NewUI, and what the comment above is expressing concern about. The WebUI isn't completely minimalist - like NewUI it uses a modern JS backend that adapts to different screen layouts - but at the same time it's clearly aimed at functionality as well.

The system reporting graphs are a pleasure to manage and very usable (zoom, clear colours, very fast, quickly updated, much more precise data [compare to NewUI's smoothed curves], key min/max/avg stats shown textually below, etc.); even with a lot of data, the dashboard allows selection of what's useful for a specific user, and individual panel customisation within that; there are nicely balanced transitions that are minimal and don't annoy; and the WebUI as a whole supports a simple level of theming. Overall I think they have something of "the right kind of idea" about a WebUI for a product that's used by both enterprise and consumer users, and iX - who are trying to work on this area right now - could benefit by looking at how they have already achieved it and how it works out.

Please take a look and try it out for a few minutes, and see what ideas it gives about how to tread a path nicely and strike a really good balance between consumer/"pretty" and enterprise/"efficient".
 

majerus

Contributor
Joined
Dec 21, 2012
Messages
126
That's built-in. Perhaps the better request would be to make it easier and more intuitive, where a system has multiple NICs, to specify how they should work together.

Maybe an easy way would be like this (see mockup):
  1. Allow the user to create named "interface groups" - pretty much a field to enter a group name, and a description of how the group works (failover/parallel, and an ordered list of priority for the NICs in it). The user can see which NICs are in the group, and can order them to set priority (for failover) or specify they work in parallel.
  2. Within the config for each interface, add a field to specify whether the interface is independent, or part of some NIC group.
  3. Done.


How is this built in? As far as I can tell, everyone said it's not possible due to networking within BSD?
 

Stilez

Guru
Joined
Apr 8, 2016
Messages
529
Easyish method for submitting FreeNAS hardware config and server statistics to a searchable database. Non-identifiable, of course. Voluntary (but possibly the default), of course.
(snip)
Sometimes an idea hits and goes "wow!". This was such an idea. I suspect it might have a very positive impact and ripple out to the wider benefit of iX/*NAS.
 

Stilez

Guru
Joined
Apr 8, 2016
Messages
529
How is this built in? As far as I can tell, everyone said it's not possible due to networking within BSD?
See http://doc.freenas.org/11/network.html - specifically, the subsection on Link Aggregations. This covers four kinds of aggregation, including the two best-known kinds, namely: "I want to use 2 or more connections in parallel to increase bandwidth" and "I want my main connection to fall back to a secondary connection if it fails". The second of those is what you're asking about.

What you're confused by is that assigning multiple IPs to multiple interfaces on the same subnet is not recommended (and blocked by the *NAS UI), which is how you're imagining it being done. But LACP/LAGG (aggregation) doesn't need to do that.

That said, both "ends" of the connection need to "understand" the LACP/LAGG protocol/connection to use it correctly. LAGG/LACP is built into most smart/managed switches (which are quite cheap, especially second-hand) and into most common Intel NICs, and can be set up via Intel's drivers on Windows (Intel PROSet), as well as being built into most common *nix platforms such as FreeBSD, FreeNAS/TrueNAS, and Linux distros. You'll have to look up how, or ask in the forums, but it's there and works on common platforms.
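For anyone curious what this looks like under the hood, a failover lagg on stock FreeBSD is just a few ifconfig lines (interface names and the address below are hypothetical; on FreeNAS the same thing is configured via Network → Link Aggregations rather than by hand):

```shell
# Failover lagg: the first laggport (ix0, 10G) is preferred; traffic moves
# to em0 (1G) only if ix0 loses link. Names and address are illustrative.
ifconfig lagg0 create
ifconfig lagg0 laggproto failover laggport ix0 laggport em0 \
    inet 192.168.1.10 netmask 255.255.255.0
```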
 

majerus

Contributor
Joined
Dec 21, 2012
Messages
126
See http://doc.freenas.org/11/network.html - specifically, the subsection on Link Aggregations. This covers four kinds of aggregation, including the two best-known kinds, namely: "I want to use 2 or more connections in parallel to increase bandwidth" and "I want my main connection to fall back to a secondary connection if it fails". The second of those is what you're asking about.

What's not possible is assigning multiple IPs to multiple interfaces on the same subnet, which is how you're imagining it being done. But LACP/LAGG doesn't need to do that.

The issue is two different speeds in the LAGG.
 

Stilez

Guru
Joined
Apr 8, 2016
Messages
529
FreeNAS sponsoring via crypto mining? Configurable, of course - e.g. if CPU usage < 20%, allow mining up to 70% CPU usage.
That's a clever idea. My NAS is idle at night, so it wouldn't be a huge burden provided I could limit electricity use - I imagine it's CPU-intensive, not HDD-intensive, which would be fine (18 HDDs but just one CPU!)
 

Stilez

Guru
Joined
Apr 8, 2016
Messages
529
Please let me set a timestamp for when the update shall be installed and the reboot made, so this can be scheduled for non-office hours.
Can I request automated minor-update handling ("apply minor updates on detection / X hours after detection")? I think major updates might be best left manual, but minor ones don't often break things and may have important security/bug-fix implications.
 

Stilez

Guru
Joined
Apr 8, 2016
Messages
529
Edit: I'm trying to avoid commenting on the suggestions here, other than to offer currently-existing alternatives to a few of them, as I'm seeing the purpose of this thread as brainstorming. But I do think the firewalling suggestions are pretty far afield.
I don't think that tools to allow the *NAS box to manage its own security and login are off-track. It sounds like there's some conflation between general firewall capabilities (e.g. for other traffic routing through), and basic securing/locking down of the NAS itself (e.g. detecting brute-force attempts). I do agree that more sophisticated defences should be implemented on whatever router/firewall the NAS is behind (it is behind one, somehow, right?), but that's not the same as disallowing anything onboard. For example, imagine a household with a shared LAN (some users live with parents/friends/relatives): one might want to restrict access to a login to specific IPs/hosts/interfaces even on a LAN, or prevent a shared service in a shared house being used as a starting point for mischief. Some degree of access control isn't unreasonable, up to a point below that at which you'd suggest getting a separate router/firewall and sticking the NAS behind it even on a LAN.
 

Stilez

Guru
Joined
Apr 8, 2016
Messages
529
I'd like to be able to NOT expose snapshots via Windows Previous Versions when snapshotting and sharing, but currently, if I snapshot, I am forced to expose this to the end user.
As far as I know (I could be wrong?) you can disable it in two ways: (1) don't enter anything in the "periodic task" field for the share - you can still have the task, but this field is where the share gets its "should I expose Previous Versions?" data from, AFAIK; (2) you can manually override almost any Samba setting in the auxiliary parameters, so if your aux params for that share contain a custom (textual) "vfs objects" line that excludes shadow_copy and shadow_copy2, or contain shadow-copy parameters that prevent exposure, then they will override the built-in default. Perhaps it should be easier, but yes, it's doable.
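As a rough sketch of option (2), the share's auxiliary parameters would carry a "vfs objects" line that simply leaves out the shadow-copy module (the exact default module list varies by FreeNAS version, so check yours before overriding):

```
# Share-level auxiliary parameters (illustrative): redefine the VFS stack
# without shadow_copy2, so Windows "Previous Versions" sees nothing.
vfs objects = zfsacl streams_xattr
```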
 

Stilez

Guru
Joined
Apr 8, 2016
Messages
529
The issue is two different speeds in the LAGG.
That's only a limitation, AFAIK, for LACP configuration (multiple connections for increased throughput), which is reasonable, since frames/packets couldn't arrive in comparable order if the links differed much in speed, and that can cause severe issues in packet buffering. But I couldn't find anything saying it also applies to failover configuration. For example, if your wired 1G LAN fails over to a slow WiFi link, you wouldn't expect it to downrate universally and run at 54 Mbps even on wired when it's not actually using the WiFi link at all. You'd expect 1 Gbps on wired, dropping to 54 Mbps if it fails over to WiFi. (I should test this - I could do with the 1G Intel NIC as a failover for the 10G Chelsios :D)
 

Stilez

Guru
Joined
Apr 8, 2016
Messages
529
Suggestion - revisit how tuning is done, and add a simple performance monitoring daemon that tracks key indicators and spots suboptimal conditions (network, ZFS) so the user can be made aware if the NAS isn't running smoothly or optimally.


My experience is that a fair bit of the current tuning is very poor for some common uses. For example:
  • There is a bunch of tuning that is widely recommended by apparently reputable sources (the FreeBSD wiki, Calomel) if you're using 10G NICs: disable entropy harvesting, assign larger buffers (unless memory is small).
  • There is tuning suggested if you have smaller levels of RAM, but less consideration seems to have been given to 100GB+ levels of RAM, where it's easy to assign larger caches for vdevs, larger network buffers and so on, if there is an issue which would be helped by a few hundred MB or a couple of GB of RAM.
  • If a user knows they'll use dedup, and can mark this in a "likely usage" list as "quite likely", then the NAS should usually try to reserve more than the usual 25% of ARC for metadata.
  • If a user wants to optimise for throughput vs latency, then changing the default buffering on the ZFS and network sides may be valuable.
  • If the user wants to use their L2ARC to the maximum, then a checkbox might be relevant to disable "throttle L2ARC to conserve SSD lifetime" (my 500 MB/s L2ARC SSD gets limited by default to 8 MB/s or something tiny!)
  • and so on.
In terms of basic "make it work reasonably efficiently" tuning, there are two main aspects: (1) hardware, and (2) user knowledge of their likely usage. I envisage a more usage-focused tuning system, where users specify from a list of common uses and load types, describing something of their own usage-style priorities, so that these can be taken into account during any kind of automatic tuning.
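To make a couple of the examples above concrete, some of the knobs involved are plain loader.conf tunables already (the names come from FreeBSD's ZFS sysctls; the values are purely illustrative and defaults vary by release, so treat this as a sketch, not a recommendation):

```
# /boot/loader.conf - illustrative values only
vfs.zfs.arc_meta_limit="34359738368"   # ~32 GiB of ARC reservable for metadata (dedup-heavy use)
vfs.zfs.l2arc_write_max="268435456"    # raise the L2ARC fill throttle from its ~8 MB/s default
```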

I'd especially like to see a middleware daemon that continually monitors key performance indicators from the major subsystems and can give the user direct hints about what it's "seeing": that the network buffers are congesting, disk IO queues are staying at a high level longer than expected, there are excessive networking collisions/errors, the CPU is overloaded, or whatever. So the user has continual access to these metrics, gets direct info when the daemon spots something suggestive of a suboptimal NAS tuning situation, and gets hints about possible fixes.

This shouldn't actually be that hard to do, because there are only a few truly critical subsystems in the core NAS (networking, ZFS, RAM, CPU), and each has well-known metrics accessible to determine how it's holding up and to detect common suboptimal conditions (latency, buffers full, non-coalescing of small HDD writes, grossly unbalanced R/W throughput/latency across vdevs, whatever). So it's more a case of gathering stats on an ongoing basis and checking them every 15-60 seconds for signs of common suboptimalities.
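A minimal sketch of that "check the stats" step, assuming hypothetical metric names and thresholds (the real middleware data sources and sensible limits would need to be chosen per subsystem):

```python
# Hypothetical sketch of the "check the stats for common suboptimalities" step.
# Metric names and thresholds are illustrative, not real FreeNAS middleware values.

THRESHOLDS = {
    "net_drop_pct": 1.0,      # >1% dropped packets hints at buffer congestion
    "disk_queue_depth": 32,   # sustained deep IO queues hint at a disk bottleneck
    "cpu_busy_pct": 95.0,     # near-saturated CPU
}

def check_metrics(sample: dict) -> list:
    """Return a human-readable hint for each metric exceeding its threshold."""
    hints = []
    for name, limit in THRESHOLDS.items():
        value = sample.get(name)
        if value is not None and value > limit:
            hints.append(f"{name}={value} exceeds {limit}: possible tuning issue")
    return hints

# One 15-second sample with a congested network buffer:
sample = {"net_drop_pct": 2.5, "disk_queue_depth": 4, "cpu_busy_pct": 40.0}
for hint in check_metrics(sample):
    print(hint)
```

A real daemon would gather these from sysctl/ZFS kstats on a 15-60 second timer and surface the hints in the UI or via the alert system.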
 

JustinClift

Patron
Joined
Apr 24, 2016
Messages
287
FreeNAS needs a distributed filesystem like GlusterFS or Ceph (too soon for pNFS?): the simple traditional master/backup design has become too old and limited.

GlusterFS might not be the way to go for this. When I was part of the GlusterFS team at Red Hat, we started introducing FreeBSD support (around 2014/2015). However, the main developer on the programming side was headhunted from Red Hat to Minio, so the GlusterFS port to FreeBSD pretty much fizzled out. FreeBSD support in Minio is doing fine, however. ;)

Note: I also left Red Hat midway through 2015. It's possible the FreeBSD support has been picked up again since, but I have no idea personally, as I haven't kept any kind of eye on GlusterFS.
 

JustinClift

Patron
Joined
Apr 24, 2016
Messages
287
Integrate Let's Encrypt as an option in SSL cert generation, to be able to create certificates for services like S3 (Minio).

Thinking about this, it'd probably need to be done using the DNS verification method, so people don't have to make their FreeNAS system(s) accessible to the outside world. Should be doable.

---

Hmmm, on my FreeBSD servers I'm using py-certbot for the certificate generation:

https://github.com/freebsd/freebsd-ports/tree/master/security/py-certbot

Not sure if that (yet) supports the DNS verification method, as I'm using the older/deprecated HTTPS verification approach. If the DNS verification method is supported, this idea probably wouldn't be too hard.
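For what it's worth, certbot's manual plugin can request the DNS-01 challenge; the domain below is a placeholder, and on FreeNAS this would need wiring into the middleware rather than being run by hand:

```shell
# Request a cert using DNS verification: certbot prints a TXT record to
# publish under _acme-challenge.<domain>, then validates it, so the NAS
# itself never needs to be reachable from the internet.
certbot certonly --manual --preferred-challenges dns -d nas.example.com
```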
 

JustinClift

Patron
Joined
Apr 24, 2016
Messages
287
Improve replication transfer speed. When I created a new replication server it took around 4 days for it to finish because it wouldn't transfer faster than ~120Mb/s, on a dual 10Gb/s link.

Just to check... do you remember if you were using the default encryption algorithm (secure, but slow-ish)? If you were, and you're operating on a known-secure network then choosing one of the faster-to-process (but less secure) encryption algorithms can make a big difference.

That aside, it should be feasible to multiplex a replication stream over several parallel ssh streams to a target system. Various programs (e.g. lftp) exist to do this for large file transfers, to work around the ssh CPU bottleneck, so a similar approach could probably be used.
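On the cipher point above, the manual equivalent looks something like this - hostnames and dataset names are placeholders, and a weaker-than-default cipher should only be used on a network you trust:

```shell
# Replication with a faster AEAD cipher to ease the ssh CPU bottleneck.
# aes128-gcm@openssh.com is typically much faster than the default on
# CPUs with AES-NI; host and dataset names are illustrative.
zfs send tank/data@snap | ssh -c aes128-gcm@openssh.com backuphost zfs recv -F backup/data
```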
 

JustinClift

Patron
Joined
Apr 24, 2016
Messages
287
Since it's not properly out in the wild yet, it's a bit early to say if people are going to submit plugins for the new infrastructure.

When it's ready for people to use, it shouldn't be all that difficult to create some reasonably streamlined process for submitting new plugins and/or updates to existing ones. No real need to force git on non-git-using-people. :)
 

JustinClift

Patron
Joined
Apr 24, 2016
Messages
287
Embrace the cloud. One can get a cheap Synology or QNAP box and effortlessly manage it from anywhere, access files from anywhere, and sync folders from anywhere. I know this would incur infrastructure costs for iXsystems, but I would happily pay for this.

Sounds like it'd be a new product line for iX. Done right, it might be pretty lucrative. :)
 

MasterTacoChief

Explorer
Joined
Feb 20, 2017
Messages
67
Just to check... do you remember if you were using the default encryption algorithm (secure, but slow-ish)? If you were, and you're operating on a known-secure network then choosing one of the faster-to-process (but less secure) encryption algorithms can make a big difference.

I had it set to use default encryption for transmission. By the time I remembered that trick it was already 24+ hours into the transfer, so I wasn't going to change it then.
 