Is there a glossary for all ZFS terms built into TrueNAS?

vitaprimo

Dabbler
Joined
Jun 28, 2018
Messages
27
[I'm sorry if this is too long, I wanted to be thorough but it sort of derailed]

I'm experimenting with record size for my little VM datastore server, but my first attempt didn't go so well, from the looks of it.

Performance dropped. Well, technically I'm just moving the VMs into place from another storage server, so I have no real basis for comparison except about a month back, when I was using FreeNAS (v11, not 12; same hardware, same VMs) for the same purpose but over iSCSI, and it was ridiculously fast, and that was without a metadata VDEV, unlike now. The only problem was that VMDKs were being corrupted; I learned this is a thing with VMDK virtual disks over iSCSI, so I had to switch to NFS.

I'm trying to find the right record size, but so far it's not that easy. The command I use to estimate a record size now yields a buttload of stuff since special_small_blocks was set:

Code:
vmbp:~ v$ ssh zx1
Last login: Thu Nov  5 01:07:51 2020
FreeBSD 12.2-PRERELEASE 4912790fb32(HEAD) TRUENAS

    TrueNAS (c) 2009-2020, iXsystems, Inc.
    All rights reserved.
    TrueNAS code is released under the modified BSD license with some
    files copyrighted by (c) iXsystems, Inc.

    For more information, documentation, help or support, go here:
    http://truenas.com
Welcome to TrueNAS

Warning: settings changed through the CLI are not written to
the configuration database and will be reset on reboot.

root@zx1[~]# zdb -LbbbA -U /data/zfs/zpool.cache z 

Traversing all blocks ...

81.3G completed (28249MB/s) estimated time remaining: 0hr 00min 00sec       
    bp count:               4290680
    ganged count:                 0
    bp logical:        202460729344      avg:  47186
    bp physical:        94470026752      avg:  22017     compression:   2.14
    bp allocated:       94835040256      avg:  22102     compression:   2.13
    bp deduped:                   0    ref>1:      0   deduplication:   1.00
    Normal class:       79443218432     used:  2.66%
    Special class       15074242560     used:  3.16%

    additional, non-pointer bps of type 0:      14577
     number of (compressed) bytes:  number of bps
             17:      3 *
             18:      6 *
             19:      7 *
             20:      8 *
             21:      0
             22:      3 *
             23:      1 *
             24:      2 *
             25:      9 *
             26:      3 *
             27:      1 *
             28:     32 *
             29:     36 *
             30:      0
             31:      1 *
             32:    106 *
             33:     39 *
             34:      0
             35:      0
             36:      0
             37:      4 *
             38:      2 *
             39:      2 *
             40:      2 *
             41:      8 *
             42:    133 *
             43:      0
             44:      0
             45:      1 *
             46:      1 *
             47:      1 *
             48:      4 *
             49:      2 *
             50:      1 *
             51:      2 *
             52:     10 *
             53:     15 *
             54:     13 *
             55:     11 *
             56:     23 *
             57:     15 *
             58:     13 *
             59:     11 *
             60:     25 *
             61:     17 *
             62:      8 *
             63:      7 *
             64:      4 *
             65:     16 *
             66:     13 *
             67:     17 *
             68:    765 *****
             69:     16 *
             70:     56 *
             71:    168 **
             72:     15 *
             73:     30 *
             74:     53 *
             75:    310 **
             76:      4 *
             77:     17 *
             78:      7 *
             79:   6440 ****************************************
             80:    127 *
             81:    611 ****
             82:    565 ****
             83:    101 *
             84:     69 *
             85:     73 *
             86:    522 ****
             87:    134 *
             88:     90 *
             89:     97 *
             90:    484 ****
             91:    273 **
             92:    197 **
             93:    111 *
             94:    149 *
             95:    726 *****
             96:     74 *
             97:    170 **
             98:    316 **
             99:     99 *
            100:    173 **
            101:     66 *
            102:     62 *
            103:     57 *
            104:    100 *
            105:     38 *
            106:     58 *
            107:     44 *
            108:     44 *
            109:     31 *
            110:     46 *
            111:    100 *
            112:    251 **
    Dittoed blocks on same vdev: 25559

Blocks    LSIZE    PSIZE    ASIZE      avg     comp    %Total    Type
     -        -        -        -        -        -         -    unallocated
     2      32K       8K      24K      12K     4.00      0.00    object directory
     5    2.50K    2.50K      60K      12K     1.00      0.00    object array
     1      16K       4K      12K      12K     4.00      0.00    packed nvlist
     -        -        -        -        -        -         -    packed nvlist size
     -        -        -        -        -        -         -    bpobj
     -        -        -        -        -        -         -    bpobj header
     -        -        -        -        -        -         -    SPA space map header
   283    4.42M    1.11M    3.32M      12K     4.00      0.00        L1 SPA space map
 1.02K     130M    56.3M     169M     166K     2.32      0.19        L0 SPA space map
 1.30K     135M    57.4M     172M     133K     2.35      0.19    SPA space map
     6      96K      96K      96K      16K     1.00      0.00    ZIL intent log
    15    1.88M      60K     120K       8K    32.00      0.00        L5 DMU dnode
    15    1.88M      60K     120K       8K    32.00      0.00        L4 DMU dnode
    15    1.88M      60K     120K       8K    32.00      0.00        L3 DMU dnode
    16       2M      64K     132K    8.25K    32.00      0.00        L2 DMU dnode
    28    3.50M     200K     452K    16.1K    17.92      0.00        L1 DMU dnode
 1.11K    17.8M    4.44M    9.77M    8.80K     4.00      0.01        L0 DMU dnode
 1.20K    28.9M    4.88M    10.7M    8.93K     5.93      0.01    DMU dnode
    16      64K      64K     132K    8.25K     1.00      0.00    DMU objset
     -        -        -        -        -        -         -    DSL directory
    18    9.50K    1.50K      24K    1.33K     6.33      0.00    DSL directory child map
     -        -        -        -        -        -         -    DSL dataset snap map
    33     482K     120K     360K    10.9K     4.01      0.00    DSL props
     -        -        -        -        -        -         -    DSL dataset
     -        -        -        -        -        -         -    ZFS znode
     -        -        -        -        -        -         -    ZFS V0 ACL
    18     576K      72K     144K       8K     8.00      0.00        L3 ZFS plain file
   167    5.22M    1.42M    2.84M    17.4K     3.68      0.00        L2 ZFS plain file
 21.9K     700M     218M     436M    19.9K     3.21      0.48        L1 ZFS plain file
 4.07M     188G    87.7G    87.7G    21.6K     2.14     99.31        L0 ZFS plain file
 4.09M     188G    87.9G    88.1G    21.6K     2.14     99.79    ZFS plain file
    15     480K      60K     120K       8K     8.00      0.00        L1 ZFS directory
   908    1.72M     652K    2.36M    2.66K     2.70      0.00        L0 ZFS directory
   923    2.19M     712K    2.48M    2.75K     3.15      0.00    ZFS directory
    15      15K      15K     120K       8K     1.00      0.00    ZFS master node
     -        -        -        -        -        -         -    ZFS delete queue
     -        -        -        -        -        -         -    zvol object
     -        -        -        -        -        -         -    zvol prop
     -        -        -        -        -        -         -    other uint8[]
     -        -        -        -        -        -         -    other uint64[]
     -        -        -        -        -        -         -    other ZAP
     -        -        -        -        -        -         -    persistent error log
     1     128K       8K      24K      24K    16.00      0.00    SPA history
     -        -        -        -        -        -         -    SPA history offsets
     -        -        -        -        -        -         -    Pool properties
     -        -        -        -        -        -         -    DSL permissions
     -        -        -        -        -        -         -    ZFS ACL
     -        -        -        -        -        -         -    ZFS SYSACL
     -        -        -        -        -        -         -    FUID table
     -        -        -        -        -        -         -    FUID table size
     1       1K       1K      12K      12K     1.00      0.00    DSL dataset next clones
     -        -        -        -        -        -         -    scan work queue
    45      25K    4.50K      32K      728     5.56      0.00    ZFS user/group/project used
     -        -        -        -        -        -         -    ZFS user/group/project quota
     -        -        -        -        -        -         -    snapshot refcount tags
     -        -        -        -        -        -         -    DDT ZAP algorithm
     -        -        -        -        -        -         -    DDT statistics
     -        -        -        -        -        -         -    System attributes
     -        -        -        -        -        -         -    SA master node
    15    22.5K    22.5K     120K       8K     1.00      0.00    SA attr registration
    30     480K     120K     240K       8K     4.00      0.00    SA attr layouts
     -        -        -        -        -        -         -    scan translations
     -        -        -        -        -        -         -    deduplicated block
     -        -        -        -        -        -         -    DSL deadlist map
     -        -        -        -        -        -         -    DSL deadlist map hdr
     1       1K       1K      12K      12K     1.00      0.00    DSL dir clones
     -        -        -        -        -        -         -    bpobj subobj
     -        -        -        -        -        -         -    deferred free
     -        -        -        -        -        -         -    dedup ditto
    31     354K      37K     132K    4.26K     9.55      0.00    other
    15    1.88M      60K     120K       8K    32.00      0.00        L5 Total
    15    1.88M      60K     120K       8K    32.00      0.00        L4 Total
    33    2.44M     132K     264K       8K    18.91      0.00        L3 Total
   183    7.22M    1.48M    2.96M    16.6K     4.88      0.00        L2 Total
 22.2K     709M     219M     440M    19.8K     3.23      0.49        L1 Total
 4.07M     188G    87.8G    87.9G    21.6K     2.14     99.51        L0 Total
 4.09M     189G    88.0G    88.3G    21.6K     2.14    100.00    Total

Block Size Histogram

  block   psize                lsize                asize
   size   Count   Size   Cum.  Count   Size   Cum.  Count   Size   Cum.
    512:    755   378K   378K    755   378K   378K      0      0      0
     1K:    620   724K  1.08M    620   724K  1.08M      0      0      0
     2K:    684  1.87M  2.95M    684  1.87M  2.95M      0      0      0
     4K:   599K  2.34G  2.34G    398  2.00M  4.94M   593K  2.32G  2.32G
     8K:  1.39M  13.8G  16.1G    660  6.39M  11.3M  1.39M  13.6G  16.0G
    16K:  1.34M  25.4G  41.5G  1.33M  21.3G  21.3G  1.36M  25.8G  41.7G
    32K:   476K  17.4G  59.0G  1.92M  61.3G  82.7G   477K  17.5G  59.2G
    64K:   192K  15.8G  74.7G    177  15.5M  82.7G   192K  15.7G  74.9G
   128K:   106K  13.3G  88.0G   845K   106G   188G   106K  13.3G  88.2G
   256K:      0      0  88.0G      0      0   188G    345  98.0M  88.3G
   512K:      0      0  88.0G      0      0   188G      0      0  88.3G
     1M:      0      0  88.0G      0      0   188G      0      0  88.3G
     2M:      0      0  88.0G      0      0   188G      0      0  88.3G
     4M:      0      0  88.0G      0      0   188G      0      0  88.3G
     8M:      0      0  88.0G      0      0   188G      0      0  88.3G
    16M:      0      0  88.0G      0      0   188G      0      0  88.3G


That seems like it would be super helpful if I knew what these things mean. It took me forever just to find LSIZE, PSIZE, and ASIZE (which I've already forgotten), but I do remember that the first results I got were for Catwoman clothing sizes--not costumes, clothing. You cannot forget that…as if Catwomen roamed the streets and these kittens needed their fashion fix. Purr against or scratch a mannequin, IDK.

Anyway, my attention went immediately to the "compression bps":
79: 6440 ****************************************

I had not finished pretending I knew what it meant when below it was:
Dittoed blocks on same vdev: 25559
I know ditto means duplicate. Not sure how relevant or useful that is, because I'm not using deduplication--in my case, of course; I'm sure it's useful for others.

Going all the way up there's:
Special class 15074242560 used: 3.16%
The metadata VDEV is already filling up, but the performance isn't there. Maybe I just need to wait for the big files to finish being rewritten; I read somewhere that because it's adjusting read and write sizes, it's inherently bad at both. Then there was this thing about "read amplification" (or similar) caused by the wasted space in a block, times the pieces to be read, times your daily horoscope--an insane amount of empty data it needed to read, skyrocketing latency.
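If I understood the idea at all, the arithmetic would be something like this (made-up numbers, just to illustrate what I think it was describing):

Code:
# Made-up numbers: a 4K guest I/O landing inside a 128K record
# means ZFS touches the whole record, not just the 4K it wanted.
record=131072   # 128K recordsize
io=4096         # 4K read/write from the VM
echo "$(( record / io ))x amplification"    # prints: 32x amplification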
Back down, another Catwoman table starts, one I had not seen before and packed with information, like all the Ls--if this were networking those would be layers, but it's not, so they must be levels--then boobjobs and DSLs; I know about DSL, but it's definitely not this one. No idea what any of those mean. Then lastly the first Catwoman table reappears, AKA the "Block Size Histogram", the one I used to select the 16K record size (despite 32K being recommended).

These are the resources I've found so far:
- The FreeBSD Handbook / The Z File System / 19.8. ZFS Features and Terminology​
- FreeNAS® 11.3-U5 User Guide / 27. ZFS Primer - 27.1 ZFS Feature Flags (several helpful links): here I found that boobjobs is just a product of my dyslexia, what this actually are relates to snapshots, so for the time being it doesn't affect me. "bpobj" is the word (?). I had to read it carefully. :)​
I also found several other pages, but they basically repeat what's in the FreeNAS documentation, only less targeted, so a little less helpful too.
Is this info by chance included in something like manpages? Or where can I find what these things mean? (Please don't say "RFC#####"… )

I got ZFS for macOS to try things without messing up the array, and while reading one of the manpages on both macOS and TrueNAS I realized the versions are vastly different. If I can find this in manpages, could you please tell me where/how?

Thanks!
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Ah, we meet again.

Performance dropped. Well, technically I'm just moving the VMs into place from another storage server, so I have no real basis for comparison except about a month back, when I was using FreeNAS (v11, not 12; same hardware, same VMs) for the same purpose but over iSCSI, and it was ridiculously fast, and that was without a metadata VDEV, unlike now. The only problem was that VMDKs were being corrupted; I learned this is a thing with VMDK virtual disks over iSCSI, so I had to switch to NFS.

We've got a couple things to note here, so please help me to confirm these statements:

1. You switched from old hardware on iSCSI to new hardware on NFS.

iSCSI does not use synchronous (safe) writes by default. This will result in blazing fast writes, but potential for data loss in case of power loss or other unexpected system shutdowns. NFS, on the other hand, does use synchronous writes when serving vSphere/ESXi hosts. I suspect that is correlated with ...

2. You were having problems with VMDK corruption on iSCSI.

This is not expected behaviour; I suspect your corruption issues are related to the default sync=standard that is used on iSCSI. You can use sync=always and be just as safe, but ...
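Quick aside before point 3: if you want to check or change this behaviour yourself, something along these lines will do it (the dataset name is just a placeholder for whatever backs your share):

Code:
# Check the current sync behaviour on the dataset/zvol backing the share
zfs get sync tank/vmstore        # "tank/vmstore" is an example name

# Force synchronous semantics even for iSCSI initiators
zfs set sync=always tank/vmstore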

3. Metadata hardware matters, but your lack of SLOG is likely hurting you more.

Before responding that a sync write is complete, ZFS needs to place it on stable storage - either the in-pool ZIL (ZFS Intent Log) or ZIL on a Separate Log Device (SLOG) - this device is typically a smaller, high-speed, low-latency device with high write endurance. If you don't have an SLOG, your sync writes are punishingly slow as the writes happen to your regular pool device. Metadata vdevs won't help with this, but once an SLOG is added, they definitely help versus not having one. If I only have two SSDs though, and my data is valuable enough to sync-write, I'm assigning both to SLOG.
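For reference, adding a mirrored SLOG to an existing pool is a one-liner; the pool and device names below are only examples:

Code:
# Add a mirrored log vdev (SLOG) to an existing pool named "tank"
zpool add tank log mirror nvd0 nvd1

# Confirm where the log vdev ended up
zpool status tank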

Next up, let's talk recordsize. recordsize (or in the case of zvols, volblocksize) is a maximum value - smaller blocks can be written and read (down to the size specified by 2 to the power of your ashift value, typically 12, resulting in 4KB) but larger ones will be split into multiple records. Smaller records can be faster for smaller operations, but they don't compress as well since compression occurs on a per-record basis. Pick your poison. Like I said, I've found 32K to be a good balance point.
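If you want to experiment, recordsize can be changed on a live dataset (it only applies to newly written blocks), while volblocksize has to be chosen when the zvol is created; the names below are placeholders:

Code:
# Dataset: recordsize can be changed at any time, but only new writes use it
zfs set recordsize=32K tank/vmstore
zfs get recordsize,compression tank/vmstore

# zvol: volblocksize is fixed at creation time
zfs create -V 100G -o volblocksize=32K tank/vm-disk1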

So with all this being said - please post the hardware specifications of your old and new system, including CPU, motherboard, RAM, disk controller(s), disk types/models, and pool layout. More information is good.
 

vitaprimo

Dabbler
Joined
Jun 28, 2018
Messages
27
There's not a lot out there written on this feature yet, but some information is here:

Thanks, I appreciate it.
Sidenote: I can't believe I forgot to search for OpenZFS when it has been staring me in the face this whole time.

It sort of sorted itself out; this appears to affect the 16K dataset only. It's painfully slow, but it's getting there (almost empty, so I can remove it). I did find this amazing page, Activity of the ZFS ARC (dtrace.org), that goes into painful detail about what you guys already know. Even though I fixed my issues and could happily go on my way, I'm still charging an iPad to read it away from a computer, because it's super interesting and I've only skimmed a little past the halfway point. It obviously lacks the more recent stuff from ZFS on Linux (it's 8 years old), but it looks like it still holds up.

Meow.

Ah, we meet again.
Yep, I'm sorry, it seems I can't stay out of trouble for more than three days in a row; the pfSense guys can vouch for that. Just last week I tried overlaying IPv4 on top of IPv6 because it was one of those days that end in *day. It rhymes with OCPD & ADHD.

We've got a couple things to note here, so please help me to confirm these statements:
1. You switched from old hardware on iSCSI to new hardware on NFS.

iSCSI does not use synchronous (safe) writes by default. This will result in blazing fast writes, but potential for data loss in case of power loss or other unexpected system shutdowns. NFS, on the other hand, does use synchronous writes when serving vSphere/ESXi hosts. I suspect that is correlated with ...
Not exactly, but sort of exactly, as in it was exactly the same hardware, only the OS changed, like 3 times. In the meantime the VMs were moved to a bigger storage appliance.

2. You were having problems with VMDK corruption on iSCSI.
This is not expected behaviour; I suspect your corruption issues are related to the default sync=standard that is used on iSCSI. You can use sync=always and be just as safe, but ...
YES, I was having corruption issues on iSCSI, and it is not expected behavior--yet it's exactly what you described… are you a fortune teller? Do you have a crystal ball and levitate on your own personal cloud? Do you own any pointy hats? It hadn't happened before. I didn't mean to imply this is at all caused by ZFS, TrueNAS or FreeNAS. I knew about (a)synchronous writes but only thought of them as a performance issue on NFS; I never thought of any of it in relation to iSCSI, and that might be because I've only dealt with iSCSI--as I'm typing this it makes more sense--from just a handful of vendors, mostly Synology.

Now it seems odd that VMDK corruption on iSCSI shares happened no more than 10-12 times in many years, including on FreeNAS, but never on a Synology-backed iSCSI datastore--and Synology tends to abstract a lot of things to make it easier for the general user, regardless of the consequences, which most users never end up facing. TrueNAS has a more, um, "raw" approach to things, so you get intoxicated with all that power and don't want to give it back once you've tasted it. To put it some way.

However, when two out of three domain controllers refused to start, I knew it was time to flush my stash. The third was probably only online because it's a bare-metal DC.

3. Metadata hardware matters, but your lack of SLOG is likely hurting you more.

Before responding that a sync write is complete, ZFS needs to place it on stable storage - either the in-pool ZIL (ZFS Intent Log) or ZIL on a Separate Log Device (SLOG) - this device is typically a smaller, high-speed, low-latency device with high write endurance. If you don't have an SLOG, your sync writes are punishingly slow as the writes happen to your regular pool device. Metadata vdevs won't help with this, but once an SLOG is added, they definitely help versus not having one. If I only have two SSDs though, and my data is valuable enough to sync-write, I'm assigning both to SLOG.
Yeah, I considered this, but I kept reading that there's only so much capacity a SLOG can actually use, and on top of that the flash drives are about half (a little more, I believe) of the spinning capacity, which isn't expected to grow; if a VM becomes too big it's either wiped or scavenged for data, which is sent to main storage, since it has no place there anymore. I've been trying to downsize to consume less energy; I even walk places now…if it's within 500-600m (about ½ mi), not that we can go many places these days. Again, you can ask the pfSense guys.
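For anyone else following along, the rule of thumb I kept running into is that a SLOG only needs to hold a few seconds' worth of incoming sync writes (roughly a couple of transaction groups), so most of a big SSD would sit unused; roughly, with made-up numbers:

Code:
# Rough SLOG sizing sketch (made-up numbers, not measured on my pool)
ingest_mb_s=1250    # ~10Gb/s of sync writes expressed in MB/s
seconds=10          # ~2 txg intervals at the 5 s default
echo "$(( ingest_mb_s * seconds )) MB of SLOG is plenty"    # prints: 12500 MB of SLOG is plenty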
The last reason I skipped it in favour of the metadata VDEV is memory: 8GB is the minimum recommended, then a gig per terabyte, or something like that, right? It's also not very clear, but it seems like the recommendations assume the user will make some light use of the hypervisor or jails/plugins. I don't plan on ever storing more than 2TB of VM data, there's currently 16GB of RAM in the server, and it's only a storage server, not the Swiss Army knife it can be. It's only for VM binaries and such, not what they process; that goes onto other servers. It still has room to double the memory, and the processor is an always-idle generic i5. I don't think the little server is constrained at all. If I needed to step it up, I could always pull out one of the decommissioned servers; the smallest is a vacuum-cleaner-noisy 1U dual-Xeon Cisco server, but if it came to that it would be best to just move back to vSAN and burn trees at night instead of using street lighting.

Next up, let's talk recordsize. recordsize (or in the case of zvols, volblocksize) is a maximum value - smaller blocks can be written and read (down to the size specified by 2 to the power of your ashift value, typically 12, resulting in 4KB) but larger ones will be split into multiple records. Smaller records can be faster for smaller operations, but they don't compress as well since compression occurs on a per-record basis. Pick your poison. Like I said, I've found 32K to be a good balance point.

So with all this being said - please post the hardware specifications of your old and new system, including CPU, motherboard, RAM, disk controller(s), disk types/models, and pool layout. More information is good.
I have read so much these days that I don't remember exactly where, but it was a serious place, probably official documentation, that said ZFS automatically adjusts the record size, which I now realize is exactly what you have been telling me in different words. I was eating when that sunk in, for some reason. It goes on to say that for long spans of the same kind of data, matching the size improves performance--I'm paraphrasing. But that's hard, and going too low can cripple performance; going too low on special_small_blocks, on the other hand, is safer from what I've learned these days, so I can always grow it, and because that's an entirely different device, many times faster than the spinners, with my skill level I'd probably never notice if something were wrong.

I moved things back, and instead of setting a 16K or 32K record size I set the special class cutoff for that dataset to 8K, which was now topping the little Catwoman chart for the most instances, though not for the most data. 4K was pretty high too. I'll just let ZFS do its thing.
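In case it helps anyone, this is roughly what that looks like from the shell; the dataset name here just stands in for mine:

Code:
# Send blocks of 8K and smaller to the special (metadata) vdev for this dataset
zfs set special_small_blocks=8K z/vmds      # "z/vmds" is a placeholder dataset name
zfs get special_small_blocks,recordsize z/vmds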

I avoid getting specific about hardware because I virtualize heavily; right now the special class

–––––– I started this yesterday then got distracted by something shiny (ADHD) and didn't remember ––––––
until just now; honestly, I still didn't remember, but the browser tab reappeared,

…but it might have worked out for the better, because I've been testing and performance isn't that great. It's sort of bad, actually. Just now I shut down everything except the critical VMs for the network, which are in their own place anyway, so no other reads/writes interrupt the evacuation I'm doing, one VM at a time. I'll wipe everything, try one huge SLOG and see what happens; I have a feeling it's not going to improve much, as read speeds are also rather low. Maybe I'll swap in the flash drives from a Synology unit's cache pair; they have much more reasonable sizes, like 128GB.

If that's still bad I'll have to switch loads with one of the Synology units (the appliances can handle the workload fine but they can only work with one VLAN per physical interface) or return to vSAN. There's a silver lining though: vSAN now supports hosting NFS shares so it's the ultimate NAS appliance. That's kind of exciting on its own.

Thankfully I have options and I'm learning new things, so even for the headache I'm grateful. It'll serve some purpose someday. :)
 

vitaprimo

Dabbler
Joined
Jun 28, 2018
Messages
27
I just had to come back to report: it worked! Switching the flash drives worked!

The smaller drives went to TrueNAS, though not without first almost going insane trying to set up networking in TrueNAS SCALE; I figured if I was going to screw things up, I'd better make it count. When I gave up, I reinstalled TrueNAS Core, backed up the last 2GB of container data I'd forgotten, and started emptying the Synology cache. It took a while. I swapped the disks, rebuilt on DiskStation, and considered degrading the ZFS pool to move the data, but then I remembered VDEVs cannot be mirrored later on like on any other system (WHY!?), so I just destroyed it.

I recreated it, rsynced the container data back to it, and noticed this:
[Attachment: Screen Shot 2020-11-07 at 8.20.07 PM.png]
I wasn't particularly impressed until I looked closer: that's bytes, not bits. Over 400MB/s transferring very random files, about standard SATA-based flash drive speed. It made me sit back up straight again.

When I started moving that annoying VM that has made me waste so much time, I got distracted and forgot about it for an hour or two, and when I got back it was already done. This VM took a whole day on previous attempts. I got distracted again, and just now I'm moving the rest; it's peaking well over a gigabit per second. I'm unsure why, since I'd expect the bottleneck to be the 400-ish MB/s of the mirrored SLOG VDEV, but I ain't questioning it; I'm just happy to see those numbers without anything stalling anywhere.

Strangely, TrueNAS's networking is barely breaking idle:

[Attachment: Screen Shot 2020-11-08 at 4.28.02 AM.png]


But on ESXi, vCenter and, I assume a few minutes later, vRealize Operations Manager and SNMP, the data is/will be there. On the network switches it updates the fastest and can be seen in near real time:
[Attachment: Screen_Shot_2020-11-08_at_8_14_17_AM-3.png]
It's very impressive for an old, cheap system with no investment. I can't imagine the performance I'd get on one of the bigger, proper servers stored away, with a little money put into it. For my needs it's overkill, but I'll keep an eye out in case iXsystems starts selling systems a little bigger than the home line but smaller than enterprise, with SCALE instead of the FreeBSD-based version. VDEV removability is a must too. ZFS, as impressive as it is, is too daunting to go mainstream, and other filesystems with similar features are catching up--namely Btrfs, but the bigger one being the Apple-controlled APFS, and that's not good for anybody. Even XFS announced COW support a while ago. I still haven't seen it (and it definitely isn't a volume manager), but it's coming.

Thanks again to both of you for taking the time for my nonsense; I'm indebted to you. I'm up for a little S&M and/or PnP if that's your thing. Just kidding. Thanks, really. :)
 


vitaprimo

Dabbler
Joined
Jun 28, 2018
Messages
27
Almost forgot: the Synology unit that's temporarily holding the VM data has two disks in the middle of their scheduled Extended SMART Test (automation makes it really cold, BTW; windy cold, which is not fun). This means performance isn't that great right now (it's set to prioritize the testing/repairing and all that), so in everyday use TrueNAS's performance could be even better with a faster partner/hypervisor online! :D
 