128B? Isn't it SHA256 that's used on FreeNAS, so 256B?
But I know that it's a Merkle tree, so there are a lot of blocks that contain only block pointers ((2 * N) - 1 for N blocks of data). What is the size of these blocks? It seems to me that a lot of space is used just for block pointers; am I wrong about the structure?
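For context, ZFS's tree of indirect blocks is not a binary Merkle tree: each indirect block packs many 128-byte block pointers (a 128 KiB indirect block holds 1024 of them), so the metadata overhead shrinks geometrically with each level instead of doubling the block count. A rough sketch of that overhead, assuming 128-byte pointers, 128 KiB indirect blocks, and 128 KiB data blocks (a simplified model, ignoring the few pointers embedded directly in the dnode):

```python
# Rough model of ZFS indirect-block overhead (not a binary Merkle tree).
# Assumptions: 128-byte block pointers and 128 KiB indirect blocks,
# i.e. 1024 pointers per indirect block.

PTR_SIZE = 128                   # bytes per block pointer
IND_BLOCK = 128 * 1024           # indirect block size in bytes
FANOUT = IND_BLOCK // PTR_SIZE   # 1024 pointers per indirect block

def indirect_overhead(data_blocks: int) -> int:
    """Total bytes of indirect blocks needed to point at data_blocks blocks."""
    total = 0
    n = data_blocks
    while n > 1:
        n = -(-n // FANOUT)      # ceiling division: blocks at the next level up
        total += n * IND_BLOCK
    return total

# 1 GiB of data in 128 KiB blocks = 8192 data blocks:
# 8 level-1 indirect blocks + 1 level-2 indirect block = 9 * 128 KiB.
print(indirect_overhead(8192))   # 1179648 bytes, about 0.11% of the data
```

So for full-size blocks the pointer overhead is on the order of a tenth of a percent, not a doubling of the block count.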
*What are the possible sizes of the blocks? Is it 128k, 64k, 32k, ... 4k, or is it more complex?
ZFS allows you to do that, but it would be very space-inefficient. In the same way, ZFS allows 8K blocks on RAIDZ3, where as a result each 8K of data gets 12K of redundancy, even if the vdev is much wider. It is assumed that all configuration decisions are reasonable. :)

"Block sizes can be set to any power of 2 from 512B to 128KB" Are there blocks smaller than 4kB even if ashift=12?
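The 8K-on-RAIDZ3 arithmetic above can be sketched like this, assuming ashift=12 (4 KiB sectors) and a vdev width chosen for illustration (11 disks here is an assumption, not from the thread): every stripe row carries nparity parity sectors no matter how few data sectors it holds, so a 2-sector block still pays for 3 parity sectors.

```python
# Sketch of RAIDZ parity overhead for small blocks, assuming ashift=12
# (4 KiB sectors). Simplified: ignores RAIDZ allocation padding.

import math

def raidz_parity_bytes(block_size: int, ashift: int, nparity: int, width: int) -> int:
    """Parity bytes allocated for one logical block on a RAIDZ vdev.

    width is the total number of disks; each stripe row holds at most
    (width - nparity) data sectors, and every row carries nparity
    parity sectors.
    """
    sector = 1 << ashift
    data_sectors = math.ceil(block_size / sector)
    rows = math.ceil(data_sectors / (width - nparity))
    return rows * nparity * sector

# An 8 KiB block on a hypothetical 11-wide RAIDZ3 with 4 KiB sectors:
# its 2 data sectors fit in one row, which still needs 3 parity sectors.
print(raidz_parity_bytes(8 * 1024, 12, 3, 11))  # 12288 bytes = 12 KiB
```

The 12K of parity for 8K of data (150% overhead) is why the answer calls small blocks on wide RAIDZ3 space-inefficient; with 128K blocks the same vdev amortizes the parity over many more data sectors.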
Each dnode (file or zvol) has its block size stored in its metadata. ZFS uses that shift to convert file offsets into indirect-pointer offsets. That value cannot be changed once the dnode already has some data/pointers. So if the block size (recordsize) is changed for an existing dataset, the new value will be used only for new files, while existing ones remain as-is. For zvols the block size simply cannot be changed after the first write.

"Each file has its own block size value copied from the dataset value" Dataset value? Can you expand on that?
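The inheritance rule described above can be shown with a toy model (the class and method names here are illustrative only, not ZFS internals): the dataset carries a recordsize property, each new file copies it at creation time, and the copy freezes once the file has data.

```python
# Toy model (illustrative names, not ZFS internals) of how a file's block
# size is copied from the dataset's recordsize at creation and frozen
# once the file has data.

class Dataset:
    def __init__(self, recordsize: int = 128 * 1024):
        self.recordsize = recordsize  # inherited by newly created files

class File:
    def __init__(self, dataset: Dataset):
        # Block size is copied from the dataset at creation time.
        self.block_size = dataset.recordsize
        self.has_data = False

    def write(self, nbytes: int) -> None:
        self.has_data = True

    def set_block_size(self, size: int) -> None:
        if self.has_data:
            raise ValueError("block size is frozen once the file has data")
        self.block_size = size

ds = Dataset()                # recordsize = 128K
f1 = File(ds)                 # f1 copies 128K
ds.recordsize = 16 * 1024     # admin changes the dataset's recordsize
f2 = File(ds)                 # f2 copies 16K; f1 keeps its 128K
print(f1.block_size, f2.block_size)  # 131072 16384
```

That is what "copied from the dataset value" means: changing the dataset property only affects files created afterwards.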
Originally it was reserved, but some time ago a new pool feature called embedded_data was added. If this feature is enabled and the data can be compressed down to 112 bytes, it is stored directly inside the pointer. See the zpool-features man page.

"if after compression data can fit into the pointer space." Isn't the pointer space reserved for the block pointers? And even if the data can fit there, it's still a block, so the data space of the block stays empty, no?
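The decision described above can be sketched as follows, using the 112-byte limit from the answer; the use of zlib here is only a stand-in for whichever compressor the dataset uses, and when the payload embeds, no separate data block is allocated at all (so there is no empty block left behind):

```python
# Sketch of the embedded_data decision: if the compressed payload fits in
# the ~112 bytes of payload space inside a 128-byte block pointer, the
# data lives inside the pointer and no data block is allocated.
# zlib stands in for the dataset's actual compressor.

import zlib

EMBEDDED_MAX = 112  # payload bytes available inside the pointer

def store_block(data: bytes):
    compressed = zlib.compress(data)
    if len(compressed) <= EMBEDDED_MAX:
        return ("embedded", compressed)   # stored inside the pointer itself
    return ("external", compressed)       # normal pointer to a data block

kind, _ = store_block(b"\x00" * 4096)     # a 4K run of zeros compresses tiny
print(kind)                               # embedded
```

So the follow-up concern does not arise: an embedded block consumes only its pointer slot, which would have been written anyway, and the on-disk data space it would otherwise occupy is simply never allocated.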