iSCSI Windows Server - I/O Errors on NTFS Volume

Status: Not open for further replies.

Antix (Cadet, joined Jan 9, 2018)
Hi,

I'm running a fairly large FreeNAS installation and exporting 3 iSCSI LUNs to a Windows Server 2016 host. (One LUN is exported to a Linux server via iSCSI; I don't believe that one is having issues.)
I am using MPIO on all LUNs, connected via 2x 10GbE.

LUNs are sized as follows:
a) 45TB
b) 2TB
c) 85TB

Recently, I've started running into issues where Windows will not let me browse certain folders. The exact error message is:
Code:
X:\ is not accessible.

The request could not be performed because of an I/O device error.



This problem comes and goes; if I revisit the folder at a later time, it may work. For example, as I type this everything is working fine: all LUNs and all folders are accessible with no issue. A few hours ago, however, without me touching anything, Windows was throwing the I/O errors.
If I offline the disk in Disk Management and bring it online again, things work fine for a while.
It's important to note that this system has worked flawlessly for over a year at the very least; it's almost as if the issues started overnight.

Additionally, I occasionally see the following errors in the Windows event log:
Code:
The system failed to flush data to the transaction log. Corruption may occur in VolumeId: X:, DeviceName: \Device\HarddiskVolumeX.
(The I/O device reported an I/O error.)


Code:
Disk X has crossed a capacity utilization threshold and used Y bytes. When the threshold was crossed, the pool had Z bytes of remaining capacity.



A ZFS scrub comes up clean, and a Windows chkdsk /R also completes with no errors.

This FreeNAS system was previously running 9.10; I recently upgraded it to 11.1, but the issue persists.
It's running on Supermicro hardware with the following:
Code:
2x Xeon E5-2620 v3
128GB RAM
Intel 10GbE SFP+ NICs
Raw space: 282TB Used, 53TB Available (84%)
11x RAIDZ2 vdevs containing 6 drives in each (HGST HUS726060AL4210 A7J0 - 7200 SAS)


I'm hoping someone may be able to point me in the right direction.
I have a feeling this is less FreeNAS-related and more a case of hitting Windows NTFS limits (although I did check, and I'm below the maximums). I know that storing the data in native ZFS datasets and using FreeNAS to expose shares would probably work better in my case, but the trade-off would be losing the 2x 10GbE MPIO (last I saw, SMB3 multi-channel was still experimental).
I'm also aware of the performance degradation above 80% pool utilization, and the recommendation to stay below 50% utilization when using iSCSI LUNs, but my understanding was that those only affect performance and shouldn't cause errors like the ones I'm seeing?
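Those utilization guidelines can be put into numbers. A quick back-of-envelope sketch (the 336T/283T figures come from the `zpool list` output in this thread; the 50% target is the commonly cited guideline for iSCSI zvols, not an exact limit):

```shell
# Rough capacity math. Inputs are pool size and allocated space in TiB.
# cap_pct:  current utilization as a whole percent.
# to_free:  TiB that would have to be deleted to reach a target CAP percent.
cap_pct() { awk -v s="$1" -v a="$2" 'BEGIN { printf "%d", (a / s) * 100 }'; }
to_free() { awk -v s="$1" -v a="$2" -v t="$3" 'BEGIN { printf "%.0f", a - s * t / 100 }'; }

cap_pct 336 283        # current utilization: 84
to_free 336 283 50     # TiB to free to hit the 50% iSCSI target: 115
```

So even reaching the relaxed 80% target means freeing roughly 14 TiB, and the 50% block-storage target would require freeing about 115 TiB.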
 

bigphil (Patron, joined Jan 30, 2014)
Where to start... it could be any number of things. Start simple: do you have scheduled SMART tests on the FreeNAS box? If so, do all drives report healthy? It sounds like your pool may be getting pretty full, so fragmentation issues may occur, which could manifest as something like this. Are there any other errors on the Windows box that might point to iSCSI timeout issues? It would also be useful for us to see the output (pasted in code blocks) of the following commands from FreeNAS:
zpool status
zpool list
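For the capacity question specifically, the CAP column of `zpool list` is the thing to watch. A small awk sketch of the check (the pool name and figures in the inlined sample are placeholders mirroring this thread; in practice you would pipe `zpool list` straight in):

```shell
# Flag any pool whose CAP column exceeds 80% in `zpool list` output.
# CAP is the 7th column: NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
check_cap() {
    awk 'NR > 1 { cap = $7; sub(/%/, "", cap);
                  if (cap + 0 > 80) print $1 " is at " cap "% - consider freeing space" }'
}

check_cap <<'EOF'
NAME           SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank           336T   283T  53.3T         -    31%    84%  1.00x  ONLINE  /mnt
freenas-boot   222G  2.84G   219G         -      -     1%  1.00x  ONLINE  -
EOF
# prints: tank is at 84% - consider freeing space
```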
 

Nick2253 (Wizard, joined Apr 21, 2014)
I'm also aware of the performance degradation above 80% pool utilization, and the recommendation to stay below 50% utilization when using iSCSI LUNs, but my understanding was that those only affect performance and shouldn't cause errors like the ones I'm seeing?
These problems go hand-in-hand. With block-level storage, the OS is very unforgiving of any latency or delay in response. If ZFS is badly bogged down and taking "forever" to complete reads or writes, Windows will decide the device has dropped and is no longer accessible, giving you weird errors like the ones you're seeing.
 

Antix
You could be on to something there, especially with the fragmentation shown below. I've also noticed that our Linux LUN has been dropping multipath sessions (unsure if that's related or not).

Would running the filesystem natively on ZFS make this more forgiving?

I've also been watching the output of zpool iostat -v XXXXXXXXXX-zpool 2 during periods when the I/O errors appear, but load/IOPS/throughput all seem very low:
(Note: two of the vdevs use 4TB disks instead of 6TB.)

Code:
										   capacity	 operations	bandwidth
pool									alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
XXXXXXXXXX-zpool						283T  53.3T	  5	 32  80.1K  4.05M
  raidz2								31.5T  1.01T	  0	  2	  0   366K
	gptid/0e8a9b3a-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
	gptid/0f1fddb5-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
	gptid/0fb68e45-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
	gptid/105093fc-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
	gptid/10e2a186-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
	da54p2								  -	  -	  0	  2	  0  91.5K
  raidz2								30.6T  1.87T	  0	  2  7.63K   366K
	gptid/12203ed8-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  2  1.91K  91.5K
	gptid/12b6df8f-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  2  1.91K  91.5K
	gptid/134ebd69-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  2  1.91K  91.5K
	gptid/13e10fee-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
	gptid/147d92b5-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
	gptid/15238425-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  2  1.91K  91.5K
  raidz2								30.6T  1.90T	  0	  2  15.3K   366K
	gptid/15d8ab8a-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
	gptid/167ca3ce-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  2  3.81K  91.5K
	gptid/171c395c-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  2  3.81K  91.5K
	gptid/17c0e2bc-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  2  3.81K  91.5K
	gptid/185e7d7d-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  2  3.81K  91.5K
	gptid/18fe9fdc-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
  raidz2								30.6T  1.94T	  2	  3  38.1K   427K
	gptid/19bdcf34-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  2	  3  9.53K   107K
	gptid/1a5dcd4a-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  2	  3  9.53K   107K
	gptid/1afe8cb1-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  2	  3  9.53K   107K
	gptid/1ba5984c-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  2	  3  9.53K   107K
	gptid/1c486324-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  3	  0   107K
	gptid/1cf11e99-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  3	  0   107K
  raidz2								30.8T  1.74T	  0	  3	  0   427K
	gptid/1dbf8842-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  3	  0   107K
	gptid/1e637cad-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  3	  0   107K
	gptid/1f0a43b3-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  3	  0   107K
	gptid/1fb3a9d7-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  3	  0   107K
	gptid/205e3b3b-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  3	  0   107K
	gptid/20ff4ecb-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  3	  0   107K
  raidz2								30.8T  1.68T	  1	  2  11.4K   366K
	gptid/21ddb317-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  2  1.91K  91.5K
	gptid/22859528-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
	gptid/233473e2-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  2  1.91K  91.5K
	gptid/23e4b3cb-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  2  1.91K  91.5K
	gptid/2492a20f-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  2  1.91K  91.5K
	gptid/25396ac6-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  2  3.81K  91.5K
  raidz2								30.7T  1.78T	  0	  2  7.62K   366K
	gptid/2620eeb0-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  2  1.91K  91.5K
	gptid/26c85644-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  2  1.91K  91.5K
	gptid/27782253-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
	gptid/2823aac1-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
	gptid/28d096e4-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  2  1.91K  91.5K
	gptid/29839189-c67a-11e6-a0a0-0cc47aaab1da	  -	  -	  0	  2  1.91K  91.5K
  raidz2								21.1T   624G	  0	  2	  0   366K
	gptid/c44d5f8c-d78c-11e6-8fa7-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
	gptid/c51c6a81-d78c-11e6-8fa7-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
	gptid/c5e35d24-d78c-11e6-8fa7-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
	gptid/c6a8e460-d78c-11e6-8fa7-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
	gptid/c776d735-d78c-11e6-8fa7-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
	gptid/c856dd3b-d78c-11e6-8fa7-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
  raidz2								21.1T   637G	  0	  2	  0   366K
	gptid/f10d9c5e-d78c-11e6-8fa7-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
	gptid/f1ec355c-d78c-11e6-8fa7-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
	gptid/f2bcde0f-d78c-11e6-8fa7-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
	gptid/f38f9fa2-d78c-11e6-8fa7-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
	gptid/f46cdc0f-d78c-11e6-8fa7-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
	gptid/f5439d92-d78c-11e6-8fa7-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
  raidz2								12.7T  19.8T	  0	  2	  0   366K
	gptid/f40eaa34-7330-11e7-a99a-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
	gptid/f4c6f299-7330-11e7-a99a-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
	gptid/f895ba56-7330-11e7-a99a-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
	gptid/fc602e23-7330-11e7-a99a-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
	gptid/00293ee2-7331-11e7-a99a-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
	gptid/03fe091d-7331-11e7-a99a-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
  raidz2								12.2T  20.3T	  0	  2	  0   366K
	gptid/1b73d5b0-7331-11e7-a99a-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
	gptid/1f3cf2e9-7331-11e7-a99a-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
	gptid/230d5e01-7331-11e7-a99a-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
	gptid/26d61336-7331-11e7-a99a-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
	gptid/2aab654d-7331-11e7-a99a-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
	gptid/2e7a1865-7331-11e7-a99a-0cc47aaab1da	  -	  -	  0	  2	  0  91.5K
--------------------------------------  -----  -----  -----  -----  -----  -----



Here is the output of those commands:
(I had cancelled the last scrub while it was running, as it was going to take 500+ hours, so I was wondering if that might be contributing.)
Code:
zpool list
NAME				SIZE  ALLOC   FREE  EXPANDSZ   FRAG	CAP  DEDUP  HEALTH  ALTROOT
XXXXXXXXXX-zpool   336T   283T  53.3T		 -	31%	84%  1.00x  ONLINE  /mnt
freenas-boot		222G  2.84G   219G		 -	  -	 1%  1.00x  ONLINE  -


Code:
zpool status
  pool: XXXXXXXXXX-zpool
 state: ONLINE
  scan: scrub canceled on Fri Jan  5 10:31:09 2018
config:

		NAME											STATE	 READ WRITE CKSUM
		XXXXXXXXXX-zpool							   ONLINE	   0	 0	 0
		  raidz2-0									  ONLINE	   0	 0	 0
			gptid/0e8a9b3a-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/0f1fddb5-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/0fb68e45-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/105093fc-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/10e2a186-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			da54p2									  ONLINE	   0	 0	 0
		  raidz2-1									  ONLINE	   0	 0	 0
			gptid/12203ed8-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/12b6df8f-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/134ebd69-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/13e10fee-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/147d92b5-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/15238425-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
		  raidz2-2									  ONLINE	   0	 0	 0
			gptid/15d8ab8a-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/167ca3ce-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/171c395c-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/17c0e2bc-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/185e7d7d-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/18fe9fdc-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
		  raidz2-3									  ONLINE	   0	 0	 0
			gptid/19bdcf34-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/1a5dcd4a-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/1afe8cb1-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/1ba5984c-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/1c486324-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/1cf11e99-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
		  raidz2-4									  ONLINE	   0	 0	 0
			gptid/1dbf8842-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/1e637cad-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/1f0a43b3-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/1fb3a9d7-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/205e3b3b-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/20ff4ecb-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
		  raidz2-5									  ONLINE	   0	 0	 0
			gptid/21ddb317-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/22859528-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/233473e2-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/23e4b3cb-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/2492a20f-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/25396ac6-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
		  raidz2-6									  ONLINE	   0	 0	 0
			gptid/2620eeb0-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/26c85644-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/27782253-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/2823aac1-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/28d096e4-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/29839189-c67a-11e6-a0a0-0cc47aaab1da  ONLINE	   0	 0	 0
		  raidz2-7									  ONLINE	   0	 0	 0
			gptid/c44d5f8c-d78c-11e6-8fa7-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/c51c6a81-d78c-11e6-8fa7-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/c5e35d24-d78c-11e6-8fa7-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/c6a8e460-d78c-11e6-8fa7-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/c776d735-d78c-11e6-8fa7-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/c856dd3b-d78c-11e6-8fa7-0cc47aaab1da  ONLINE	   0	 0	 0
		  raidz2-8									  ONLINE	   0	 0	 0
			gptid/f10d9c5e-d78c-11e6-8fa7-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/f1ec355c-d78c-11e6-8fa7-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/f2bcde0f-d78c-11e6-8fa7-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/f38f9fa2-d78c-11e6-8fa7-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/f46cdc0f-d78c-11e6-8fa7-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/f5439d92-d78c-11e6-8fa7-0cc47aaab1da  ONLINE	   0	 0	 0
		  raidz2-9									  ONLINE	   0	 0	 0
			gptid/f40eaa34-7330-11e7-a99a-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/f4c6f299-7330-11e7-a99a-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/f895ba56-7330-11e7-a99a-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/fc602e23-7330-11e7-a99a-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/00293ee2-7331-11e7-a99a-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/03fe091d-7331-11e7-a99a-0cc47aaab1da  ONLINE	   0	 0	 0
		  raidz2-10									 ONLINE	   0	 0	 0
			gptid/1b73d5b0-7331-11e7-a99a-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/1f3cf2e9-7331-11e7-a99a-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/230d5e01-7331-11e7-a99a-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/26d61336-7331-11e7-a99a-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/2aab654d-7331-11e7-a99a-0cc47aaab1da  ONLINE	   0	 0	 0
			gptid/2e7a1865-7331-11e7-a99a-0cc47aaab1da  ONLINE	   0	 0	 0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:08 with 0 errors on Wed Jan 10 03:45:08 2018
config:

		NAME		STATE	 READ WRITE CKSUM
		freenas-boot  ONLINE	   0	 0	 0
		  mirror-0  ONLINE	   0	 0	 0
			ada0p2  ONLINE	   0	 0	 0
			ada1p2  ONLINE	   0	 0	 0

errors: No known data errors

 

Nick2253
XXXXXXXXXX-zpool 336T 283T 53.3T - 31% 84% 1.00x ONLINE /mnt

I would bet dollars to donuts that this is your problem: 84% utilization is insanely high for ZFS, much less for ZFS block storage.

Furthermore, even though you've got 11 vdevs, they're all RAIDZ2, so I'm surprised you have enough I/O without a SLOG. I'll be honest, though: I don't have much experience with ZFS at that scale other than striped mirrors.
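As a rule of thumb, a RAIDZ vdev delivers roughly the random IOPS of a single member disk, so the pool's random capability scales with vdev count, not drive count. A back-of-envelope sketch (the 150 IOPS per 7200rpm disk is an assumed ballpark, not a measured figure for these HGST drives):

```shell
# Back-of-envelope random IOPS for a pool of RAIDZ vdevs:
# vdev count x per-disk random IOPS (RAIDZ rule of thumb).
pool_iops() { awk -v v="$1" -v d="$2" 'BEGIN { print v * d }'; }

pool_iops 11 150   # 11 RAIDZ2 vdevs of 7200rpm disks: about 1650 random IOPS
```

That is adequate for sequential backup workloads, but fragmentation on a nearly full copy-on-write pool turns even sequential I/O into random I/O, which is where this estimate starts to matter.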
 

Antix
Thank you for your input. Random I/O has generally been fairly good; this system stores backups, so I don't need insanely high IOPS.

What are your thoughts on migrating the LUNs to a zvol and presenting them over SMB, rather than iSCSI?
I'll also keep in mind that I can add extra disks, but at this scale the 80% target starts to hurt, so I don't know how sustainable that might be.
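For planning a migration like that, a rough copy-time estimate is useful. A sketch, assuming sustained throughput of about 1 GB/s over a single 10GbE path (an assumption; real rates vary with fragmentation, record size, and protocol overhead):

```shell
# Hours to copy a given number of TB at a sustained rate in GB/s (decimal units).
copy_hours() { awk -v tb="$1" -v gbs="$2" 'BEGIN { printf "%.1f", tb * 1000 / gbs / 3600 }'; }

copy_hours 85 1.0    # largest LUN (85TB) at ~1 GB/s: roughly 23.6 hours
```

In other words, emptying the largest LUN is a roughly one-day copy under ideal conditions, so the migration window needs to account for at least that.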
 

bigphil
What are your thoughts on migrating the LUNs to a zvol and presenting them over SMB, rather than iSCSI?

That doesn't make sense: a zvol is just a dataset that represents a block device. If you want to access the data over SMB, you just create a regular dataset and move the data to it. Are you not using zvols right now as the extent type for your iSCSI LUNs? Can you post a screenshot of the iSCSI extents from the FreeNAS GUI?
 

Nick2253
A zvol is block storage, so I'm assuming you mean a filesystem dataset. If you put the data directly on the filesystem and present it via SMB, you should get better performance.

The reason behind the utilization targets is that ZFS is a copy-on-write file system (https://pthree.org/2012/12/14/zfs-administration-part-ix-copy-on-write/). This means that any time data changes, ZFS writes the block to a new location and then marks the old block as free space; this prevents the write-hole problem. ZFS also uses a tree of cryptographic hashes to verify data, so when data changes, all the nodes above it in that tree change too and must be copied and written. These two factors combine to heavily fragment the file system, and the best way to combat that fragmentation is to leave plenty of open space on the drives.
 

Antix
Correct, I got my terminology mixed up; I meant transitioning the data to a dataset and sharing that over SMB.

Can you post a screen shot of iSCSI extents from the FreeNAS gui?
I can confirm the current extents are zvols.
[Attached screenshot: Extents.png]
 