iSCSI causing FreeNAS server to lock up

leprejohn

Cadet
Joined
Jun 1, 2019
Messages
9
Hello iX Community, how are you doing? It seems I've come across a bit of an issue using iSCSI.

I've recently added a 10 TB drive to my FreeNAS server. As this is the largest drive I've got, I didn't want to have it in my main pool (2x 500 GB and 2x 640 GB).

When transferring data from the RAID-Z1 pool to the single drive, it seems that if I copy more than one file, the FreeNAS console hits a watchdog timeout and terminates the connection. This forces the web GUI down, and the server loses its NIC/network settings on the console, forcing me to reboot to get the FreeNAS machine back online.

Before I set up the iSCSI I was doing SMB > SMB transfers using a Windows 10 machine, and this worked fine. I decided I wanted to set it up with iSCSI so I could use WinSCP to transfer directly from a remote server straight to the FreeNAS machine.

My other iSCSI target on the RAID-Z1 doesn't seem to have this issue when backing up my VMs using Veeam (transfers straight to the iSCSI target).

I was hoping to get some advice on this to try and fix the ISCSI issue I seem to be having.

Thanks Leprejohn
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
-- Moderator Note --

The forum rules, conveniently linked at the top of every page in red, require that users need to post relevant information about their systems, including a hardware manifest and other configuration information, which is crucial information to other posters who are trying to provide assistance.


As for iSCSI, it seems like you've created a really bad pool design, and that won't be helping you.

https://www.ixsystems.com/community...d-why-we-use-mirrors-for-block-storage.44068/
 

leprejohn

Cadet
Joined
Jun 1, 2019
Messages
9

Hi jgreco, the drive doesn't need any redundancy, which is why I've set it up in a separate pool with just the 10 TB drive.

I didn't think having an iSCSI target on a single pool/drive would cause such a performance impact and make the FreeNAS box lock up and drop the NIC.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Before I set up the iSCSI I was doing SMB > SMB transfers using a Windows 10 machine, and this worked fine. I decided I wanted to set it up with iSCSI so I could use WinSCP to transfer directly from a remote server straight to the FreeNAS machine.
Wow, that's a spectacularly bad way of moving data from one server to another. It's not better than your old SMB-to-SMB solution in any way, and it simultaneously has all the massive disadvantages of running block storage instead of file storage.

The "correct" admin-in-the-loop way of doing this is to use an SFTP client on one of the servers to connect to the other one. Between ZFS pools, replication is almost always the way to go.
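For illustration, a rough sketch of both approaches; the hostnames, pool, and dataset names below are placeholders, not details from this thread:

```shell
# Hypothetical example -- "remote-server", "tank", and "bigpool" are placeholders.

# Option 1: pull the file directly onto the FreeNAS box with an SFTP client,
# landing it on the target pool (no intermediate Windows copy, no iSCSI needed):
sftp user@remote-server:/path/to/bigfile.iso /mnt/bigpool/incoming/

# Option 2: between ZFS pools, snapshot and replicate instead of copying files:
zfs snapshot tank/data@migrate1
zfs send tank/data@migrate1 | zfs recv bigpool/data
```

Either way the data goes straight from source to destination pool, with ZFS (not a Windows machine in the middle) handling the writes.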

I didn't think having an iSCSI target on a single pool/drive would cause such a performance impact and make the FreeNAS box lock up and drop the NIC.
You'd think so, but we still have no idea what hardware you have.
 

leprejohn

Cadet
Joined
Jun 1, 2019
Messages
9

So the reason for the iSCSI on the single drive: when I transferred something from a remote server using my virtual machine, I would need to put it on, say, the virtual machine's desktop, then transfer it from the VM's desktop to the SMB share. That can be troublesome with large files, as I don't always have the free space available.

I thought that with the iSCSI target it would make more sense to use WinSCP to transfer from the remote server directly onto the iSCSI drive, since it appears as a drive in Windows, so WinSCP can download straight onto it. I hope that makes sense.

The data on the single drive isn't really important to me, which is why it has its own pool and no redundancy.

The hardware I've got is an i5-3470 with an MSI H61I-E35-V2W8 mobo, 16 GB of RAM, and an H200 in IT mode.

I might look into running an SFTP client on the FreeNAS box and connecting that way, but I thought I'd ask here about the iSCSI issue first.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
The hardware I've got is an i5-3470 with an MSI H61I-E35-V2W8 mobo, 16 GB of RAM, and an H200 in IT mode.
Realtek Ethernet. It's a small miracle it works at all, so to see it bringing a system down to its knees is no surprise.

I thought to ask on here about the issue with the iSCSI first.
Never run block storage unless you have to.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Hi jgreco, the drive doesn't need any redundancy, which is why I've set it up in a separate pool with just the 10 TB drive.

ZFS should really have redundancy and you can run into odd issues when you don't have it.

I didn't think having an iSCSI target on a single pool/drive would cause such a performance impact and make the FreeNAS box lock up and drop the NIC.

Realtek NICs are well known for being sucky even on Windows, and they can definitely be problematic on FreeBSD. The Realtek interfaces are powered by either one or two hamsters running on an exercise wheel to move your bits around. We find that often one of them gets sick or maybe dies; this usually has a severe negative impact on performance. Realteks are not real tech. Get an Intel card to replace it.

Additionally, iSCSI on systems with less than ... maybe 64 GB of RAM (32 in a pinch?) is not recommended and can't be expected to work particularly well, especially when supporting multiple pools.

https://www.ixsystems.com/community...res-more-resources-for-the-same-result.28178/

Veeam does not require iSCSI and works fine over CIFS. Putting an iSCSI datastore on RAID-Z1 is likely to have performance and optimization issues, and you probably don't want to do that.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
MSI H61I-E35-V2W8 mobo
This is the reason we want to know what hardware is being used. What looks like a little thing to you can make a huge difference.
 

drinking12many

Contributor
Joined
Apr 8, 2012
Messages
148
I use iSCSI for VMware datastores and RDMs for some test SQL clusters, and I had constant hangs and even FreeNAS crashes using a Realtek built-in NIC. I bought an Intel NIC after I started having some other problems, and so far I haven't had a single crash since disabling the Realtek NIC in the BIOS, though that is far from empirical evidence. I have only 24 GB in mine, but for my purposes it works well enough.
 