Existing iSCSI connection stops working after upgrading from Core to Scale

Jerami1981
Dabbler
Joined: Jan 4, 2018
Messages: 32
SuperMicro MBD-X11SPI-TF-O, Xeon Silver 4110, 190 GB 2133 ECC RDIMM, 2x LSI 9305 HBA, Mellanox MNPA19-XTR 10G
Pool 1-RaidZ2 Vdev#1 x6 16TB WD, Vdev#2 x6 16TB WD, Vdev#3 x6 16TB WD
Pool 2 Vdev#1 Z1 x6 10TB WD
Pool 3 Vdev#1 Z1 x6 16TB SG

I have a unique dilemma I am hoping to find a reasonable way out of. I currently export Pool 2 as an iSCSI target to a box running Server 2022. On that server I run a VM (also Server 2022) with the Storj app, and I pass the iSCSI disk through to the VM because Storj wants to see the storage as local. This has been running fine for around 13 months.

I would like to move away from iSCSI, which means I need to run my Storj VM locally on TrueNAS. I found Core's hypervisor very unreliable when running Windows, but on a test machine Scale seemed stable, so I went ahead and upgraded my server from Core 13.0-U3.1 to the newest Bluefin release. When the dust settled, Storj was complaining that its directory was missing. Everything looked good in Scale, so I went to my Windows Server box: the iSCSI initiator showed the connection as Connected, but Disk Management did not display the disk, and diskpart also failed to list that storage. I poked at what little I knew to check and ultimately rolled back to Core, where everything just started working again.

I have read that iSCSI may not be as well supported on Scale, which puts me in a bit of a chicken-or-egg situation: I need to move to Scale to spin up a local instance of Storj before I can migrate the data from Pool 2 over to Pool 3, but it appears I may need to be done with the iSCSI setup before I can move to Scale. Any insight into my issue, or a way to limp this along to the finish line?
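In case it helps narrow things down, this is roughly what I plan to check on both ends before rolling back next time. It is only a sketch of the diagnostics, not something I have run end to end on Bluefin, and the scst service name on the SCALE side is my assumption about how Bluefin runs its iSCSI target.

On the Windows Server box (elevated PowerShell):

    # Confirm the initiator has a live session and see whether any LUN sits behind it
    Get-IscsiSession
    Get-IscsiConnection
    # Force a storage rescan; a session that shows Connected with no disk in
    # Disk Management can simply mean the target is exporting zero LUNs
    Update-HostStorageCache
    Get-Disk

On the Scale box (shell):

    # Check that the zvol backing the extent survived the upgrade
    zfs list -t volume
    # Check that the iSCSI target service is running (Scale uses SCST, unlike Core's ctld)
    sudo systemctl status scst

If the zvol is still there and the iSCSI service is running but Windows still sees no disk, the next thing I would compare is the extent, target, and portal mapping under Shares > Block Shares (iSCSI) in the Scale UI against what Core had, to make sure the upgrade actually carried that config over.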
 