Is it possible to migrate Windows home directories to TrueNAS?

Traemandir

Cadet
Joined
May 16, 2023
Messages
6
Hi everyone,

Just wanted to post here and see if anyone can provide some insight, as I'm getting to the end of my rope here trying to set up a new TrueNAS appliance ...

For a little background, I manage IT for a small K-12 school district with roughly 300 staff members. All our production user data is stored on servers running Windows Server 2019, through Windows Folder Redirection. I decided to order a new TrueNAS appliance with the goal of migrating over user data and being able to phase out multiple servers in favor of a more powerful and more redundant storage solution. But I'm having a lot of trouble migrating a subset of test users into the TrueNAS in a way that is compatible with our existing environment. From what I read online, TrueNAS is supposed to support Windows home directories, but whether my specific scenario is supported is very unclear. I've tried to approach this from two different angles, but neither one has yielded the results I was hoping for ....


Scenario A - Using TrueNAS as an iSCSI target for a Windows Server VM
My original plan was to use the TrueNAS as an iSCSI target for our Hyper-V server, and to use this setup for hosting a Windows file server, in addition to a few other small VMs. The issue I ran into in this scenario is that while I had no issue migrating test user data to the new virtual machine whose storage is hosted on the new Zvol, performance on the user end was very slow. I'm not sure if there are optimizations I can make on the Hyper-V side for better IO performance, or if this kind of setup will always perform slowly. I know the bottleneck lies within the virtual machine because transfer speeds to the TrueNAS directly were over 10x faster. I'll include full system specs at the bottom of this post. But for the specifics of this scenario:
1. Virtual machine was configured with a 2TB VHDX storage drive
2. VHDX file lives on a partition formatted as ReFS (I believe the partition was set to use a 64 KB cluster size?)
3. ReFS partition lives on the iSCSI device
4. iSCSI device connects to the Zvol over a 10Gb fiber connection, through a dedicated switch (jumbo frames enabled)

------

Scenario B - Hosting user directories on TrueNAS directly via SMB
Due to the performance issues with the virtual machine, I figured it would probably be a better option to try hosting the home directories on a TrueNAS dataset directly using SMB. I was able to create a 3TiB dataset and an SMB share with full control for Domain Admins. Then from my Windows laptop, I was able to access the share successfully, created folders for our different schools, and updated the security permissions on these subfolders through Windows to restrict access to only the IT department and the Active Directory security group for that school's users.

Problem #1 - When attempting to copy user data to the new location, the transfer would hang on each user subfolder (My Documents, Downloads, etc.) whose permissions could not be updated in the destination directory - "The file or folder could not be found" (???). The workaround I found is that the transfer would continue if I went into the ACL settings for this dataset and reapplied the ACL recursively to the subfolders. But doing this for every subfolder is impractical. I set the owner to our Active Directory "Domain Admins" group, and gave this group full control as well. The method for the data migration is to use the robocopy command from the original Windows server, with the syntax:

robocopy "source" "destination" /m /s /copyall

The syntax here is important because it copies all NTFS permissions, including file ownership. Copying all the existing permissions is critical for Folder Redirection to map correctly when a user logs on to any of our PCs.
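
For anyone curious, a slightly fuller version of that command (a sketch only - the paths are placeholders, and the /E /B and retry/log switches are additions beyond what I actually ran):

# Sketch only - /E includes subfolders (even empty ones), /B uses backup mode so an admin can copy files
# it has no explicit ACL access to, /COPYALL preserves data, attributes, timestamps, security, owner and
# auditing info, and the retry/log switches keep one stuck file from hanging the whole run.
robocopy "\\oldserver\homes\user1" "\\truenas\department1\home\user1" /E /B /COPYALL /R:1 /W:1 /LOG:C:\Temp\migrate-user1.log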

Problem #2 - I had a little better luck creating a separate dataset and SMB share with the "used as home directory" option checked. But in this scenario, when a user home directory is created, I cannot see their folder myself when navigating to the share from a Domain Admin account. This is an issue because in our current setup, whenever there's an issue we're able to view their home folder on the remote share and "Take ownership" through Windows. Has anyone else gotten Folder Redirection to work using TrueNAS SMB as the destination?
------


So I guess my questions are the following:

1. Does anyone have Windows user data stored on TrueNAS using Folder Redirection?
2. Did you configure this using an iSCSI target for a Windows server, or using TrueNAS as the SMB target, or something else?
3. How did you migrate your user data?

I'll be happy to provide any additional information if you have any questions... I appreciate the help! Sorry this post didn't include much in the way of TrueNAS configuration or screenshots out of the gate, I just needed to get this post out before leaving for the day. System specs below:

TrueNAS appliance:
- Intel(R) Xeon(R) Silver 4210R CPU
- 98 GB memory
- 6 high-capacity HDDs configured in RAIDZ2, with a small SSD cache. Compression at the default value, ~30 TiB usable space
- 10Gb fiber networking

Thank you!
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
This hits close to home for me. Been in your shoes, was in K12 for a decade, and managed IT in K12 for half that.

For a little background, I manage IT for a small K-12 school district with roughly 300 staff members. All our production user data is stored on servers running Windows Server 2019, through Windows Folder Redirection. I decided to order a new TrueNAS appliance with the goal of migrating over user data and being able to phase out multiple servers in favor of a more powerful and more redundant storage solution. But I'm having a lot of trouble migrating a subset of test users into the TrueNAS in a way that is compatible with our existing environment. From what I read online, TrueNAS is supposed to support Windows home directories, but whether my specific scenario is supported is very unclear. I've tried to approach this from two different angles, but neither one has yielded the results I was hoping for ....
First, and I am not going to tell you how to live your life, but can I ask a question? Roaming profiles were a fantastic idea back in the day. But why would you intentionally put all of your eggs in one basket? I have lived through two different versions of this scenario, and neither of them ended well.
In one case early on in my career, my IT Manager lost a server which hosted roaming profiles for all of our high school staff. He didn't have backups. Some users had synced their data to their desktops through the built-in Windows utility, but most of them lost data. When I moved on to my managerial role in another district, I inherited an environment which had recently been destroyed by ransomware. They too had all of their user data in roaming profiles, and they had lost their backup system to the same malware attack.
Windows roaming profiles are inherently a single point of failure. Most of my colleagues, as well as myself, have migrated to using OneDrive or Google Drive to solve this very issue, and then you can back up that data locally with a whole host of tools. There's also a whole host of cool things you can do from a data governance perspective that are a lot easier this way.

FWIW, sometimes it's worth considering thinking outside of the box...

Scenario A - Using TrueNAS as an iSCSI target for a Windows Server VM
My original plan was to use the TrueNAS as an iSCSI target for our Hyper-V server, and to use this setup for hosting a Windows file server, in addition to a few other small VMs. The issue I ran into in this scenario is that while I had no issue migrating test user data to the new virtual machine whose storage is hosted on the new Zvol, performance on the user end was very slow. I'm not sure if there are optimizations I can make on the Hyper-V side for better IO performance, or if this kind of setup will always perform slowly. I know the bottleneck lies within the virtual machine because transfer speeds to the TrueNAS directly were over 10x faster. I'll include full system specs at the bottom of this post. But for the specifics of this scenario:
1. Virtual machine was configured with a 2TB VHDX storage drive
2. VHDX file lives on a partition formatted as ReFS (I believe the partition was set to use a 64 KB cluster size?)
3. ReFS partition lives on the iSCSI device
4. iSCSI device connects to the Zvol over a 10Gb fiber connection, through a dedicated switch (jumbo frames enabled)

------
This is certainly a doable way to accomplish what you are trying to do. From Windows' perspective the TrueNAS is just a hard disk, and all the configuration is just using Windows/Robocopy to do Windows things. There is no inherent reason that it should be slow, but without more information it's difficult to help.
  • Define slow. What performance metrics can you share?
I know the bottleneck lies within the virtual machine because transfer speeds to the TrueNAS directly were over 10x faster

But for the specifics of this scenario:
1. Virtual machine was configured with a 2TB VHDX storage drive
2. VHDX file lives on a partition formatted as ReFS (I believe the partition was set to use a 64 KB cluster size?)
3. ReFS partition lives on the iSCSI device
4. iSCSI device connects to the Zvol over a 10Gb fiber connection, through a dedicated switch (jumbo frames enabled)
Why are you doing it this way - is there a specific reason? If I were designing a system like this, I would store the VM itself on either a different pool or on local media. I'm talking about the "C" drive here. All you need is 100GB or so. Then you can directly mount the iSCSI LUN from the TrueNAS into this VM. This can be your "D" drive, and where you should store your user profiles. I would even give the VM its own dedicated network interfaces just for iSCSI, and I would give it two. You can use MPIO in iSCSI to get some redundancy and a performance increase that way. Your problem here is network bandwidth, which is where this suggestion is coming from.
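
Roughly, the in-guest side of that would look something like the sketch below (just a sketch - this runs inside the file-server VM, assumes the two dedicated iSCSI NICs are already attached to it, and the portal address is a placeholder, not your real one):

# Sketch only - enable MPIO, let it claim iSCSI disks, then connect to the TrueNAS portal from inside the VM
Install-WindowsFeature Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI
Set-Service MSiSCSI -StartupType Automatic
Start-Service MSiSCSI
New-IscsiTargetPortal -TargetPortalAddress "10.0.10.10"
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true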

I'd be happy to write some thoughts on "B" if you are interested also, but it's late and I am tired. Yet another alternative to consider, and granted I have no idea what your situation is: iXsystems literally sells TrueNAS systems and support that can help you do what you are trying to do. You could buy a system that's supported, and since the workload you are describing is mission critical to your users, it's probably not a bad idea. You can then re-use this system here to receive ZFS snapshots from the production system and have some good onsite backups...
 
Joined
Jul 3, 2015
Messages
926
I can't help with your exact scenario and I have no real experience with Windows Folder Redirection; however, I do use DFS with TrueNAS and that works great. The permission issue you are seeing in scenario B is interesting and surprises me, as I have migrated a lot of traditional Windows shares over to TN without issue.

I create a new dataset and select SMB as the share type. I then edit permissions and select the preset 'restricted' and leave user and group as root/wheel. I then remove the owner@ and group@ ACEs and add my own. We have a 'storage admin' group in AD so I add that with full control. Then add any other groups that are needed. Then just share it out via SMB with default parameters (at least to begin with). Naturally you can add groups via Windows if you like once you have your basic admin group applied. I have historically used FastCopy to move data within Windows environments and it works great even at preserving ACLs having shifted over 1PB in the last several years.
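
If it helps, once that basic admin group is applied you can also lay down the equivalent grant from the Windows side with icacls - a rough sketch, with placeholder domain, group, and share names:

# Sketch only - grant a storage-admin AD group full control, inherited by all subfolders and files
icacls "\\truenas\department1" /grant "MYDOMAIN\Storage Admins:(OI)(CI)F" /T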

Anyway like I said not exactly the info you were after but perhaps this may help in addressing your permission issues in scenario B.

Good luck and I would love to hear how you get on. I think you are going down the right path so don't give up.
 

Traemandir

Cadet
Joined
May 16, 2023
Messages
6
First, and I am not going to tell you how to live your life, but can I ask a question? Roaming profiles were a fantastic idea back in the day. But why would you intentionally put all of your eggs in one basket?
Thank you for your question! You are right, and I agree that moving to a cloud storage solution is the better long-term solution. The vast majority of student data is already in Google Drive, but there's a little more work that needs to be done on my part to determine if teachers should be migrated to Google Workspace, Microsoft 365, or some combination of the two. My short-term goal is to more effectively manage and back up our internal storage, and then long term to transition all user data offsite. When this happens, our internal storage can be repurposed to back up our cloud data like you suggested. We already have our mission critical applications moved to cloud hosted environments; user data is just going to take a few more years given our current resources... or lack thereof, LOL! Which I'm sure you're familiar with given your past experience in K12.

  • Define slow. What performance metrics can you share?
I don't have "official" benchmarks, but when copying user data or large test files from my laptop to a SMB share hosted directly on TrueNAS the 1Gbps ethernet connection on my laptop is fully saturated. The TrueNAS has no problem keeping up with the full 1Gbps write, or at lease close to it, sustained over a long data transfer. Meanwhile when I implementing the Zvol w/ iSCSI and a virtual Windows file server, the write speed to this file server would start out a few hundred Mbps, and would quickly drop down to 1-3 Mbps. On top of this, there was an inherent sense of "latency" when browsing my home directory. Even just traversing folders, you would have to wait about half a second for child items and sub folders to appear in Windows explorer.

Why are you doing it this way - is there a specific reason? If I were designing a system like this, I would store the VM itself on either a different pool or on local media. I'm talking about the "C" drive here. All you need is 100GB or so. Then you can directly mount the iSCSI LUN from the TrueNAS into this VM. This can be your "D" drive, and where you should store your user profiles. I would even give the VM its own dedicated network interfaces just for iSCSI, and I would give it two. You can use MPIO in iSCSI to get some redundancy and a performance increase that way. Your problem here is network bandwidth, which is where this suggestion is coming from.
Well, I suppose the main reason is just that this is how I've always done it. :smile: But I am open to new things. The advantage of the .VHDX file is that it allows for "checkpoints" (snapshots) in Hyper-V. There should be little to no performance loss with a .VHDX file on a ReFS file system, as this is what Microsoft designed the ReFS filesystem for. But given the results, I imagine this combination on top of iSCSI on top of TrueNAS is at least part of the reason why the system is not performing.

I like your suggestion about connecting the virtual machine to the LUN directly! I had already configured the virtual machine to have a small C: drive provided as a .VHDX on the host server's local storage, then a D: drive for storage provided by a .VHDX file on the iSCSI device. So to implement your suggestion, I believe all I would have to do is pass the iSCSI NIC through from the host to the virtual machine, and then configure the iSCSI initiator on the virtual machine instead? This would bypass both the ReFS partition and .VHDX file when reading/writing to the virtual machine. Alternatively ... would it be close enough to keep the host server as the iSCSI initiator, then pass the block device through to the virtual machine?

Lastly on MPIO, I use this for our older HPE SAN iSCSI network, but I did not implement it here. The new iSCSI network I created is a pair of 10G fiber connections connected from the TrueNAS to a dedicated pair of stacked Cisco switches with LACP, then another pair of 10G fiber connections from these stacked switches to the Hyper-V server in LACP. I went with LACP instead of MPIO because I think a single 10G fiber connection is already way faster than the TrueNAS can be expected to perform, given the storage pool is all regular HDDs and not SSDs.
Yet another alternative to consider, and granted I have no idea what your situation is: iXsystems literally sells TrueNAS systems and support that can help you do what you are trying to do. You could buy a system that's supported, and since the workload you are describing is mission critical to your users, it's probably not a bad idea. You can then re-use this system here to receive ZFS snapshots from the production system and have some good onsite backups...

Well ....... this is actually a TrueNAS R20 we're talking about :smile:. The technicians I've spoken with have been polite and very knowledgeable about TrueNAS and ZFS, so I hope posting here didn't throw anyone under the bus, as that was not my intention. I've spent a lot of time going back and forth with support on this (at least for scenario B) and haven't made a lot of headway, so I figured I would take a break and try consulting the community before trying again. This way I should have a better understanding myself of how I want to approach implementing the TrueNAS in our environment, and what my expectations should be.

As a couple bonus questions if you don't mind ... I'm thinking scenario A makes more sense from a compatibility standpoint, but *should* scenario B be possible based on TrueNAS > Windows compatibility? Also back on the scenario A virtual windows server track .... would it make sense to upgrade the TrueNAS OS to Scale, and host the Windows server on the TrueNAS directly with KVM? This would cut out the need for iSCSI entirely, but at the cost of taking system resources away from the TrueNAS host.
-------

I apologize again that this post is so long .... but I really appreciate your reply, and the information you have provided.
 

c77dk

Patron
Joined
Nov 27, 2019
Messages
468
The slowness of your iSCSI might be related to the pool setup (RAIDZ2) - try reading https://www.truenas.com/community/resources/the-path-to-success-for-block-storage.197/
The cache mentioned - is that set up for L2ARC or SLOG?

And how are the sync settings configured?
Sync vs. async makes a huge difference in performance / demands on disks/SSDs (see earlier link) - but remember not to just disable sync if you like the data :smile:
 

Traemandir

Cadet
Joined
May 16, 2023
Messages
6
I create a new dataset and select SMB as the share type. I then edit permissions and select the preset 'restricted' and leave user and group as root/wheel. I then remove the owner@ and group@ ACEs and add my own. We have a 'storage admin' group in AD so I add that with full control. Then add any other groups that are needed. Then just share it out via SMB with default parameters (at least to begin with). Naturally you can add groups via Windows if you like once you have your basic admin group applied. I have historically used FastCopy to move data within Windows environments and it works great even at preserving ACLs having shifted over 1PB in the last several years.

Thank you for your reply!! I made a new "test" dataset to start over following your instructions. I gave 'domain admins' full control, then navigated to the share and created a subfolder IT, then two subfolders under IT, 'Home' and 'Shared'. Problem #1 comes when attempting to robocopy user data from our Windows server to the TrueNAS:

"ERROR 1307 (0X0000051B) Copying the NTFS Security to Destination File <destination> This security ID may not be assigned as the owner of this object"

Fortunately, I'm able to fix this one by adding the force unknown acl user = yes auxiliary parameter to the share in TrueNAS, then restarting the SMB service. (I think this is caused by some unused system permissions on the source data that TrueNAS doesn't recognize.) Attempting the robocopy again appears to work successfully as it copies all my user data, but will get stuck at the following error once the copy gets to the next user's home directory:

"ERROR 2 (0 x 00000002) Changing File Attributes <destination> The system cannot find the file specified."

For each home directory, the user has exclusive ownership and access permissions to each of their primary folders (My Documents, Downloads, etc.). My account does not have permission to access the folders, but since my account is a local administrator on the server, robocopy /b will bypass ownership, and for manual troubleshooting I can take ownership of a user's folder. This functionality just doesn't seem to transfer to the TrueNAS share... it's like robocopy creates the folder, but then gets locked out of the folder after it copies the ACL for the folder, and then cannot copy the subfolders and files.
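
(For reference, what I'm attempting from the Windows side when a folder gets into that locked-out state looks roughly like the sketch below - the path and account name are placeholders:)

# Sketch only - inspect the ACL/owner robocopy left on the copied folder, then try to hand ownership to Domain Admins
icacls "\\truenas\department1\home\user1\Documents"
icacls "\\truenas\department1\home\user1\Documents" /setowner "MYDOMAIN\Domain Admins" /T /C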

I guess my real question is, is it possible to work around this in my environment? Or is this a limitation of compatibility between ZFS ACLs and Windows NTFS permissions that will not be possible to overcome?

Thank you for your reply, and for the encouragement!
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
For each home directory, the user has exclusive ownership and access permissions to each of their primary folders (My Documents, Downloads, etc.). My account does not have permission to access the folders, but since my account is a local administrator on the server, robocopy /b will bypass ownership, and for manual troubleshooting I can take ownership of a user's folder. This functionality just doesn't seem to transfer to the TrueNAS share... it's like robocopy creates the folder, but then gets locked out of the folder after it copies the ACL for the folder, and then cannot copy the subfolders and files.
Just so that I understand, what is highlighted in bold is on your current existing Windows server, correct? Because that is in my experience the default behavior from a Windows admin point of view.
I guess my real question is, is it possible to work around this in my environment? Or is this a limitation of compatibility between ZFS ACLs and Windows NTFS permissions that will not be possible to overcome?
No, what you are trying to do should work. I can do some testing in my homelab here and get back to you on what issues I run into and see if I can work around them.

But like I said in my earlier post, you are probably better off doing it the ZVOL way, for no other reason than that Samba is not Windows SMB, and FreeBSD/ZFS NTFS ACLs are quirky when we're talking about cascading permissions like this. I'm not saying either way is the answer, but it'll certainly be an easier go.
 
Joined
Jul 3, 2015
Messages
926
Ah ok, I think I see the problem. The account I use to shift data around has full control over all data, hence no funky issues around permissions. Sounds like you are relying on Windows internals to ‘trust’ that you are ok to move the data as you are a local admin on the server.

So just to clarify, the user you are trying to move the data around with doesn’t have full control over all the data?
 

Traemandir

Cadet
Joined
May 16, 2023
Messages
6
The slowness of your iSCSI might be related to the pool setup (RAIDZ2) - try reading https://www.truenas.com/community/resources/the-path-to-success-for-block-storage.197/
The cache mentioned - is that set up for L2ARC or SLOG?

And how are the sync settings configured?
Sync vs. async makes a huge difference in performance / demands on disks/SSDs (see earlier link) - but remember not to just disable sync if you like the data :smile:
Thank you for the link! This definitely gives me some things to think about. For starters, I thought that RAIDZ2 data was striped between the drives, but reading here, that may not be the case. Although, I did try replacing this with mirrored vdevs originally and didn't see any significant improvement. I don't think our storage needs qualify as "high performance", since it is predominantly just user documents... but I do need it to not be SLOW. I also didn't realize how big of an impact pool occupancy had ... I may give this a try with a smaller zvol in a larger pool and see how much of an improvement this makes.
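
(For my own notes, the striped-mirror layout that resource recommends for block storage would look roughly like the sketch below with six disks - device names are placeholders, it would mean rebuilding the pool, and in practice I'd do it through the TrueNAS UI rather than the CLI:)

# Sketch only - six disks as three striped mirror vdevs, versus one 6-wide RAIDZ2 vdev
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5
zpool status tank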

LOL!! I do believe I have sync turned on. For the most part in our use case, I could probably get away with turning it off ...... but I'll leave this as a very last resort, as our users do like their data. o_O
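
(If I do experiment, sync is a per-dataset/zvol ZFS property rather than a global switch, so from the shell it would be something along these lines - the pool/zvol name is a placeholder:)

# Sketch only - check and change the sync property on the zvol backing the iSCSI share
zfs get sync tank/iscsi-zvol
zfs set sync=always tank/iscsi-zvol      # safest for VM/iSCSI data, but slow without a SLOG
zfs set sync=disabled tank/iscsi-zvol    # fastest, but risks losing recent writes on power failure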

Just so that I understand, what is highlighted in bold is on your current existing Windows server, correct? Because that is in my experience the default behavior from a Windows admin point of view.
Yes, that is correct. I agree it doesn't deviate from what a Windows environment considers the norm, I just don't have enough experience with storage on Linux/Unix, and didn't know this would not be the case.

No, what you are trying to do should work. I can do some testing in my homelab here and get back to you on what issues I run into and see if I can work around them.

But like I said in my earlier post, you are probably better off doing it the ZVOL way, for no other reason than that Samba is not Windows SMB, and FreeBSD/ZFS NTFS ACLs are quirky when we're talking about cascading permissions like this. I'm not saying either way is the answer, but it'll certainly be an easier go.
OK cool! Based on your analysis I probably will switch gears back to the ZVOL route, but it's still helpful to know that this is doable.

Ah ok, I think I see the problem. The account I use to shift data around has full control over all data, hence no funky issues around permissions. Sounds like you are relying on Windows internals to ‘trust’ that you are ok to move the data as you are a local admin on the server.

So just to clarify, the user you are trying to move the data around with doesn’t have full control over all the data?
Yeahhhh.... I think the way you're explaining this is technically correct, and herein lies the compatibility issue. I am using my account when attempting to copy data, and I am a member of the 'domain admins' group. So from my perspective I do have "full control" over the data. But technically, for the users' home directories, I am NOT the owner, and do not have access unless I perform the "take ownership" function. Or in this use case where I'm trying to migrate data, robocopy /b requires elevation and uses this to bypass the permissions entirely. So robocopy /b is able to begin copying a user's folder, but immediately gets stuck when attempting to change ownership of the newly copied folder on the TrueNAS share. I'll draw up a quick example of our file structure below ...

- //storage
----> department1 (SMB share to department1 users, and IT dept. I have full control)
-------> home
----------> user1 (inheritance disabled, user1 is exclusive owner)
----------> user2 (inheritance disabled, user2 is exclusive owner)
-------> shared
----> department2 (SMB share to department2 users, and IT dept. I have full control)
-------> home
----------> user3 (inheritance disabled, user3 is exclusive owner)
----------> user4 (inheritance disabled, user4 is exclusive owner)
-------> shared
etc ...
 
Joined
Jul 3, 2015
Messages
926
I’m surprised you don’t have a domain admin or storage admin account with default full control. All our shares at work, be it personal home space or shared space, have a storage admin with full control throughout the filesystem. I guess it would be a massive pain to try and retrofit that now. It kind of makes sense that if you are trying to move/modify permissions and you don’t have the rights to do that, it would throw an error. Can’t think of a way around it, sorry.
 

Traemandir

Cadet
Joined
May 16, 2023
Messages
6
I’m surprised you don’t have a domain admin or storage admin account with default full control. All our shares at work, be it personal home space or shared space, have a storage admin with full control throughout the filesystem. I guess it would be a massive pain to try and retrofit that now. It kind of makes sense that if you are trying to move/modify permissions and you don’t have the rights to do that, it would throw an error. Can’t think of a way around it, sorry.
LOL, this is true .... no problem though, I appreciate you taking the time to reply! What you're saying makes sense; I totally understand how having a storage admin with default full control would have solved this problem and improved other quality-of-life aspects. If you don't mind my asking, how is home storage implemented in your environment? In our environment we create the initial shared folders for the departments, but then the home directories themselves are created by the Windows logon process as defined by the Folder Redirection Group Policy settings. I know in the GPO there is a setting for "grant user exclusive rights" which we have enabled, but the alternative would be that all users in a department could access each other's home folders.
 
Joined
Jul 3, 2015
Messages
926
We have a workflow manager that checks certain OUs in AD, and when new accounts arrive it goes off and provisions them a home directory - essentially it populates the AD home folder attribute with a network path, creates them a folder on a certain server somewhere, and sets permissions via a template. This template ensures all the right people have the correct access.
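
(A rough PowerShell equivalent of that provisioning step, purely as an illustration - the domain, group, paths, and drive letter are placeholders, not our actual workflow:)

# Sketch only - create the home folder, apply a simple permission template, and point the AD account at it
Import-Module ActiveDirectory
$user = "user1"
$homePath = "\\truenas\department1\home\$user"
New-Item -ItemType Directory -Path $homePath | Out-Null
icacls $homePath /inheritance:d
icacls $homePath /grant "MYDOMAIN\${user}:(OI)(CI)F" "MYDOMAIN\Storage Admins:(OI)(CI)F"
Set-ADUser -Identity $user -HomeDirectory $homePath -HomeDrive "H:"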
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
We have a workflow manager that checks certain OUs in AD and when new accounts arrive it goes off and provisions them a home directory which essentially populates AD home folder to a network path and creates them a folder on a certain server somewhere and permissions are set via a template. This template ensures all the right people have the correct access.
That's actually pretty neat. I've never seen anything quite like that. We had Classlink at my last gig and it did some automatic account provisioning, but generating home directories on an SMB share was not something it could do.
 
Joined
Jul 3, 2015
Messages
926

curtissteam

Cadet
Joined
Jul 21, 2023
Messages
2
Thank you for your question! You are right, and I agree that moving to a cloud storage solution is the better long-term solution. The vast majority of student data is already in Google Drive, but there's a little more work that needs to be done on my part to determine if teachers should be migrated to Google Workspace, Microsoft 365, or some combination of the two. My short-term goal is to more effectively manage and back up our internal storage, and then long term to transition all user data offsite. When this happens, our internal storage can be repurposed to back up our cloud data like you suggested. We already have our mission critical applications moved to cloud hosted environments; user data is just going to take a few more years given our current resources... or lack thereof, LOL! Which I'm sure you're familiar with given your past experience in K12.


I don't have "official" benchmarks, but when copying user data or large test files from my laptop to a SMB share hosted directly on TrueNAS the 1Gbps ethernet connection on my laptop is fully saturated. The TrueNAS has no problem keeping up with the full 1Gbps write, or at lease close to it, sustained over a long data transfer. Meanwhile when I implementing the Zvol w/ iSCSI and a virtual Windows file server, the write speed to this file server would start out a few hundred Mbps, and would quickly drop down to 1-3 Mbps. On top of this, there was an inherent sense of "latency" when browsing my home directory. Even just traversing folders, you would have to wait about half a second for child items and sub folders to appear in Windows explorer.


Well, I suppose the main reason is just that this is how I've always done it. :smile: But I am open to new things. The advantage of the .VHDX file is that it allows for "checkpoints" (snapshots) in Hyper-V. There should be little to no performance loss with a .VHDX file on a ReFS file system, as this is what Microsoft designed the ReFS filesystem for. But given the results, I imagine this combination on top of iSCSI on top of TrueNAS is at least part of the reason why the system is not performing.

I like your suggestion about connecting the virtual machine to the LUN directly! I had already configured the virtual machine to have a small C: drive provided as a .VHDX on the host server's local storage, then a D: drive for storage provided by a .VHDX file on the iSCSI device. So to implement your suggestion, I believe all I would have to do is pass the iSCSI NIC through from the host to the virtual machine, and then configure the iSCSI initiator on the virtual machine instead? This would bypass both the ReFS partition and .VHDX file when reading/writing to the virtual machine. Alternatively ... would it be close enough to keep the host server as the iSCSI initiator, then pass the block device through to the virtual machine?

Lastly on MPIO, I use this for our older HPE SAN iSCSI network, but I did not implement it here. The new iSCSI network I created is a pair of 10G fiber connections connected from the TrueNAS to a dedicated pair of stacked Cisco switches with LACP, then another pair of 10G fiber connections from these stacked switches to the Hyper-V server in LACP. I went with LACP instead of MPIO because I think a single 10G fiber connection is already way faster than the TrueNAS can be expected to perform, given the storage pool is all regular HDDs and not SSDs.


Well ....... this is actually a TrueNAS R20 we're talking about :smile:. The technicians I've spoken with have been polite and very knowledgeable about TrueNAS and ZFS, so I hope posting here didn't throw anyone under the bus, as that was not my intention. I've spent a lot of time going back and forth with support on this (at least for scenario B) and haven't made a lot of headway, so I figured I would take a break and try consulting the community before trying again. This way I should have a better understanding myself of how I want to approach implementing the TrueNAS in our environment, and what my expectations should be.

As a couple bonus questions if you don't mind ... I'm thinking scenario A makes more sense from a compatibility standpoint, but *should* scenario B be possible based on TrueNAS > Windows compatibility? Also back on the scenario A virtual windows server track .... would it make sense to upgrade the TrueNAS OS to Scale, and host the Windows server on the TrueNAS directly with KVM? This would cut out the need for iSCSI entirely, but at the cost of taking system resources away from the TrueNAS host.
-------

I apologize again that this post is so long .... but I really appreciate your reply, and the information you have provided.
I disagree with NickF's suggestion that putting all user data in roaming profiles is a bad idea. Roaming profiles can provide centralized data management and consistency across multiple devices, making it easier for users to access their data from different locations. While it's true that cloud storage solutions offer advantages, there's still a valid use case for roaming profiles, especially in scenarios where cloud solutions might not be feasible due to resource limitations or specific requirements.
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
I disagree with NickF's suggestion that putting all user data in roaming profiles is a bad idea. Roaming profiles can provide centralized data management and consistency across multiple devices, making it easier for users to access their data from different locations. While it's true that cloud storage solutions offer advantages, there's still a valid use case for roaming profiles, especially in scenarios where cloud solutions might not be feasible due to resource limitations or specific requirements.
The industry as a whole has shifted away from this methodology for a whole host of reasons: cryptomalware, single points of failure, difficulty migrating data (literally not possible without using specialized tools, i.e. Robocopy), scalability concerns, and a whole host of other pretty tough sells. We have better options than we did when Microsoft went down this path, and we have collectively learned a lot from the mistakes made here.

Feel free to do whatever you want, but I don't even think Microsoft considers this best practice anymore.
 