Looking for your input! (Microsoft Lab - Hyper-V)

Status
Not open for further replies.

geekmaster64

Explorer
Joined
Mar 14, 2018
Messages
50
Hey there,

I have an HP DL180 G6 at home running the latest and greatest FreeNAS with:
Dual E5606s (2.13GHz quad-cores, easily upgradeable)
16GB ECC 10600R (easily upgradeable)
6x 2TB WD RE4 drives (with 4 more on the way)
1x 250GB SSD (Samsung 850 EVO)
1x 128GB SSD (something not so special that's new, and I don't care if it dies)
HP P410 with each drive in RAID0 (HBA being delivered this weekend :))
Dual-port 1GbE (built-in)
Dual-port Intel 1GbE (ext)

Hyper-V server (soon to be replaced by an R620 with E5-2660s and more goodies, or an R810 with quad 10-cores):
DL380 G6
Dual E5540s
56GB RAM
Server 2016 Datacenter (because of licensing for me, and why not when it's free for my lab)

Workload:
Server 2016 VMs with a few Ubuntu
1 SQL Server (2016)
Mostly applications from Microsoft and other random crap (System Center, Windows Azure Pack, Microsoft Project, Microsoft Dynamics AX)
1 VM is (surprise!) a Plex server (1 VHDX for the OS and another for media that is several TB in size, around 4)

I have the setup on FreeNAS as:

RAID10 (3 mirrors with 2VDev's in each mirror)
Presented the volume (4K block size on FreeNAS) to Hyper-V via iSCSI, with MPIO on separate subnets and Least Queue Depth; the volume is formatted at 4K and the VMs are formatted at 4K to ensure block alignment.
I am testing out GZIP-9 because my CPUs are bored; when migrating VMs to the new zvol they barely hit 50-60%, then dropped to less than 6% with all 8 VMs running, and frankly I'm going to have a fair bit of stuff on this as time goes on.
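For reference, here's roughly what that layout amounts to at the command line. This is only a sketch: the pool name "tank", the zvol name, the disk device names and the 4TB size are placeholders, and on FreeNAS all of this is really done through the GUI.

    # Three 2-disk mirrors striped together (the "RAID10"-style pool)
    zpool create tank \
        mirror da0 da1 \
        mirror da2 da3 \
        mirror da4 da5

    # A zvol to export over iSCSI: 4K volume block size to line up with the
    # 4K NTFS formatting on the Hyper-V side, plus the gzip-9 experiment
    # (lz4 is the usual default and much lighter on the CPUs)
    zfs create -V 4T -o volblocksize=4K -o compression=gzip-9 tank/hyperv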

I have the 128GB SSD enabled as a ZIL (SLOG) for the system. After a whole lot of reading, I have been convinced that I should get more memory before enabling the 250GB SSD as an L2ARC.
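In case it helps anyone following along, attaching those SSDs is a one-liner each (again just a sketch; the pool and device names are placeholders):

    # 128GB SSD as the SLOG (separate ZFS intent log) for the pool
    zpool add tank log ada1

    # Later, once the RAM is maxed out, the 250GB SSD could go in as L2ARC:
    # zpool add tank cache ada2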


My questions are these:

1. Good idea to hold off on the L2ARC until I have more memory? (Want to get 64GB soon for it.)
2. Block alignment for Hyper-V: what are your thoughts on me using 4K on FreeNAS vs. the default 16K? Better suggestions?
3. My MPIO, separate subnets, etc.: one NIC session to all NICs on the FreeNAS from the host, with Least Queue Depth. Good idea or bad idea? (I think it's good based on all the documentation, tons of Google reading, and past experience with my Nimbles.)
4. ??? - Anything I should change, or other suggestions?

P.S. - I know this is a long post and I'm sure no one ever asks for guidance like this, but I assure you, I've done a crap ton of reading in the last 34 days on this forum and OpenZFS, and I even spoke with 45drives.com about their experiences.

Thank you so much in advance!
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
I can help you with item 1. Yes, you need more memory. L2ARC will actually increase pressure on RAM, which will end up as a net reduction in performance. Get to 64GB at a minimum (ideally, max the board out) and then consider L2ARC.
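If you want a rough sanity check on whether the ARC is even struggling before you spend money on an L2ARC device, the stock FreeBSD/FreeNAS ZFS counters will tell you (just a pointer, not a full tuning guide):

    # Current ARC size plus lifetime hit/miss counts
    sysctl kstat.zfs.misc.arcstats.size \
           kstat.zfs.misc.arcstats.hits \
           kstat.zfs.misc.arcstats.misses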

You should probably read the manual and brush up on your terminology a bit. Your configuration *sounds* like you've got 1 pool with 3 vdevs, 2 drives mirrored in each.
 

geekmaster64

Explorer
Joined
Mar 14, 2018
Messages
50
I can help you with item 1. Yes, you need more memory. L2ARC will actually increase pressure on RAM, which will end up as a net reduction in performance. Get to 64GB at a minimum (ideally, max the board out) and then consider L2ARC.

You should probably read the manual and brush up on your terminology a bit. Your configuration *sounds* like you've got 1 pool with 3 vdevs, 2 drives mirrored in each.



Thank you! You are correct, sir: 3 vdevs with 2 drives mirrored in each is accurate. Definitely getting more RAM for sure. Max on this box is 192GB... I'll do 128, however, due to pricing above that.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I have the setup on FreeNAS as:

RAID10 (3 mirrors with 2VDev's in each mirror)
Your terminology is so messed up that I can't tell what you are talking about.
You need to go back and read some of the primers, here are the links:

Slideshow explaining VDev, zpool, ZIL and L2ARC
https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/

Terminology and Abbreviations Primer
https://forums.freenas.org/index.php?threads/terminology-and-abbreviations-primer.28174/
 

geekmaster64

Explorer
Joined
Mar 14, 2018
Messages
50
Your terminology is so messed up that I can't tell what you are talking about.
You need to go back and read some of the primers, here are the links:

Slideshow explaining VDev, zpool, ZIL and L2ARC
https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/

Terminology and Abbreviations Primer
https://forums.freenas.org/index.php?threads/terminology-and-abbreviations-primer.28174/


@chris - did you see my reply just above yours? ;)

1 pool, 3 vdevs, 2 HDDs mirrored in each vdev.
 

Maelos

Explorer
Joined
Feb 21, 2018
Messages
99
Take my advice with a grain of salt, as I am still in the build/configuration process myself, but I hope it helps.

My questions are these:

1. Good idea to hold off on the L2ARC until I have more memory? (Want to get 64GB soon for it.)
CPU > memory > SSD > HDD as far as speed. However, the opposite holds true for cost (mostly). If you can afford the RAM, always RAM > SSD as another layer to the read/write. I also think that the SLOG and/or L2ARC are per pool <corrected>.
4. ??? - Anything I should change, or other suggestions?
2 and 3 are too technical for me.

As far as the PS note goes, no worries! I believe the purpose of this forum is for newcomers like you and me. We are here to help.

As for other advice, I see that you are going for max I/O with the mirrored pairs. This, plus the use of the SLOG (ZIL), is aimed at improving performance with your VMs. What kind of network connection are you using between the two (1GbE)? What kind of drive connections to the board (3Gb/s, 6Gb/s)? I'd look at every step along the way to make sure you are not missing a choke point (which you may have already done). I think you pretty much have it, but ZFS loves RAM for a lot of its underlying functions, so "moar RAM". There are some great videos on ZFS and links to read if you are interested in more (you have already stated you have read a bit). I really liked the documentation and https://www.youtube.com/watch?v=uT2i2ryhCio as an overall, pull-it-together summary once I had read many one-off/single-topic threads.
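Again, take this with a grain of salt, but one way to confirm what link speed the drives actually negotiated on the FreeNAS box is to check the boot messages (device names vary; ada* for onboard SATA, da* behind an HBA or RAID controller):

    # Shows lines like "ada0: 300.000MB/s transfers (SATA 2.x ...)" vs. 600.000MB/s
    dmesg | grep -i transfers

    # Lists the attached disks and their controllers
    camcontrol devlist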

In looking back at the SLOG (separate ZIL - ZFS intent log) myself, I believe the SSD you are using may not have the power-loss protection that most suggest. Please correct me if I am wrong. http://www.freenas.org/blog/zfs-zil-and-slog-demystified/
 
Last edited:

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
SLOG and L2ARC are per pool, not per vdev. The SLOG device should have power loss protection, which usually mandates an enterprise/data center grade drive.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
FWIW, I have a very similar setup for my Hyper-V server (except I am running Server 2012 R2 Datacenter). Also, FreeNAS is a separate server where all the data/VMs are housed.

  • You are using iSCSI, so remember you never want to go above 50% disk usage or performance will tank... For example, my total storage for iSCSI is 16.7 TiB; I have 2 zvols (one for MS and one for ESXi). The MS zvol is only allocated 2.5TiB and ESXi is 4.5TiB, for a total of 7TiB to be consumed. I will never go above 8TiB unless I increase the total storage. (A quick way to keep an eye on this is sketched just below this list.)
  • The more RAM the better, so fill that up ;)
  • You may want to consider 10GbE to connect the two servers. I have mine directly connected using a different IP range (just statically assigned), so no need for a 10GbE switch. This will isolate iSCSI traffic and let the VMs use the normal 1GbE connection for other stuff. Cost is not that expensive (think like $120 per 10GbE NIC, but maybe cheaper if you shop around).
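A rough sketch of one way to keep an eye on that 50% guideline from the FreeNAS shell (the pool name "tank" is a placeholder):

    # Pool-level allocation, free space, capacity % and fragmentation %
    zpool list -o name,size,allocated,free,capacity,fragmentation tank

    # Per-zvol sizes vs. what is actually consumed
    zfs list -t volume -o name,volsize,used,refreservation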
While I am not that active anymore, if you search my comments I do recall that I posted a bunch of speed-testing results using various drives and settings a while back.

Best of luck!
 

geekmaster64

Explorer
Joined
Mar 14, 2018
Messages
50
Take my advice with a grain of salt, as I am still in the build/configuration process myself, but I hope it helps.


CPU > memory > SSD > HDD as far as speed. However, the opposite holds true for cost (mostly). If you can afford the RAM, always RAM > SSD as another layer to the read/write. I also think that the SLOG and/or L2ARC are per pool <corrected>.

2 and 3 are too technical for me.

As far as the PS note goes, no worries! I believe the purpose of this forum is for newcomers like you and me. We are here to help.

As for other advice, I see that you are going for max I/O with the mirrored pairs. This, plus the use of the SLOG (ZIL), is aimed at improving performance with your VMs. What kind of network connection are you using between the two (1GbE)? What kind of drive connections to the board (3Gb/s, 6Gb/s)? I'd look at every step along the way to make sure you are not missing a choke point (which you may have already done). I think you pretty much have it, but ZFS loves RAM for a lot of its underlying functions, so "moar RAM". There are some great videos on ZFS and links to read if you are interested in more (you have already stated you have read a bit). I really liked the documentation and https://www.youtube.com/watch?v=uT2i2ryhCio as an overall, pull-it-together summary once I had read many one-off/single-topic threads.

In looking back at the SLOG (separate ZIL - ZFS intent log) myself, I believe the SSD you are using may not have the power-loss protection that most suggest. Please correct me if I am wrong. http://www.freenas.org/blog/zfs-zil-and-slog-demystified/


Really appreciate that input. You're absolutely right; definitely getting moar RAM today, in fact, and looking at the Intel "DC" series for use as a SLOG since they have power-loss protection built in.
Thanks again!
 

geekmaster64

Explorer
Joined
Mar 14, 2018
Messages
50
As for other advice, I see that you are going for max I/O with the mirrored pairs. This, plus the use of the SLOG (ZIL), is aimed at improving performance with your VMs. What kind of network connection are you using between the two (1GbE)? What kind of drive connections to the board (3Gb/s, 6Gb/s)? I'd look at every step along the way to make sure you are not missing a choke point (which you may have already done). I think you pretty much have it, but ZFS loves RAM for a lot of its underlying functions, so "moar RAM". There are some great videos on ZFS and links to read if you are interested in more (you have already stated you have read a bit). I really liked the documentation and https://www.youtube.com/watch?v=uT2i2ryhCio as an overall, pull-it-together summary once I had read many one-off/single-topic threads.

In looking back at the SLOG (separate ZIL - ZFS intent log) myself, I believe the SSD you are using may not have the power-loss protection that most suggest. Please correct me if I am wrong. http://www.freenas.org/blog/zfs-zil-and-slog-demystified/


So regarding the network I/O: 4x 1GbE links, with 1 dedicated to management and the other 3 in separate subnets for iSCSI traffic; the Hyper-V host has 3 dedicated to iSCSI as well. The drive connections are going to be 6Gb/s when I get home from work today (3Gb/s now). Definitely checking out your YouTube link at lunch.
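To make the layout concrete, the addressing works out to roughly the sketch below (the interface names and subnets here are made up for illustration, and on FreeNAS the addresses are really set in the GUI rather than with ifconfig):

    # One /24 per iSCSI path so MPIO keeps the sessions on separate subnets
    ifconfig igb0 inet 10.10.1.10/24    # iSCSI path 1
    ifconfig igb1 inet 10.10.2.10/24    # iSCSI path 2
    ifconfig igb2 inet 10.10.3.10/24    # iSCSI path 3
    # The three iSCSI NICs on the Hyper-V host sit in the same three /24s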

Again really appreciate your advice :)
 
Last edited:

geekmaster64

Explorer
Joined
Mar 14, 2018
Messages
50
FWIW, I have a very similar setup for my Hyper-V server (except I am running Server 2012 R2 Datacenter). Also, FreeNAS is a separate server where all the data/VMs are housed.

  • You are using iSCSI, so remember you never want to go above 50% disk usage or performance will tank... For example, my total storage for iSCSI is 16.7 TiB; I have 2 zvols (one for MS and one for ESXi). The MS zvol is only allocated 2.5TiB and ESXi is 4.5TiB, for a total of 7TiB to be consumed. I will never go above 8TiB unless I increase the total storage.
  • The more RAM the better, so fill that up ;)
  • You may want to consider 10GbE to connect the two servers. I have mine directly connected using a different IP range (just statically assigned), so no need for a 10GbE switch. This will isolate iSCSI traffic and let the VMs use the normal 1GbE connection for other stuff. Cost is not that expensive (think like $120 per 10GbE NIC, but maybe cheaper if you shop around).
While I am not that active anymore, if you search my comments I do recall that I posted a bunch of speed-testing results using various drives and settings a while back.

Best of luck!


Awesome! Thank you very much for your input! :)

So far network throughput hasn't been the issue. Disk activity on the server (run diskperf -y from PowerShell, which works on 2012 R2 if you're curious, then open Task Manager) shows the disks at 100% activity all the time; however, the disks on FreeNAS (Reporting - Disk Busy) show they are actually only around 65%, which is confusing. CPU is super low and network I/O is low too. I see a pattern in everyone's input regarding memory. I'm glad to see that!
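For what it's worth, the FreeNAS side of the same picture can also be pulled from the shell (the pool name "tank" is a placeholder):

    # Live per-disk %busy for the physical providers only
    gstat -p

    # Per-vdev bandwidth and IOPS, sampled every 5 seconds
    zpool iostat -v tank 5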
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Found some old threads that might be a good read for you. I forgot to mention that I am using an Intel DC S3710 200GB SLOG as well (IIRC there are actually 2 of them mirrored, but I will have to double-check to be sure):

Initial Setup/Design:
https://forums.freenas.org/index.php?threads/multiple-volumes-or-not.45545/#post-308867

A bunch of test results for reference:
https://forums.freenas.org/index.php?threads/slow-writes-on-ixsystems-hardware.46032/page-3

System has been running without issue for so long I don't even check it much (if at all) and is rock solid. ;)
 

geekmaster64

Explorer
Joined
Mar 14, 2018
Messages
50
Found some old threads that might be a good read for you. I forgot to mention that I am using an Intel DC S3710 200GB SLOG as well (IIRC there are actually 2 of them mirrored, but I will have to double-check to be sure):

Initial Setup/Design:
https://forums.freenas.org/index.php?threads/multiple-volumes-or-not.45545/#post-308867

A bunch of test results for reference:
https://forums.freenas.org/index.php?threads/slow-writes-on-ixsystems-hardware.46032/page-3

System has been running without issue for so long I don't even check it much (if at all) and is rock solid. ;)

Outstanding, thank you! I'll read them now. :)
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Found some old threads that might be a good read for you. I forgot to mention that I am using an Intel DC S3710 200GB SLOG as well (IIRC there are actually 2 of them mirrored, but I will have to double-check to be sure):

Initial Setup/Design:
https://forums.freenas.org/index.php?threads/multiple-volumes-or-not.45545/#post-308867

A bunch of test results for reference:
https://forums.freenas.org/index.php?threads/slow-writes-on-ixsystems-hardware.46032/page-3

System has been running without issue for so long I don't even check it much (if at all) and is rock solid. ;)
You have much to offer; I hope you will visit more frequently.
 

geekmaster64

Explorer
Joined
Mar 14, 2018
Messages
50
Well, I've swapped memory for now and am running 48GB of RAM, and I also added the SSDs for ZIL and L2ARC; what an incredible difference in performance it made. The server has 12x 4GB DIMMs, so eventually I'll get those swapped (or a different motherboard/proc combo) and get 8GB sticks installed.

Thanks again everyone.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Well, I've swapped memory for now and am running 48GB of RAM, and I also added the SSDs for ZIL and L2ARC; what an incredible difference in performance it made. The server has 12x 4GB DIMMs, so eventually I'll get those swapped (or a different motherboard/proc combo) and get 8GB sticks installed.

Thanks again everyone.
Glad things worked out for you. Welcome to hardware addiction ;)
 