FreeNAS as iSCSI Target for Hyper-V

Status
Not open for further replies.

merlinios

Cadet
Joined
Feb 16, 2017
Messages
4
Hello all,

I have set up FreeNAS on an HP Blade BL460 G1 with 2x quad-core Xeons and 64 GB RAM. The blade server is directly attached through an optical cable to an HP EVA enclosure with 14x 500 GB 15k drives.

So we created one big datastore with lz4 compression and share type "Windows", created an iSCSI target, and presented all of this storage to a Hyper-V cluster. The iSCSI connectivity from the cluster to FreeNAS is 1 Gbit per node.

The problem is very low performance. We run about 20 VMs simultaneously at most. These virtual machines are very slow to respond, including during boot. Is there some way to see the storage latency in FreeNAS? Or the total IOPS of the system? In Reporting I can see CPU at about 15-20% average at most, and memory wired at 58.9 GB. What is strange is the network traffic graph, which is maxed out at 150 Mbit the whole time, so I think something is going on there. It is as if it cannot go higher and tops out there.

Any ideas ?

Thanks a lot

[Attachment: inttraffic.PNG]

[Attachment: scsitargetport.PNG]
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
What is your pool layout?
 

merlinios

Cadet
Joined
Feb 16, 2017
Messages
4
I didn't create the pool layout myself, so can you tell me where I can find this information?

Thanks a lot
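As an aside (not from the thread itself), the layout can be read straight from the FreeNAS console with `zpool status`; a minimal sketch, with purely illustrative output for a single wide RAIDZ2 vdev (pool and device names will differ on a real system):

```shell
# Show the vdev layout of every imported pool. In the FreeNAS web UI the
# same information is under Storage -> Volumes -> Volume Status.
zpool status

# Illustrative output shape for one wide RAIDZ2 vdev:
#   pool: tank
#   config:
#         NAME          STATE
#         tank          ONLINE
#           raidz2-0    ONLINE
#             da0p2     ONLINE
#             da1p2     ONLINE
#             ...
```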
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
A 14-drive RAIDZ2. Damn. That is far outside the recommendations even for a bulk storage server.

That pool would have the random IOPS performance of roughly a single drive.

Which is what you're seeing, I think.

For a VM workload it's recommended to use mirrors rather than RAIDZ.

Thus you would have a zpool consisting of 7 vdevs, each vdev a mirror of 2 drives.

This would improve the IOPS performance 7-fold over your current system, but you go from 12 disks' worth of data capacity to only 7.
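For a rough sense of the gap, a back-of-the-envelope sketch; the ~175 random IOPS per 15k drive figure is my assumption, not a measurement from this thread:

```shell
# Assumed random IOPS of a single 15k SAS drive (assumption, not measured).
PER_DRIVE=175

# A single RAIDZ vdev does random IO at roughly single-drive speed,
# regardless of how wide it is:
RAIDZ_IOPS=$PER_DRIVE

# A pool of 7 two-way mirror vdevs stripes random IO across all 7 vdevs:
MIRROR_IOPS=$((7 * PER_DRIVE))

# prints "14-wide raidz2: ~175 IOPS; 7x mirrors: ~1225 IOPS"
echo "14-wide raidz2: ~${RAIDZ_IOPS} IOPS; 7x mirrors: ~${MIRROR_IOPS} IOPS"
```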

Additionally, for VM workloads it's recommended to keep the pool's used capacity below 50%.

Additionally, it may be worthwhile installing a high-performance PCIe NVMe drive with power-loss protection (PLP) as a SLOG. You can estimate the potential benefit by disabling sync writes (temporarily!).
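That sync-write test can be sketched like this; the dataset name `tank/vmstore` is a placeholder for whatever backs your iSCSI extent, and data in flight is at risk while sync is disabled:

```shell
# TEMPORARILY disable sync writes on the dataset backing the iSCSI extent.
# If performance jumps, a SLOG device would likely help.
# UNSAFE while set: a power loss can corrupt the VMs.
zfs set sync=disabled tank/vmstore

# ...run the VM workload / benchmark, then restore the default:
zfs set sync=standard tank/vmstore

# Verify the current setting:
zfs get sync tank/vmstore
```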
 

merlinios

Cadet
Joined
Feb 16, 2017
Messages
4
Hello and thanks for your answer,

So to create this I need to destroy the current config?
 

flatrick

Cadet
Joined
Apr 7, 2016
Messages
2
merlinios said:
> So to create this i need to destroy the current config ?

Yes, you have to destroy the zpool and start over. To have both redundancy and speed, you might want to consider building the zpool from vdevs of three drives each in RAIDZ1. Remember, every vdev you add increases the zpool's IOPS capacity. If you require more redundancy, go for 4-5 drives per vdev in RAIDZ2, or for the best possible IOPS, just use mirrored vdevs.
The book "FreeBSD Mastery: ZFS" is a good read if you want a solid understanding of ZFS :smile:
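To make the layout options concrete, here are hypothetical `zpool create` invocations for 14 disks; the pool name `tank` and the `da0`..`da13` device names are placeholders, and on FreeNAS you would normally build this through the GUI volume manager rather than the command line:

```shell
# Option A: best random IOPS, 7 two-way mirrors (7 disks of usable capacity):
zpool create tank \
  mirror da0 da1   mirror da2 da3   mirror da4 da5   mirror da6 da7 \
  mirror da8 da9   mirror da10 da11 mirror da12 da13

# Option B: 4x three-drive RAIDZ1 vdevs (8 disks usable) plus 2 hot spares:
zpool create tank \
  raidz1 da0 da1 da2   raidz1 da3 da4 da5 \
  raidz1 da6 da7 da8   raidz1 da9 da10 da11 \
  spare da12 da13
```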
 