sfcredfox (Patron; joined Aug 26, 2014; 340 messages)
Greetings,
Questions/Objectives of post:
(These are the questions I'm hoping to get answered in the wordiness below)
I would like to get some ideas of things I can tune/control within my current system to both test and optimize iSCSI performance to the highest level obtainable with the hardware I have.
-Should/could I adjust settings on FreeNAS' iSCSI target such as the queue depth or other settings to optimize performance?
-Can system RAM be used somehow as a cache for iSCSI like it would with ZFS?
-Has anyone gotten volume manager to see disks behind a P400/P800 controller to format for UFS?
-Given the constraint of my hardware, are there any better ways to setup FreeNAS for ESX hosts?
-Can I use dd (or other disk performance tools in FreeNAS) with no UFS volumes?
Current Infrastructure:
FreeNAS Platform:
HP DL 380 G5
(x2) Intel(R) Xeon(R) CPU E5410 @ 2.33GHz
16GB ECC
P400 (512MB w/BBC) with 8x72GB 10K (RAID5)
P800 (512MB w/BBC) with 10x146GB 10K in SAS enclosure (RAID5)
HP MSA70 SAS attached drive enclosure
dual Broadcom 5709
FreeNAS 9.2.1.7-Release 64-bit
Network Infrastructure:
Cisco SG200-26 (SMB Gigabit managed switch)
Purchased/pre-made Cat5e cabling (should be no layer 1 issues)
Other systems (VMware platforms):
(x2) HP DL 360 G5
E5410 @ 2.33GHz / 32GB
BC5709
ESXi 5.5U1
Background:
While I am fairly comfortable with VMware / SANs / FC / etc., I don't feel like I have the master's degree in iSCSI and FreeBSD that I have seen some of the experts here demonstrate. I have pored over some of the forum posts regarding iSCSI trying to get answers to my questions; instead, I feel like a high school kid sitting in a college physics class.
I fully expect to draw fire from some of the resident experts (jgreco and cyberjock come to mind first) for not using the best hardware configuration for disk, but I'll take the abuse to get some further guidance on how to move forward in a productive direction.
Disk/Storage
I fully understand that ZFS, with software RAID, is usually the recommended way to set up storage, for obvious performance benefits. Unfortunately, the HP P400 and P800 controllers do not support JBOD at all, and everyone says creating individual logical drives per disk is a terrible idea. The P800 is the only controller I have that can SAS-connect the external disk enclosure, so as best I can tell, hardware RAID is all I have. I have chosen RAID 5 on both arrays (internal 8 disks, external 10 disks). I have tried to do the best with what I have; those are excellent cards, just not for FreeNAS. As for the RAID level, I need the capacity of RAID 5 over the performance of RAID 10.
FreeNAS 8/9 (FreeBSD) apparently hates those controllers (unless I'm doing something wrong), because I can't see the disks in Volume Manager, and I can't figure out how to format them with UFS for file shares, file-based iSCSI extents, etc. I have read a few similar posts, but they were all about ZFS, which I am not using. So I have set up the iSCSI targets using device extents: each array is an extent assigned to its own target (two targets, one per RAID 5 array). Is there any way to use those arrays in a UFS volume?
I would also like to know whether I can do any disk performance tests with no volumes formatted for ZFS/UFS. I have read many posts that describe using dd, but they create files for reading/writing, and I obviously don't have any filesystem to put files on since the arrays are being used as device extents. I can't find any write-ups about testing that way (probably because it can't be done?). Are there any other ways to do a local performance analysis?
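For what it's worth, dd doesn't need a filesystem; it can be pointed at the raw device node itself. A minimal read-only sketch (the device name is an assumption for your box; on FreeBSD, `camcontrol devlist` shows what's actually there):

```shell
# Raw-device read benchmark sketch. On the FreeNAS box, point SRC at the
# array's device node (e.g. /dev/da0 -- check `camcontrol devlist`).
# /dev/zero is the default here only so the sketch runs anywhere.
SRC=${SRC:-/dev/zero}

# Sequential read: pull 1 GiB off the device and discard it; dd prints the
# elapsed time and throughput when it finishes. Reading is safe; do NOT
# reverse the direction (writing to the raw device would destroy the extents).
dd if="$SRC" of=/dev/null bs=1M count=1024
```

Run it while the iSCSI target is idle so initiator traffic doesn't skew the number; since it goes straight at the device, it measures the controller/array path without any filesystem in the way.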
CPU/Memory
This system has a huge amount of free CPU and memory I wish I could better utilize. When doing mass virtual machine startups/shutdowns, I can tap out the 1Gb link, but the CPU and memory are barely utilized. Is there any way to use the system's RAM as a massive read/write cache for the iSCSI target process? I can't find any information about that either.
Networking
Since I can barely spell FreeBSD (which is not Linux! I won't make that mistake and be slain), I'm not sure whether I need to worry about getting more specialized drivers for the Broadcom NICs. They support jumbo frames and TOE, but since FreeBSD only supports offloading a few things like checksums, are there other specific things I should be doing on the iSCSI interface to increase its performance? Are there any step-by-step walk-throughs for setting up the supported offloads? Are they even worth it?
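On the offload question: FreeBSD's bce(4) driver for the 5709 does checksum offload and TSO (not full TOE), and those are just per-interface ifconfig flags, not special drivers. A sketch of what that looks like, assuming a made-up interface name and address; on FreeNAS you would put the extra flags in the interface's "Options" field in the GUI rather than editing rc.conf by hand:

```
# /etc/rc.conf fragment (GUI-managed on FreeNAS -- shown for illustration).
# bce0 is an assumed interface name for the Broadcom 5709.
# rxcsum/txcsum enable checksum offload; tso enables TCP segmentation offload.
ifconfig_bce0="inet 192.168.10.5 netmask 255.255.255.0 rxcsum txcsum tso"
```

Whether TSO helps or hurts an iSCSI workload varies by driver, so it's worth benchmarking with it on and off rather than assuming.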
I read some write-ups on multipathing in FreeNAS/FreeBSD that were a little over my head. What's the best way to accomplish that? Maybe an example of using two NICs for my scenario (two NICs, two iSCSI targets, each with only one LUN/extent)?
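The usual shape of iSCSI multipathing with ESXi, as I understand it: give FreeNAS one portal IP per NIC (ideally on separate subnets), have each target listen on both portals, then on the ESXi side bind one vmkernel port per physical NIC to the software iSCSI adapter and set the path policy to round robin. A hedged sketch of the ESXi half only; the adapter, vmk, and device names below are all placeholders for your environment:

```shell
# Bind each iSCSI vmkernel port to the software iSCSI adapter.
# vmhba33 / vmk1 / vmk2 are placeholders -- see `esxcli iscsi adapter list`
# and your vmkernel port configuration for the real names.
esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2

# Set round-robin pathing on the FreeNAS-backed LUN. The naa identifier is a
# placeholder -- get the real one from `esxcli storage nmp device list`.
esxcli storage nmp device set -d naa.6589cfc000000 --psp VMW_PSP_RR
```

With one LUN per target, round robin across two paths per target is about all the parallelism available; it spreads load across both NICs rather than speeding up any single stream.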
I have tried using jumbo frames, and they seem to degrade performance, even though I enabled them on all involved hardware (ESX hosts/virtual networks, switch, FreeNAS) and tested using non-fragmented pings, which worked all the way up to 8792 bytes. Any thoughts?
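One detail worth double-checking in that ping test: with a 9000-byte MTU, the largest unfragmented ICMP payload is 9000 minus 28 bytes of IP+ICMP headers, i.e. 8972. If pings genuinely top out at 8792, that would imply an effective path MTU of 8820 somewhere, a bit under 9000, which could explain the degraded performance. Example probes (the address is a placeholder):

```shell
# From FreeBSD/FreeNAS: -D sets the don't-fragment bit, -s the payload size.
# 8972 = 9000 (MTU) - 20 (IP header) - 8 (ICMP header).
ping -D -s 8972 -c 3 192.168.10.20

# From an ESXi 5.x host's shell: vmkping with -d (don't fragment).
vmkping -d -s 8972 192.168.10.20
```

If 8972 fails but smaller sizes pass, walking the size down pinpoints which hop is clamping the MTU.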
iSCSI Targets
I have read some very good forum posts and articles on iSCSI tuning and feel like the stupid kid in class, so does anyone have specific examples of changes I should try? I have thought about raising the queue depth from 32 to 64 to make sure it isn't being tapped out and telling all the initiators to shut up. (I gather ESX drops the initiator queue back down to something very low, like 2, and works its way back up slowly.)
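For reference, the target in FreeNAS 9.2.x is istgt, and the queue depth is exposed per target in the GUI (Services > iSCSI > Targets > Queue Depth). Under the hood it lands in istgt.conf as a per-LogicalUnit setting; a sketch of what the change amounts to (the target name is a placeholder, and since FreeNAS regenerates this file, the GUI is the right place to actually change it):

```
# /usr/local/etc/istgt/istgt.conf fragment (GUI-managed on FreeNAS)
[LogicalUnit1]
  TargetName "target0"      # placeholder target name
  QueueDepth 64             # istgt's default is 32
```

Raising it only helps if the array can actually service the extra outstanding commands; on parity RAID 5 over 10K disks, it's worth measuring before and after rather than assuming bigger is better.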
What have other people changed about their target settings and what positive effects did it have?
I know this reads like a book, so if you actually read this far, thanks for any suggestions you can give me. I didn't include a metric-crap-ton of performance data, both because I didn't know what would be most useful and because I didn't know how to collect it. I'm sure someone will let me know if specific performance data is wanted.
Thanks!