Poor performance with NFS on FreeNAS 9.3 and 10Gb NIC

Status
Not open for further replies.

Mark Brookfield

Dabbler
Joined
Sep 14, 2014
Messages
15
Hi,

I need some help trying to establish why I'm getting terrible performance on an HP box running FreeNAS-9.3-STABLE-201505040117.


Client:

VMware ESXi 5.5 build 2702864
Dell PowerEdge T710
2 x Intel E5520
144GB RAM
6 x 500GB 15k SAS drives
8 x Broadcom NetXtreme II BCM5709 gigabit NICs
2 x Emulex OneConnect 10Gb NICs


Server:

FreeNAS-9.3-STABLE-201505040117
HP ProLiant Microserver N54L
1 x AMD Turion II Neo dual-core
16GB RAM
2 x Seagate Barracuda 3TB 7200rpm SATA III 6Gb/s, 64MB cache
1 x Broadcom NetXtreme II BCM5723 gigabit NIC (onboard)
1 x Intel 82574L gigabit NIC
2 x Emulex OneConnect 10Gb NICs

The Dell and HP boxes are connected to each other directly using two Dell Force 10 10G SFP cables.

On the FreeNAS box, the two Seagate drives are configured in a mirror named "volume1", compression level is set to off. zpool status shows no errors.
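For completeness, this is roughly what I'm checking from the shell (pool name as above; your output will obviously differ):

```shell
# Confirm the mirror layout and pool health, and that compression is off
zpool status volume1
zfs get compression,sync volume1
```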

FreeNAS is installed to an HP v285w 64GB USB key mounted directly on the motherboard.

Copying a 44GB VMDK from a RAID-5 SAS volume on the Dell PowerEdge to a NFS share on the FreeNAS box took 6h12m.

The transmit rate on the Dell never seemed to go above 3,900KBps... which is only double the speed of my home broadband!

The same file copied to an iSCSI target on the FreeNAS box took 6m41s. The transmit rate was consistent at around the 150,000KBps mark.
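Doing the maths on those two copies (taking 44GB as binary gigabytes):

```shell
# Average throughput for the same 44GB VMDK over NFS vs iSCSI, in KB/s
size_kb=$((44 * 1024 * 1024))                        # 44 GiB expressed in KiB
echo "NFS:   $((size_kb / (6*3600 + 12*60))) KB/s"   # 6h12m
echo "iSCSI: $((size_kb / (6*60 + 41))) KB/s"        # 6m41s
```

That's roughly a 55x difference for the same file, pool and wire.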

smartctl -a shows no errors.

Can anyone suggest where I can begin looking for the cause of the issue?

Many thanks,


-Mark
 

Attachments

  • debug-nas-20150509184244.tgz
    330.5 KB

Mark Brookfield

Dabbler
Joined
Sep 14, 2014
Messages
15
Wow... thanks for a quick response and the info, HoneyBadger.

I've read through that, and it makes sense. But it effectively makes NFS unusable. I mean, six hours to copy a relatively small VMDK is ridiculous.


So I'm not sure what to do here, because there is no viable solution. The options are: put your data at risk (I believe the word used was "hazardous"), or keep it as it is and accept that the performance will be akin to that of an ATA-100 disk.

Or get rid of FreeNAS and move everything to fibre-channel... which seems... a shame :-(

Thank you for the info though!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
No, the solution is to get an SLOG device and put it in your system. ;)
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Got sniped by @cyberjock - the option for "performance without risk to data" is a fast SLOG device. This will let the sync writes complete quickly while still being on "stable storage" - your effective write speed then depends on how fast of an SLOG device you have. Not every SSD makes a good SLOG; basically, if it's not an Intel DC series, it's probably not your best option. ;)
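For reference, attaching one is a single command once the SSD is in place (the GPT label below is a made-up example):

```shell
# Add a dedicated log (SLOG) device to the pool
zpool add volume1 log gpt/slog0

# It will then show up under a separate "logs" section
zpool status volume1
```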

If you're willing to accept a minor bit of risk, you can also run iSCSI with sync=standard - this writes your ZFS metadata as synchronous but the remote writes from VMware will be async. If you couple this with a UPS on your FreeNAS machine, as well as a solid backup routine (hourly snapshots?) and accepting a small amount of risk you can improve performance that way.
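As a sketch, assuming the iSCSI extent lives on a dataset called volume1/iscsi (the name is a placeholder), the settings involved look like this:

```shell
# sync=standard is the ZFS default: honour explicit sync requests, leave
# everything else async. ESXi's iSCSI writes arrive async, so they skip
# the ZIL bottleneck while ZFS metadata stays synchronous.
zfs set sync=standard volume1/iscsi

# The "hazardous" all-async option discussed above would be:
# zfs set sync=disabled volume1/nfs
```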
 

Serverbaboon

Dabbler
Joined
Aug 12, 2013
Messages
45
Been there with this hardware, you need to add a SLOG, preferably with some power loss protection.

The recommendation is always Intel, underprovisioned to increase lifespan (search the forum).

Although it will work as a single drive, you are always recommended (usually by CJ) to mirror. With this hardware you will need the BIOS upgrade, and to fill the CD-ROM bay with a 2- or 4-bay 2.5" enclosure if you want to mirror the SLOG and still leave room for adding another two data disks.
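For what it's worth, a mirrored SLOG is added in one go (device labels here are hypothetical):

```shell
# Two SSDs (or partitions) behind "log mirror" give a redundant SLOG
zpool add volume1 log mirror gpt/slog0 gpt/slog1
```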

If you are OK with no expansion then you just need the 2.5" to 3.5" adapters.

I have expanded my N54L by adding a single SLOG and it made a huge difference to VMware operations such as vMotions, deployments, copies etc. I'm not sure whether general usage was as noticeable. I used a Toshiba SSD which has power-loss protection but has been shown to lag behind the Intels in performance.

As a further point, I have suffered a failure of a non-mirrored SLOG device during my ever more foolish expansion attempts on the MicroServer (strictly speaking, an add-in controller failure), and I did not seem to lose any data, just the angry red status light in the GUI.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
As a further point, I have suffered a failure of a non-mirrored SLOG device during my ever more foolish expansion attempts on the MicroServer (strictly speaking, an add-in controller failure), and I did not seem to lose any data, just the angry red status light in the GUI.

Loss of an SLOG device doesn't necessarily kill the pool, but sync write performance will absolutely crater until it's replaced. If your workload depends on low-latency sync writes (eg: virtualization) it might as well have gone offline.
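If it does fail, the log device can be swapped or dropped without destroying the pool (labels are examples):

```shell
# Replace a faulted SLOG with a new device...
zpool replace volume1 gpt/slog0 gpt/slog1

# ...or remove it entirely; log vdevs are removable,
# unlike top-level data vdevs on 9.3
zpool remove volume1 gpt/slog0
```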
 