I was helping a friend of mine set up a fresh FreeNAS build.
Hardware overview:
Supermicro LGA 1155 motherboard (don't remember exactly which one).
Intel i3-3xxx (3rd gen, with ECC support; don't remember exactly which model).
8 GB of ECC RAM (yeah, I would have liked to see 16 GB).
4x WD 3 TB Red in RAID-Z2, hooked up to the motherboard SATA ports.
Everything was set up, and FreeNAS 9.1.1 x64 installed.
Went to test local pool bandwidth using dd with bs=1m and count=20k.
Write speeds were good, ~250 MB/sec.
Read speeds were really bad, about 30 MB/sec.
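For reference, the dd runs were just plain sequential tests along these lines (pool name and file path are examples, not what's actually on his box, and the numbers only mean much if compression is off on the test dataset):

  dd if=/dev/zero of=/mnt/tank/ddtest bs=1m count=20k   # write test, ~20 GB
  dd if=/mnt/tank/ddtest of=/dev/null bs=1m             # read test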
Verified CPU usage was OK. Checked gstat, and one of the hard drives was showing 100% utilization while the other three were mostly idle. It wasn't always the same drive pegged at 100%; the busy drive would change over time.
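Nothing fancy on the monitoring side, just:

  top     # make sure dd isn't CPU bound
  gstat   # per-disk %busy, queue depth and MB/sec while the dd runs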
Destroyed the pool to test the individual drives. Simultaneous dd writes to the drives were fine, and simultaneous dd reads from the drives were fine; each drive could read at a steady ~140 MB/sec.
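The per-drive read test was roughly this (ada0-ada3 is a guess at the device names; it obviously clobbers nothing since it only reads, and the write version didn't matter either since the pool was already destroyed):

  # simultaneous raw reads, one dd per drive
  for d in ada0 ada1 ada2 ada3; do
    dd if=/dev/$d of=/dev/null bs=1m count=10k &
  done
  wait

Same idea for the write test, just with if=/dev/zero and of=/dev/$d.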
Recreated the pool as a plain stripe for testing. Reads were still very low, 30-40 MB/sec, and gstat still showed one or two drives at 100% while the others sat at 10-20%.
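From the shell that's roughly the following (in practice the FreeNAS volume manager does the equivalent; pool and device names are examples):

  zpool create testpool ada0 ada1 ada2 ada3   # 4-disk stripe, no redundancy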
Destroyed the stripe and recreated the RAID-Z2 pool, as I was fairly sure the RAID-Z2 layout wasn't causing the issue.
Tried disabling read-ahead. Reads instantly went from 30 MB/sec to about 120 MB/sec. I thought that was weird; read-ahead is normally only disabled for highly random read I/O, like databases, where ZFS can't predict the next read. But I also noticed that with read-ahead disabled, the current queue depth on the drives was only hitting 3-4 instead of 10.
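For anyone following along, "read ahead" here is ZFS prefetch, which on FreeBSD/FreeNAS is controlled by a sysctl:

  sysctl vfs.zfs.prefetch_disable=1   # disable prefetch
  sysctl vfs.zfs.prefetch_disable=0   # re-enable it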
Re-enabled read-ahead, but set vfs.zfs.vdev.min_pending=1 and vfs.zfs.vdev.max_pending=1.
That got me back to ~250 MB/sec reads, with all four drives showing a constant load in gstat. Tried setting max_pending to 2, and read performance tanked again, with one hard drive constantly pinned at 100%. Set max_pending back to 1 and everything was fine.
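Those are plain sysctls, so for testing it's just:

  sysctl vfs.zfs.vdev.min_pending=1
  sysctl vfs.zfs.vdev.max_pending=1

To have them survive a reboot you'd add them under System -> Tunables in the FreeNAS GUI rather than setting them by hand.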
Tested scrub speed, and it seemed fine: about 450 MB/sec over the ~100 GB of test data on the pool.
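That was just a manual scrub, watching the rate reported by zpool status (pool name is an example):

  zpool scrub tank
  zpool status tank   # shows the scan rate and ETA while the scrub runs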
This is the first time I've done anything with WD Reds, but I've never seen max_pending have that much of an effect before. My main NAS has 11 Seagate 3 TB drives, and the default of max_pending=10 works great there; sequential read of a file is a little over 1 GB/sec.
Has anybody else had to force max_pending to 1 to get decent read performance?