Weird drive throughput issue


FlyingYeti (Cadet)
Hi,

I'm having a strange drive throughput problem. I have 12 HGST 4 TB 7K4000 drives. I'm running FreeNAS 11.1-U2 with 48 GB of RAM and an M1015 flashed to IT mode, connected to an IBM EXP3000 enclosure.

No matter the pool configuration, with 6 drives I get about 160 MB/s per drive, as reported by:
zpool iostat -v 1

I've tried both:
sudo dd if=/dev/zero of=./test2.dat bs=2M count=100K
and
sudo iozone -a -s 50G -r 2048

and get the same results.
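To take ZFS out of the picture, I could also read straight from the raw disks in parallel and watch each one. Just a sketch, and the da0..da5 names are placeholders for whatever my disks actually show up as:

# in one shell: parallel sequential reads from the raw devices (read-only)
for d in da0 da1 da2 da3 da4 da5; do
  dd if=/dev/$d of=/dev/null bs=1M count=10000 &
done
wait

# in a second shell: watch per-disk throughput while the reads run
gstat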

However, as I add more drives to a vdev/pool, I get less throughput per drive, eventually going as low as 80 MB/s.

I thought it might be a controller/enclosure throughput issue, but if I stripe all 12 drives I still only get 80 MB/s per drive, for a total throughput of about 1 GB/s.
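(Roughly: 12 drives × ~80 MB/s ≈ 960 MB/s, which lines up with the ~1 GB/s aggregate that zpool iostat reports.)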

Does anyone know what might be going on?
 

Ericloewe (Server Wrangler, Moderator)
Well, if you have more drives for the same workload, they're bound to work less...
 

FlyingYeti (Cadet)
Ericloewe said:
Well, if you have more drives for the same workload, they're bound to work less...

Shouldn't both dd and iozone write data as fast as possible, maximizing the workload?
What do you suggest I try to increase the workload to test your theory?
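For example, I could run several writers in parallel to push things harder. A rough sketch along the lines of my dd test above, with placeholder file names and sizes (and noting that zeroes will compress if lz4 is enabled on the dataset):

# four parallel sequential writers instead of one
for i in 1 2 3 4; do
  dd if=/dev/zero of=./test$i.dat bs=2M count=25K &
done
wait

iozone's throughput mode (-t N) would be another way to get multiple concurrent writers.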
 

Ericloewe (Server Wrangler, Moderator)
So maybe the PCIe bus is bottlenecked somewhere. Question is: is this an academic question or do you need more real (non-benchmark) performance?
 

FlyingYeti (Cadet)
I'm trying to understand exactly what is happening so I can optimize my system. I'm trying to reconcile what I've read with what I see, and in this case the observed performance isn't even close to the theoretical numbers.

It's certainly not a PCIe bottleneck; I'm far below the throughput PCIe 2.0 x8 can deliver. Also, when I stripe all 12 drives, I get a higher aggregate throughput as expected, but still not as high as I believe it should be.

The max interface throughput should be about 1500 MB/s on a 12-drive stripe or a 12-drive RAIDZ2, the bottleneck being the external enclosure (4 lanes @ 3 Gbps each).
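(My rough math: 4 lanes × 3 Gbps = 12 Gbps ≈ 1500 MB/s raw; accounting for 8b/10b encoding on 3 Gbps SAS links it would be closer to 4 × 300 MB/s = 1200 MB/s usable. Either way, the ~1 GB/s I measured on the 12-drive stripe is below it.)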

So, I'm looking for someone who can help me determine whether my expectations are wrong and explain why, or, if they're not, help me use some tools to identify the bottleneck.
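Here's what I can run and post output from, if it helps (just a sketch of the obvious candidates):

camcontrol devlist    # which disks sit behind which controller/enclosure
dmesg | grep -i mps   # mps(4) driver messages for the M1015
zpool status          # pool/vdev layout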
 