Is something wrong with my SSD?


ZFS Noob

Contributor
Joined
Nov 27, 2013
Messages
129
I'm still trying to figure out why I can't get decent synchronous transfer speeds on my system, and I'm looking at whether there's a problem with either my SSD or the card I've used to mount it in the system.

Here's what I've come up with today:

1. Adding the Intel 320 SSD as an SLOG seems to decrease write performance.

I'm using the dd command I've seen referenced here, so I'm measuring throughput directly from the command line. Here's the command:

dd if=/dev/zero of=/mnt/nas1mirror/testing/tmp.000 bs=2048k count=50k

And here's what I got:

[attached graph: nas1mirror-dd-test.gif]

Those are megabytes per second, by the way. Looks like ZFS is caching the hell out of the writes when it can to get great performance on async writes. Great. But the SSD speed decrease? That sucks.
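
For reference, I think the relevant knob for separating the ZIL/SLOG path from plain async caching is the dataset's sync property. Assuming /mnt/nas1mirror/testing actually maps to a dataset called nas1mirror/testing (and with the caveat that sync=disabled is strictly for testing), something like this should let me compare forced-sync, no-ZIL, and default behavior with the same dd run:

zfs get sync nas1mirror/testing          # confirm the current setting
zfs set sync=always nas1mirror/testing   # force every write through the ZIL (and so the SLOG)
zfs set sync=disabled nas1mirror/testing # skip the ZIL entirely (testing only)
zfs set sync=standard nas1mirror/testing # back to the default when done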

2. The SSD isn't getting "boosted" the way the disk pool is.

So I pulled the SSD out of the pool and set it up as its own pool and reran the test. Sync=standard on both datasets:

[attached graph: ddtest-comparison.gif]

The SSD transfer number looks roughly plausible: reviews of the Intel 320 I've read online suggest it can sustain around 220 MB/s on writes, and I got 409 MB/s. That's higher than I'd expect, but maybe that's attributable to the on-disk cache. Cool.

But the 4 drives I've got as a striped mirror? WTH? If ZFS is caching writes in memory and pushing transactions as I'd expect, shouldn't it be doing that for both? Why the better performance here?

Or is it more likely that 409 MB/s is what the SSD manages even after caching is taken into account, in which case I've got a problem with either the SSD or the card it's mounted on?
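
One thing I'm planning to try, if my reasoning is right: watch the devices themselves from a second session while the dd test runs, so the cache effect is visible directly. Something like:

zpool iostat -v nas1mirror 1

The per-device write bandwidth column there should show whether the SSD is really moving ~400 MB/s, or whether most of the apparent throughput is landing in RAM before ZFS flushes it out.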

Where should I be looking next to help diagnose this?

Thanks. Sorry for the wall of text. At least I threw in pictures to make it easier...
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Well, for starters, dd doesn't really work the way you think it does for benchmarking.

Adding an SLOG and using it properly is non-trivial. There's a reason why my noobie presentation says something like "if you don't know if you need one or not, you probably don't". There are only certain applications where adding an SLOG will help, and even then it'll only provide a tangible, measurable benefit sometimes.

This isn't like Windows where throwing more hardware at the problem guarantees a performance benefit. In short, you're somewhat on your own to figure out what works best for your situation, or pony up and pay for a consult to see what you are or aren't doing right for your setup.

I'd never, ever use one of those PCIe-to-SSD controllers in a server. They're not well-known hardware and not exactly name brand.

Benchmarking ZFS is nothing like benchmarking any other file system. And I'm betting money either you aren't setting up ZFS properly for your situation or you aren't benchmarking properly. More than likely, if you are like most people around here, you are doing both to some extent.
 

ZFS Noob

Contributor
Joined
Nov 27, 2013
Messages
129
Well, for starters, dd doesn't really work the way you think it does for benchmarking.
OK, well here's a follow-up question: what's the proper way to test drive performance in FreeNAS so I can determine whether I have an issue with the SSD card or not? I searched here, came up with a best guess and ran with it. What should I have done instead?
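
(The closest thing I've come up with since posting is benchmarking the raw device directly with FreeBSD's diskinfo, e.g. the line below, but I don't know if that's the right approach either. /dev/ada4 is just a placeholder for whatever device name the SSD actually has:)

diskinfo -t /dev/ada4

That at least takes ZFS caching out of the picture for the SSD itself.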

Adding an SLOG and using it properly is non-trivial. There's a reason why my noobie presentation says something like "if you don't know if you need one or not, you probably don't". There are only certain applications where adding an SLOG will help, and even then it'll only provide a tangible, measurable benefit sometimes.
Understood. As it is, I'm trying to build an NFS datastore for virtual machines, so as best I can tell I fall into the use case for an SLOG.
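
If there's an accepted way to confirm that, I'm all ears. My understanding is that ESXi issues its NFS writes synchronously, and I've seen a zilstat script mentioned around these forums for watching ZIL traffic, so I'm assuming something like the line below (if the script is actually present on my box) would show whether the VM traffic really does generate sync writes:

zilstat 1 10

That's one-second samples, ten of them; non-zero numbers there would mean the ZIL (and so the SLOG) is actually in play.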
 