Performance Difference Between ZFS Stripe and RAID Stripe?

ZFS Stripe or RAID Stripe?

  • ZFS: 2 votes (100.0%)
  • RAID: 0 votes (0.0%)
  • Keep everything in one pool: 0 votes (0.0%)

Total voters: 2

brando56894

Wizard
Joined
Feb 15, 2014
Messages
1,537
I managed to scoop up two 150 GB 2.5" 10k RPM WD Raptors (enterprise, not consumer grade) for a measly $15 apiece since they were refurbished, which was perfect since I wanted something extremely cheap but fast and didn't care if they died randomly because it's just temp storage space. (I was considering cheap SSDs but figured that was just a money pit since they would probably die after a year or so due to all the reads and writes.)

I will be striping these two together (may get a 3rd if it's worth it) and will be using it as a download directory for SABnzbd/NZBget, and maybe some other things, but largely downloads for NZBs, since it will be doing a lot of unpacking/joining/copying.

My question is: is there a performance penalty associated with the tasks that ZFS does behind the scenes (CoW and other things) compared to a simple RAID 0 array set up via mdadm (I'm using Linux until FreeNAS 10 is complete) with something like ext2 (no need for journaling) or XFS?
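
For reference, here's roughly what the two setups I'm comparing would look like (/dev/sdb and /dev/sdc are just stand-ins for the two Raptors):

Code:
# ZFS: listing bare disks with no redundancy keyword stripes them into one pool
zpool create -m /mnt/downloads downloads /dev/sdb /dev/sdc

# mdadm RAID 0 equivalent, formatted with XFS
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.xfs /dev/md0
mount /dev/md0 /mnt/downloads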

Also, would it be beneficial to keep both the complete and incomplete folders on there, or just the incomplete, and put the complete in my multimedia pool, which is 3 striped mirrors (6x 4 TB connected to my HBA), since it will be copying stuff from complete to my pool anyway?

...or was this a dumb idea to begin with and I should keep both directories within the pool? I originally had the download directory on a single 2.5" laptop HDD and was getting abysmal transfer speeds, so I got rid of it and put everything back in the pool. Since these are twice as fast, and there are two of them, that shouldn't be a problem this time.
 

styno

Patron
Joined
Apr 11, 2016
Messages
466
Depending on the I/O pattern, ZFS can be really fast or extremely slow. A high amount of random I/O will make you cry and can only be mitigated with lots and lots of RAM.
That being said, you should be able to get away with it for your use case. It is 'only' a download box, not an enterprise-grade Oracle DB.
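
If you want to see that worst case for yourself, a quick random I/O run with fio against the pool will show it (paths and sizes here are just examples):

Code:
# 4k random read/write mix against the pool's mountpoint
fio --name=randrw --directory=/mnt/downloads --rw=randrw --bs=4k \
    --size=2g --numjobs=4 --ioengine=psync --runtime=60 \
    --time_based --group_reporting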
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Depending on the I/O pattern, ZFS can be really fast or extremely slow. A high amount of random I/O will make you cry and can only be mitigated with lots and lots of RAM.
This is only accurate for certain configurations of ZFS. I think you are thinking of a RAIDZ(1,2,3) vdev configuration, which certainly has I/O limitations. However, stripes don't have that same restriction. In the case of ZFS stripes, IOPS are additive, so there would be a negligible difference versus an mdadm stripe.
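
You can verify that the I/O spreads across both disks while the box is unpacking something; assuming your pool is called "downloads":

Code:
# per-vdev I/O statistics, refreshed every 5 seconds
zpool iostat -v downloads 5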
 

styno

Patron
Joined
Apr 11, 2016
Messages
466
This is only accurate for certain configurations of ZFS. I think you are thinking of a RAIDZ(1,2,3) vdev configuration, which certainly has I/O limitations. However, stripes don't have that same restriction. In the case of ZFS stripes, IOPS are additive, so there would be a negligible difference versus an mdadm stripe.
Unfortunately it is the copy-on-write, not the RAID type, that slows ZFS (or any other CoW filesystem) down. CoW introduces data fragmentation that will even become worse over time. In a typical old-school enterprise DB environment, where the data resides on a couple of fat centralized storage boxes, you'll be striping with ZFS for optimal speed. But even with reads (full table scans, anyone?) the fragmentation will completely nuke the storage box's prefetch algorithms. The only way to add more speed in this case is to give ZFS lots and lots (and lots) of RAM so it can prefetch and cache.
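
You can watch this happen over the life of a pool; note that the fragmentation property reports free-space fragmentation, which is only a rough proxy for how chopped up new writes will get (pool name is a placeholder):

Code:
zpool get fragmentation,capacity tank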
 

brando56894

Wizard
Joined
Feb 15, 2014
Messages
1,537
I plan on putting 128 GB in this baby eventually, and on making use of the NVMe M.2 slot for an insanely fast and large L2ARC (256+ GB with 2+ GB/sec reads and about 1 GB/sec writes, anyone?), but that will have to wait a while considering all this stuff was on credit to begin with hahaha
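
From what I've read, hooking it up should just be a one-liner once the card is in (device name is a guess until then):

Code:
# attach the NVMe drive to the pool as an L2ARC cache device
zpool add downloads cache /dev/nvme0n1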

I just created my downloads pool but I can't switch stuff over yet. I guess I could do some benchmarking on it while it's empty so I can see what to expect out of it performance-wise.


Well damn, that came out a lot better than expected!

Code:
[root@nas ~]# dd if=/dev/zero of=/mnt/downloads/test.img bs=2M count=10k
10240+0 records in
10240+0 records out
21474836480 bytes (21 GB, 20 GiB) copied, 79.2879 s, 271 MB/s


This is from my striped mirror pool (6x 4 TB HGST NAS drives connected to an LSI HBA), and the bandwidth is considerably lower than what I've seen before, but that's probably because I didn't stop anything and a lot of things are probably reading and writing while I'm doing this. I've usually seen about 300 MB/sec, and sometimes as high as 500 or 600, but that's a rarity.

Code:
[root@nas ~]# dd if=/dev/zero of=/mnt/storage/test.img bs=2M count=10k
10240+0 records in
10240+0 records out
21474836480 bytes (21 GB, 20 GiB) copied, 159.708 s, 134 MB/s
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
If you put only the incomplete files on this pool, then fragmentation really isn't an issue, since you are moving data off the temp stripe and onto the other pool. So yes, there might be a minor impact, but since there isn't an easy way to add a non-ZFS filesystem I wouldn't worry about it.
 

brando56894

Wizard
Joined
Feb 15, 2014
Messages
1,537
So yes, there might be a minor impact, but since there isn't an easy way to add a non-ZFS filesystem I wouldn't worry about it.

Ah, but there is, since I'm using Linux at the moment hahahah. I was considering going with ZFS anyway so that it would integrate easily back into FreeNAS 10 when it's released in a few months, but since it will be empty 80% of the time, destroying the RAID set and recreating it as ZFS wouldn't have been a problem anyway.
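
The teardown and rebuild would only have been a few commands anyway (md device and disk names assumed):

Code:
umount /mnt/downloads
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sdb /dev/sdc
zpool create -m /mnt/downloads downloads /dev/sdb /dev/sdc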

The complete directory would only be temporary as well, since SickRage scans that folder every 10 minutes and moves anything in there to my storage pool. I just didn't know if it would be better for NZBget/RAR to extract the archives/assemble the pieces on the same disks it's reading from, or to do it on a completely different set of drives. Logic tells me the latter would be better, but that pool has a lot of I/O across six disks (compared to the downloads pool), so I'm not sure.
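
If I do split them, I think the relevant bits in nzbget.conf would look something like this (paths are just my layout):

Code:
# incomplete files on the fast stripe, finished files moved to the big pool
InterDir=/mnt/downloads/incomplete
DestDir=/mnt/storage/complete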
 

brando56894

Wizard
Joined
Feb 15, 2014
Messages
1,537
I remembered that from previous testing: compression is disabled for the downloads pool, so the dd numbers from /dev/zero aren't inflated.
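
Easy enough to double-check:

Code:
zfs get compression downloads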
 