
incoming files storage configuration

Status
Not open for further replies.

Dice

Neophyte Sage
Joined
Dec 11, 2015
Messages
1,205
Hello!

I'm intending to use a 7-drive RAIDZ2 for long-term storage. However, I'd also like to accommodate higher IOPS requirements.

My first thought was to run a pair of "scrap drives" as a mirror.
+ IOPS
+ space optimization with the 3 TB drives: no TBs lost
- risk of fragmentation from a high fill rate?

The other possible option would be to put them on the RAIDZ2.
+ no "lost in space" pool
- lower fill rate due to higher capacity
? IOPS capacity of the drives in relation to the use case? I have no idea how a RAIDZ2 reacts to this.
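To put rough numbers on the two options above, here is a minimal sketch of the usable-space math, assuming 3 TB drives throughout and ignoring ZFS metadata overhead:

```python
# Rough usable-capacity comparison for the two layouts above.
# Assumption: 3 TB drives throughout; ZFS overhead ignored.

def raidz2_usable(n_drives, drive_tb):
    # RAIDZ2 loses two drives' worth of space to parity.
    return (n_drives - 2) * drive_tb

def mirror_usable(drive_tb):
    # A two-way mirror stores one full copy; usable space equals one drive.
    return drive_tb

print(raidz2_usable(7, 3))  # 15 TB usable on the 7-wide RAIDZ2
print(mirror_usable(3))     # 3 TB usable on the 2-drive mirror
```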

Please advise! What would you do, and why?

Cheers /
 

Dice

Neophyte Sage
Joined
Dec 11, 2015
Messages
1,205
Perhaps to clarify: I wonder whether there are any differences in fragmentation patterns that would point towards a mirrored setup as opposed to RAIDZ2.
Would I be better off with a smaller mirrored vdev at a higher fill rate than a larger 6-drive RAIDZ2 at a lower fill rate?

Please chime in.
 

tvsjr

Neophyte Sage
Joined
Aug 29, 2015
Messages
943
Just my n00b $0.02...

Your fragmentation will be high, not because of the high fill rate, but because of the parallel writes. The usual solution to fragmentation is lots of free space. I vote for a RAIDZ2 of all the drives (depending on the value of your data, you might even consider a RAIDZ1, although I'm sure this will draw some ire). You could create a mirror of the remaining 1TB on the 3TB drives if you wanted some space for something else.
 

Dice

Neophyte Sage
Joined
Dec 11, 2015
Messages
1,205
You could create a mirror of the remaining 1TB on the 3TB drives if you wanted some space for something else
Had no idea this was possible!
That is actually really neat. Definitely something to think about.
 

tvsjr

Neophyte Sage
Joined
Aug 29, 2015
Messages
943
I'm pretty sure you can. Now I'm doubting myself :)
 

Sakuru

Senior Member
Joined
Nov 20, 2015
Messages
527
No, do not create vdevs out of partitions. If you create 1 vdev with all drives, it will use 2 TB from each drive. The extra 1 TB on the 3 TB drives won't be used unless you upgrade all of the 2 TB drives to 3 TB drives.
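Sakuru's point can be sketched as arithmetic. This is a hypothetical example assuming a 6-drive RAIDZ2 mixing 2 TB and 3 TB drives (the thread doesn't state the exact drive mix):

```python
# In a single vdev, ZFS only uses the capacity of the smallest member drive.
# Assumed drive mix: three 2 TB and three 3 TB drives in one RAIDZ2 vdev.

drives_tb = [2, 2, 2, 3, 3, 3]
per_drive = min(drives_tb)                         # each member contributes 2 TB
usable = (len(drives_tb) - 2) * per_drive          # RAIDZ2: two drives of parity
stranded = sum(d - per_drive for d in drives_tb)   # capacity left unused
print(usable, stranded)  # 8 TB usable, 3 TB stranded on the 3 TB drives
```

The stranded space only becomes available after every smaller drive in the vdev is replaced with a larger one.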
 

Dice

Neophyte Sage
Joined
Dec 11, 2015
Messages
1,205
The extra 1 TB on the 3 TB drives won't be used unless you upgrade all
That is pretty much what I've understood too. But I'm enough of a noob that my hardware hasn't even arrived yet.

Is there an IOPS advantage in a mirrored vdev of 2 drives, compared to a RAIDZ2 of 6 drives, big enough to motivate splitting my drives into a mirrored vdev AND an otherwise useless pool?
Here's where my judgement is lost.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
6,315
60 * 30 = 1800 megabytes / sec. I'd look into having a 10 gigabit NIC in the server. 60 concurrent active Samba sessions sounds a lot like random IO to me. I'd go mirrored vdevs (i.e., striped mirrors) with lots of RAM so that your transaction groups are larger. A beefy multi-core Xeon CPU will be useful too. Jgreco's favorite E5-1650 is probably a good fit.

Maybe consider a RAIDZ1 ssd-only pool. SSDs are cheap these days.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,862
In a nutshell, IOPS are limited by the slowest drive in the vdev, and vdevs are additive. So if you have a 6-drive RAIDZ2 vdev in a pool (with an average of 130 IOPS per drive), you can expect ~130 IOPS from that pool. If you split them into 3 striped mirrors, you could expect ~390 IOPS for the pool.

I'm not sure IOPS is your concern though. It looks like a lot of sequential reads and writes, which are fine for a RAIDZ2 pool.
 

Dice

Neophyte Sage
Joined
Dec 11, 2015
Messages
1,205
60 * 30 = 1800 megabytes / sec.
Incorrect. It's a total of 30 MBytes, spread across a hefty guesstimate of 60 transfers.
The composition of files was analyzed recently with regard to the "misaligned pools and lost space" thread.

I think @depasseg is probably more accurate in describing the characteristics of the transfers as "sequential writes" rather than "a lot of IOPS".
From previous experience, a 1.5 TB WD Green did not cope with this scenario on NTFS; the drive became an obvious bottleneck.

So the strategy should be to first try out the 6-wide RAIDZ2; if that proves insufficient, striping 3 pairs of mirrors would be the next best thing, I suppose.

Thank you depasseg, very helpful.
 
