Backblaze Pod 5.0


wuxia

Dabbler
Joined
Jan 7, 2016
Messages
49
Hi, I'm considering FreeNAS 9.3 on a Backblaze Pod 5.0 (for backup purposes). What are your thoughts on the new version? Is it playing nicely with FreeBSD?
Is it also OK to use in a somewhat more performance-sensitive environment (VFX work, around 30 users), loaded with lots of RAM (128 or 256GB) and striped SSDs for caching?
Thanks
 

c32767a

Patron
Joined
Dec 13, 2012
Messages
371
The updates to the sheet metal are interesting. I can't comment on the hardware integration with FreeNAS.

$dayjob is probably 6 months from another round of pod deployments, so I've been looking at the specs but don't have one in the lab yet.

One thing that jumps out at me is "port multiplier". One of the nice things about the 4.0 design is that it can be built with LSI cards instead of the Rocket junk. It looks like Backblaze has gone back to multipliers, which is probably OK for their use case, but might have a performance impact, on top of Marvell's legendary reliability.
 
Joined
Apr 9, 2015
Messages
1,258
Honestly, if I were considering spending that much on hardware to put FreeNAS on, I would just contact iXsystems and let them work up a system. They know what they are doing, and you would likely end up with something a lot better and much more reliable.
 

Doug183

Dabbler
Joined
Sep 18, 2012
Messages
47
I am running a similar setup, but with Backblaze's old partner, 45 Drives. I am using their 30-drive unit with a direct-attach setup (no backplanes), connecting the drives to 2x LSI 9201 cards. It runs great. Also, I am using only one 750W PSU. The unit is on and sitting next to me and runs relatively quietly: very quiet compared to a server room, kind of loud compared to a quiet PC or home-theater setup.

For their new Pods 4.5/5.0, the big issue I see, especially for a semi-noob like myself, is that the backplanes and SATA cards are not FreeNAS/FreeBSD compatible. You might be able to make them work, but I haven't seen a clear post that says they will. That raises a second question: are those backplanes compatible with the LSI 9201 cards? I'm guessing they are not, and I personally don't have the experience or time to figure it out. So, obviously, I went with the direct-attach setup above.

Hope this helps.

Doug

PS. In my opinion, stay away from the Rocket 750 cards if you can. The upside of using them is cost savings (roughly $360 for the same number of ports) and saving one PCIe slot. The upsides of the 3x LSI 9201 are: 1) you can always resell them on eBay, 2) they are proven bulletproof with FreeNAS, so headache-free, 3) potentially much faster.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Hi, I'm considering FreeNAS 9.3 on a Backblaze Pod 5.0 (for backup purposes). What are your thoughts on the new version? Is it playing nicely with FreeBSD?
Is it also OK to use in a somewhat more performance-sensitive environment (VFX work, around 30 users), loaded with lots of RAM (128 or 256GB) and striped SSDs for caching?
Thanks
I don't have a Backblaze pod, but I will say that Backblaze is what led me to FreeNAS. In the end, I decided against risking the possible HW incompatibilities and just went with a SuperMicro server (12 drives), and later added a SuperMicro JBOD (45 drives). The server (freenas1 in my sig) still has plenty of horsepower left to add additional JBOD shelves depending on storage needs.

You might want to consider that as an alternative.
 

Doug183

Dabbler
Joined
Sep 18, 2012
Messages
47
I don't have a Backblaze pod, but I will say that Backblaze is what led me to FreeNAS. In the end, I decided against risking the possible HW incompatibilities and just went with a SuperMicro server (12 drives), and later added a SuperMicro JBOD (45 drives). The server (freenas1 in my sig) still has plenty of horsepower left to add additional JBOD shelves depending on storage needs.

You might want to consider that as an alternative.
This is a great way to go as well. I did something fairly similar with cheap PC cases and Rackable 3601 expanders. But then I wanted to consolidate, so I went the Storinator route (45 drives).
 

Doug183

Dabbler
Joined
Sep 18, 2012
Messages
47
Also, the OP wanted to know about 30-user performance. I have no real idea about this, as there seem to be a lot of different things that go into performance. My gut feeling is that the drives are the least of your problems. Rather, the performance factors are:

1) Network connections and how they are handled.
2) FreeNAS and how it queues stuff up.
3) FreeNAS and the sharing protocols it uses to talk to the client computers.
4) How full a RAIDZ is.

My server:
1) Three zpools of 10x 4TB drives each in RAIDZ2 (a mixture of Hitachi 7200RPM and Seagate 7200RPM drives).
2) Supermicro X9SCL/X9SCM
3) Xeon(R) CPU E3-1230 V2 @ 3.30GHz
4) 32GB memory
5) 10G Myricom 2 port card. 2 client computers direct attached with their own 10G Myricom cards.
6) No additional L2ARC or SLOG device.

So for performance.
I have two zpools which are nearly full, with about 1TB free on each. (When I use more than that, the zpool suddenly comes to a crawl in the OS X Finder.) I am storing about 60TB of video and connect directly from the client (OS X 10.8.5) using AFP. Copying from a mini-SAS RAID5 box on the client to the server, I get about 350MB/sec. However, OS X Finder performance and editing performance in Adobe Premiere are inconsistent at best: mostly pretty good and nearly instantaneous, but at other times beach-ball hell.
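For what it's worth, ZFS write performance is known to drop off sharply once a pool gets much past roughly 80-90% full, which is probably that sudden crawl. A quick, rough way to keep an eye on it from the shell (the pool name "tank" is just a placeholder):

zpool list tank    # the CAP column shows how full the pool is
zfs list -r tank   # USED/AVAIL per dataset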

Go figure.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Is it also OK to use in a somewhat more performance-sensitive environment (VFX work, around 30 users), loaded with lots of RAM (128 or 256GB) and striped SSDs for caching?
Are you using 10GbE connections or just 1Gig? What protocols do you expect to use to share the storage with your users? And how much usable storage do you need?
The performance considerations will come down to HDD sizes, the number of drives per vdev, and the vdev type (Z1, Z2, striped mirrors). And the protocol will dictate whether or not you would benefit from a SLOG.
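For reference, a minimal sketch of the two layouts being compared (device names da0..da7 and the pool name are placeholders; on FreeNAS you would normally build this from the Volume Manager in the GUI rather than the shell):

# Striped mirrors: best random I/O, grows two disks at a time
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7

# RAIDZ2: better capacity efficiency, any two disks per vdev can fail
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7

# If the protocol forces sync writes (NFS, iSCSI), a dedicated SLOG device can help
zpool add tank log gpt/slog0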
 

wuxia

Dabbler
Joined
Jan 7, 2016
Messages
49
Thanks. I actually made a thread about a Supermicro 24xHDD server already.
 

VictorR

Contributor
Joined
Dec 9, 2015
Messages
143
Hi, I'm considering FreeNAS 9.3 on a Backblaze Pod 5.0 (for backup purposes). What are your thoughts on the new version? Is it playing nicely with FreeBSD?

I am running a similar setup, but with Backblaze's old partner, 45 Drives. I am using their 30-drive unit with a direct-attach setup (no backplanes), connecting the drives to 2x LSI 9201 cards. It runs great. Also, I am using only one 750W PSU. The unit is on and sitting next to me and runs relatively quietly: very quiet compared to a server room, kind of loud compared to a quiet PC or home-theater setup.

I have also just installed a 45 Drives Q30 at an ultra high-def (2K/4K/6K) film and post-production/editing company:

SuperMicro X10DRL motherboard
Dual Xeon E5-2620 v3 @ 2.4GHz
256GB RAM
2x 125GB SSD boot drives
2 x LSI 9201-16i HBA cards
28 x 4TB WD Re drives (it holds 30, but one drive died and we're waiting on a replacement)
3 x 10-drive RAIDZ2 vdevs, AFP shares
3 x dual-port Intel X540T2BLK 10GbE NICs (LAGG/LACP, all 6 ports into one)

Netgear XS728T 10GbE 24-port managed switch (LAGG/LACP to NAS)
Mixture of late-2013 Mac Pros and iMacs connected to the XS728T via Cat 6 and Sonnet Twin 10G Thunderbolt-to-Ethernet adapters.
Storinator NICs and Sonnets are set to "mtu 9000"
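For anyone curious, the FreeBSD-level equivalent of that LACP/jumbo-frame setup looks roughly like this (interface names ix0/ix1 and the address are made up, only two of the six ports are shown, and on FreeNAS this is normally configured from the GUI under Network > Link Aggregations):

ifconfig ix0 up mtu 9000
ifconfig ix1 up mtu 9000
ifconfig lagg0 create
ifconfig lagg0 up laggproto lacp laggport ix0 laggport ix1 192.168.10.2 netmask 255.255.255.0 mtu 9000
# the switch ports in the LAG must also be set to LACP with jumbo frames enabled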

Running repeated tests over the 10GbE network, Mac clients consistently see 750-800MB/sec writes and 550-650MB/sec reads. That's good enough for multiple streams of most 4K and even 6K raw codecs.
We ran a test earlier in the week where we overlaid six simultaneous (but all unique) Redcode 4K 4:1 @ 24fps streams (127MB/sec) in a single Adobe Premiere session on a late-2013 Mac Pro. The Q30 had no problem, but the $7k Red Rocket-X expansion card could not handle the load.

So we then ran six ProRes 422 HQ 2K streams (~32MB/sec) to three different Macs without any stuttering or artifacts. Finally, we ran several ProRes 4444 XQ streams to each of the Macs and still had no problem.
These are unlikely loads for this small production company, even though they have gotten used to working with raw codecs.

Is it also OK to use in a somewhat more performance-sensitive environment (VFX work, around 30 users), loaded with lots of RAM (128 or 256GB) and striped SSDs for caching?

It depends on what you specifically want to use the box for - video streams or compositing, etc. Go for the most RAM you can cram into the box; ZFS loves RAM. SSD cache (as L2ARC) does not enhance FreeNAS performance for sequential/streaming content. FreeNAS (actually ZFS) is set by default to bypass the cache for streaming. From the ZFS Tuning Guide: "By default the L2ARC does not attempt to cache streaming/sequential workloads, on the assumption that the combined throughput of your pool disks exceeds the throughput of the L2ARC devices, and therefore, this workload is best left for the pool disks to serve".
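The behavior quoted above maps to a ZFS tunable you can inspect from the shell (shown for reference only; the default of 1 is almost always what you want, and on FreeNAS tunables are set from the GUI rather than directly):

sysctl vfs.zfs.l2arc_noprefetch
# vfs.zfs.l2arc_noprefetch: 1   <- 1 means prefetched/streaming reads are not fed to the L2ARC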

In fact, adding extra SSD cache may actually degrade performance, since the HDD pool's bandwidth is far beyond that of the SSDs, and the L2ARC adds overhead for the tables needed to track and flush cached data. In my Q30 below, the 30-drive pool has a total bandwidth of ~2GB/sec on writes and ~1.6GB/sec on reads. I think SSDs in RAID-0 are ~1.1GB/sec.

PS. In my opinion, stay away from the Rocket 750 cards if you can. The upside of using them is cost savings (roughly $360 for the same number of ports) and saving one PCIe slot. The upsides of the 3x LSI 9201 are: 1) you can always resell them on eBay, 2) they are proven bulletproof with FreeNAS, so headache-free, 3) potentially much faster.

I originally had the Rocket 750 installed in this Q30. But when a SATA cable and a port on the motherboard went bad, I put dual LSI 9201s in the replacement Q30. Performance seems to be better.
The Rocket was designed primarily for one function: maximizing the number of drives controlled from a single card. It saves PCIe slots and simplifies things, and in datacenters that is perfect; performance is a distant consideration behind density and cost.

I actually wonder if one HBA per vdev might be even better... the cost is minor compared to the overall NAS price. I came across a fairly well-documented website that made this claim.
 

wuxia

Dabbler
Joined
Jan 7, 2016
Messages
49
It depends on what you specifically want to use the box for - video streams or compositing, etc. Go for the most RAM you can cram into the box; ZFS loves RAM. SSD cache (as L2ARC) does not enhance FreeNAS performance for sequential/streaming content. FreeNAS (actually ZFS) is set by default to bypass the cache for streaming. From the ZFS Tuning Guide: "By default the L2ARC does not attempt to cache streaming/sequential workloads, on the assumption that the combined throughput of your pool disks exceeds the throughput of the L2ARC devices, and therefore, this workload is best left for the pool disks to serve".

In fact, adding extra SSD cache may actually degrade performance, since the HDD pool's bandwidth is far beyond that of the SSDs, and the L2ARC adds overhead for the tables needed to track and flush cached data. In my Q30 below, the 30-drive pool has a total bandwidth of ~2GB/sec on writes and ~1.6GB/sec on reads. I think SSDs in RAID-0 are ~1.1GB/sec.

The heaviest workload will be compositing: reading sequences of 1-10MB files (EXR, DPX, etc.). And while it's true that SSDs in RAID-0 will be slower, NVMe is really cheap nowadays: you can get around 4-5GB/s for about $600 (2x Samsung 850 Pro), and it could scale very quickly just by adding more and/or faster drives. Of course, the compositing read pattern is not streaming... Even editing and grading wouldn't be considered strictly streaming, as you don't usually read a whole big (e.g. ProRes) file at once, but rather a sequence of chunks (in/out points) from many files. I think both Avere and Isilon (which are used in big VFX houses) use flash/cache storage in their systems, probably for a good reason.
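If you do go that route, cache devices are low-risk to experiment with, since they can be added and removed without touching the pool data. A rough sketch (the pool name and nvd0 device name are placeholders; check what your NVMe drive actually shows up as):

zpool add tank cache nvd0
zpool iostat -v tank 5     # watch per-device traffic to see whether the L2ARC is actually being hit
zpool remove tank nvd0     # take it back out if it isn't helping
# keep in mind L2ARC headers live in RAM, so a very large L2ARC eats into the ARC itself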
 

VictorR

Contributor
Joined
Dec 9, 2015
Messages
143
I'm sorry, I completely overlooked that you were building an SSD array and not using SSD cache for a spinning-drive array! Total brainfart on my part.
Yes, I agree with your explanation. I was actually thinking of doing the same for this small shop, as they want to move color and a few other processes in-house.
I hadn't looked at the latest Pod design; there are some interesting features.
 

wuxia

Dabbler
Joined
Jan 7, 2016
Messages
49
There must be some misunderstanding, because I'm actually thinking about a regular HDD array with lots of RAM and a big NVMe L2ARC. As I said, I already decided to go the Supermicro way, but the Pod is still an interesting system.
 

VictorR

Contributor
Joined
Dec 9, 2015
Messages
143
There must be some misunderstanding, because I'm actually thinking about a regular HDD array with lots of RAM and a big NVMe L2ARC. As I said, I already decided to go the Supermicro way, but the Pod is still an interesting system.

Sorry again, just me operating with half my brain.
 

VictorR

Contributor
Joined
Dec 9, 2015
Messages
143
Man, there is some great info in your other thread. Just about every question I had is answered.
Wish it had existed a few months ago, before I ordered the Q30!

BTW, I now regret getting the 4TB drives instead of the 6TB. So, 30x 4TB in RAIDZ2 gives us 78TB usable.
People are already complaining about space... and they've barely started using it (15-20TB so far).
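For anyone doing the back-of-envelope on that layout (assuming the 3 x 10-drive RAIDZ2 vdevs described earlier): each vdev keeps 8 of its 10 drives for data, so 3 x 8 x 4TB = 96TB of raw data capacity, or roughly 87TiB, before ZFS metadata, reservations, and recommended free-space headroom eat into it.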

They have six 64TB G-Tech Studio-XL Thunderbolt RAID pods sitting around, ~80% full. A lot of that is redundant file/folder copies, because they were working on non-networked, local storage and would copy stuff endlessly as they jumped from station to station. Hopefully I can get them to clean up and consolidate those drives to free up one Studio-XL as nearline storage. They say they got up to 800-1,200MB/sec reads from them, so 15TB transfers back to the online FreeNAS box should be relatively quick as needed (4-6 hours at night).

They'll be needing a second box soon, as a couple of shows go into full production in the next few months. I'll be following your progress and testing closely!
Beyond LTO, they are also going to need a fairly large amount of much slower nearline storage (360+ TB) soon.
 

wuxia

Dabbler
Joined
Jan 7, 2016
Messages
49
You can always get an expansion JBOD chassis plus an extra HBA and just add it to the zpool, if you have a spare slot. A 44-drive Supermicro SAS chassis is ~$2,000. Your server should be able to handle it. Of course, you should increase your RAM accordingly.
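A minimal sketch of what that expansion looks like at the pool level (device names are placeholders, and the new vdev should match the width and type of the existing ones; note that on FreeNAS 9.3 a top-level vdev cannot be removed once added, so double-check before committing):

zpool add tank raidz2 da30 da31 da32 da33 da34 da35 da36 da37 da38 da39
zpool status tank      # confirm the new vdev shows up alongside the old ones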
 

VictorR

Contributor
Joined
Dec 9, 2015
Messages
143
I asked 45 Drives about 512GB RAM. They said the problem with taking the X10DRL mobo beyond 256GB of RAM is that we'd have to go to 64GB DIMMs, and the price difference would be something like $7k. With drives, that would bring the price fairly close to that of a whole new unit (I haven't looked into the price of 64GB DIMMs, though).
 