Help please, first start...


Szyrs

Dabbler
Joined
Jan 23, 2015
Messages
15
Hi,

I've been working on upgrading my server from a Windows machine for almost a year, due to some hurdles along the way. Having my storage space in pieces is getting to be more than just a thorn in my side.

I have finally arrived at a point where I know what I'm going to build, but I have some bits here that I'm thinking I could set up as a FreeNAS box while I scrimp and save.

I have an HP Z400 that has 24GB of ECC RAM and a few cheap SATA cards, so I'm thinking to change the case and a few other bits and see what I can accomplish. I went with FreeNAS over similar variants because it seems so well supported and popular. It's a little difficult to follow a few things, I guess because the range of users is so wide.

I have no previous experience with FreeBSD or ZFS, but I do pick things up and I'm not faint-hearted.

I've been advised against deduplication and I'm happy to accept that for this machine. I can't analyze its effectiveness until this machine is running anyway, so it's definitely not an issue at this point.

I do not need encryption on this machine. I would like compression. I would like 2 or 3 drive redundancy, but that depends on the number of drives I can run in this system.

There will be a very low user count, perhaps 11 concurrent users at most. File sizes vary from small plain-text files to video and media files that are several gigs in size. Streaming media will be dependent on network bottlenecks and available RAM.

Aside from that, I simply want a ZFS storage device, as large as I can make it with the hardware I have (more because I'm sick of waiting than because I'm stingy).

I have, I think, 14x 3TB drives free at the moment. This is what is holding me up, though:

How many drives/TB of HDD can I put into this machine, given the ZFS requirements and the absolute ceiling of 24GB of RAM? I believe that 24GB is not even close to enough RAM to start considering an L2ARC, so instead I'm left limiting my storage space.

How many 3TB drives can I run with 24GB of RAM, and does that include Hot Spares?

Many thanks.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Your system should run and be safe with 24 GB, but it's hard to say what performance will be like. From the spec sheet I find on HP.com, it sounds like this is a Xeon with ECC RAM, which are good things.

I'm a little concerned about the "cheap SATA cards"--those have been known to cause problems. They may work, but you'd probably be better off replacing them with a proper HBA like an IBM M1015 or comparable LSI card.

As for pool layout, two six-disk RAIDZ2 vdevs would work well and give 21-22 TiB of net storage space.
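
In case it helps to picture it, here's a minimal sketch of that layout from the command line (you'd normally do this through the FreeNAS GUI, and da0-da11 are just placeholder device names):

# one pool containing two six-disk RAIDZ2 vdevs; da0-da11 are example device names
zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 \
    raidz2 da6 da7 da8 da9 da10 da11
# lz4 compression is cheap and generally worth enabling
zfs set compression=lz4 tank

Each vdev gives up two disks to parity, so the raw arithmetic is 2 x (6 - 2) x 3 TB = 24 TB, which lands in the 21-22 TiB range once converted to TiB and before filesystem overhead.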
 

Szyrs

Dabbler
Joined
Jan 23, 2015
Messages
15
Thanks for the reply.

I have a number of cheap RocketRAID SATA adaptors and I also have a 3Ware 9650SE, which is more of a RAID controller than an HBA. I'm just reading some stuff about FreeBSD users having success using this 3ware card in "single" drive mode, rather than "JBOD"...

Yes, a W3690 and 24GB of ECC RAM.

So you're saying that 24GB will be happy camping, but I'm a little unclear on the rest?

Is 21TB net of storage space a ceiling? I'm assuming that amounts to a gross of 27TB (-ish) in z2?

And is that with compression running?

The rules of thumb on this stuff seem to vary from 1GB to 30GB of RAM per 1TB of storage, with quite fine granularity; this is even true of the n00b/install guides. The forum threads that reference it seem to boil down to a handful that are hundreds of pages long, beginning at the start of the decade and covering everything from simple questions to engineer-level debates. I look forward to getting into it all at a very low level when I build my actual server, but for now I'm just trying to assess the viability of this option.

The last deciding factor before I actually put it all into a bigger case is how much storage I can safely shoehorn into it. 20TB-ish is acceptable, 30-40TB would be ideal, and under 12TB isn't really worth wasting the CPU and RAM on, as I have better uses for them. I realise that I'm not in an optimum position with hardware, but I'm sure you understand where I'm coming from. There isn't much point in building a machine that will require upgrading long before Christmas comes along...

Also, from what I can gather it's best to tinker with vdevs and zpools as little as possible, so if I can iron out as many of the hardware hiccups as I can in theory, then I just might stay lucky... Cheers
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
I'm just reading some stuff about FreeBSD users having success using this 3ware card in "single" drive mode, rather than "JBOD"
Single drive mode is not a good solution. If the cards won't do JBOD, even after re-flashing, use different cards.
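
One quick sanity check once the drives are hooked up is whether FreeBSD sees them as plain disks. A rough sketch (the output will vary by system):

# list what the OS actually sees; real disks should appear as da*/ada* with their model strings
camcontrol devlist

If the drives only show up as volumes exported by the card, ZFS isn't talking to the disks directly.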
Is 21TB net of storage space a ceiling?
Once you have more than the 8GB minimum, it all depends on workload vs performance expectations, and in general, the more RAM you have, the looser the requirements become.
from what I can gather it's best to tinker with vdevs and zpools as little as possible
I would say this is mostly just a practical issue. If you set up your vdevs and your pool on top of them, and then dump many TB of data onto it, it can be very inconvenient to change the layout. Moving many TB of data around takes a long time and depends on having temporary storage capacity available.
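
For what it's worth, the usual way to move a dataset wholesale is snapshot-and-replicate, which is straightforward but still takes as long as the data is large. A rough sketch (the pool and dataset names are made up):

# take a recursive snapshot of the dataset, then replicate it to another pool
zfs snapshot -r oldpool/data@migrate
zfs send -R oldpool/data@migrate | zfs receive newpool/data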
 

Szyrs

Dabbler
Joined
Jan 23, 2015
Messages
15
Hi Robert,

I apologise for not quoting, but I'm on my phone.

1) Is that a general point you're making, or one specific to this card? What am I supposed to reflash it with? JBOD is an option on this card and the firmware is up to date.

This is one of the threads that I was reading, from the FreeBSD forums: https://forums.freebsd.org/threads/3ware-jbod-and-zfs-controller-settings.15176/

" If you have good power coming into the building and a UPS, then enable the Performance profile and the write cache on the drives. The performance profile puts the cache on the controller itself into write-back mode (meaning the controller tells ZFS that data has been written to disk as soon as it's written to cache). Without the performance profile, the controller cache is put into write-through mode, where the controller waits for data to hit the disk before telling ZFS that it's on disk (IOW, it's a read cache).

You also don't want to use JBOD mode. In JBOD mode, all of the controller stuff is turned off (onboard cache, for example) and the controller becomes a plain SATA controller.

Not sure if you can switch 1 disk at a time from JBOD to Single. You may have to backup, flip the controller out of the JBOD mode, create the 4 SingleDisk "arrays", recreate the pool, and restore your data.

That's how we run all our 3Ware controllers with ZFS (write cache enabled on drives, performance profile, queueing, etc ... everything set to max performance).

Oh, and disable all verification tasks. Let ZFS handle that as well (zpool scrub)."

2) I can't give a great deal more information on workload and performance expectations than I have already given above. Is there really no way of calculating it without first running the machine?

3) I recently watched the PowerPoint presentation for noobs that is stickied in these forums. It emphasised that a break or corruption in the zpool or vdevs will lose all of my data. For that reason alone, I'd rather do one initial setup and then do my best to maintain it, rather than be chasing my tail straight out of the gate.


Thanks for taking the time to answer, the assistance is greatly appreciated. Sorry if I'm being a bit thick...
 

Szyrs

Dabbler
Joined
Jan 23, 2015
Messages
15
Sorry, I promise that this is the last one!

If the ideal is to run the card in JBOD (I imagine that ZFS wants writes committed to disk instead of to cache for a reason better than benchmark results?), then is there any benefit to running the 3ware card over a number of RocketRAID devices? The LSI cards tend to be costly here...
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
1) Is that a general point you're making, or one specific to this card? What am I supposed to reflash it with? JBOD is an option on this card and the firmware is up to date.
It's a general point. If a card has a true JBOD mode then there should be no need to re-flash it. As far as the thread you read goes, I would make two points. First, I don't know if there's something special about that card that makes single-disk mode a better bet than JBOD, but over and over the refrain in these forums (from much more experienced and knowledgeable members than me) is that single-disk mode is not the way to go. Second, this:
In JBOD mode, all of the controller stuff is turned off (onboard cache, for example) and the controller becomes a plain SATA controller.
In these forums, the latter is what is recommended, because then ZFS gets a true picture of what's going on with the drives. Letting the controller do things like write-back caching is asking for trouble. The poster of this particular comment qualified it with "If you have good power coming into the building and a UPS", which is fine up to a point, but still may leave room for the card to deceive ZFS about the on-disk state. That thread also doesn't address the issue of SMART data ... and it's from 2010.
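
On the SMART point, with a proper HBA you can query the drives directly. A minimal sketch (the device name is just an example; going through a RAID card usually needs extra -d options, if it works at all):

# read identity, attributes and error logs straight from the disk
smartctl -a /dev/da0
# start a long self-test in the background
smartctl -t long /dev/da0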
2) I can't give a great deal more information on workload and performance expectations than I have already given above. Is there really no way of calculating it without first running the machine?
For light workloads you can be reasonably confident up front, but with heavier workloads there's no substitute for realistic testing. 11 users working with video sounds like a potentially heavy workload for a small box. You say you're RAM limited, but other components could have an impact too, notably NIC and HBA, which are replaceable.
a break or corruption in the zpool or vdevs will lose all of my data.
I believe if you reread the presentation you'll see that a pool will remain healthy while all its component vdevs remain healthy, and conversely, a lost vdev means a lost pool. The most obvious choice for 14 3TB drives would be a pool made up of two RAIDZ2 vdevs, but your workload may dictate a different layout.

You might consider experimenting for a few weeks with toy pools before committing to a production layout. One member went as far as buying a bunch of cheap used drives to play with. You can also play in a VM but that's more useful for learning the GUI and such.
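
You don't even need spare drives to get started; ZFS will happily build a throwaway pool out of ordinary files. A toy example (paths and sizes are arbitrary):

# create six sparse 1 GB backing files and build a RAIDZ2 pool from them
truncate -s 1G /tmp/d1 /tmp/d2 /tmp/d3 /tmp/d4 /tmp/d5 /tmp/d6
zpool create toypool raidz2 /tmp/d1 /tmp/d2 /tmp/d3 /tmp/d4 /tmp/d5 /tmp/d6
zpool status toypool
# tear it down when you're done
zpool destroy toypool

That's enough to practise scrubs, simulated disk failures and replacements without putting anything real at risk.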
 

Szyrs

Dabbler
Joined
Jan 23, 2015
Messages
15
The W3690 doesn't support VT-d, which I believe is required to virtualise FreeNAS properly. That would have been ideal..

Yes, my network is seriously underweight but that is a problem that I'm currently working on.

I guess we'll see what happens. Thanks very much for your time.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
That would have been ideal..
I wasn't proposing virtualizing FreeNAS except as a familiarization platform. I wouldn't dream of trying to guide you on virtualizing FreeNAS in production; that's way over my head.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
I have a number of cheap RocketRAID SATA adaptors
I just want to mention that, while you're unlikely ever to see it on a list of recommended hardware for FreeNAS, I had good results with a HighPoint Rocket 640L, which is a plain HBA (in contrast to their RocketRAID cards). The reason people here won't recommend it is that it has a Marvell chipset. I found it brand new on eBay for a great price and used it for several months without any issues, but that's just one data point.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I had that exact 3ware card. It is not a good choice for ZFS at all. In FreeBSD, if you are a pro, you can make it work (and I say that very loosely). For FreeNAS it's downright dangerous and is almost certainly going to kill your zpool later; the question is when. I bought the 9650SE and ended up selling it less than two months later at a significant loss... but I learned the lesson before data was lost.

If you are trying to do this 'by the book' and/or your data is important, do NOT use that 3ware card. It's not worth it when you can get something that will be amazing and proper for $125 or so on eBay.

As @Robert Trevellyan said, Marvell isn't recommended. It may or may not work fine. Some work fine until you hit some edge case under heavy load, and then the system goes offline when the disks in the zpool detach. I don't recommend Marvell in the slightest for that reason.

Stick with the tried-and-true LSI cards. Those work great, they perform great, and they aren't taking risks. ;)
 