How much RAM do I need?


aadje93

Explorer
Joined
Sep 25, 2015
Messages
60
Hello everyone :)

I'm planning a new "dedicated" FreeNAS build. I've used it a lot on XenServer as a test for NFS shares etc., but now I want it as my dedicated storage (the DS410 with 2TB disks is filled to the top).

My build:

Case: Norco RPC-4224 (the mighty, well-known 24-disk case)
Motherboard: Supermicro MBD-X10SRL-F-O
Memory: Samsung M393A4K40BB0-CPB*
CPU: Intel Xeon E5-1620 v3 - Boxed
ZIL: 2x Intel DC S3700 Series - 100GB, RAID 1*
HDDs: 4x RAID-Z2 vdevs of 6 Western Digital Red 4TB (WD40EFRX)
PSU: An XFX 1000W model that's lying around (so the expected load is ~30%, with enough headroom for startup)
L2ARC: 2x Samsung 840 EVO (120GB) that are lying around (RAID 0?)

The server's main purpose will be a media vault for all my photos/videos, backups of all computers (and maybe MyCloud, more on that later), VM storage (NFS shares) for XenServer, and last but not least, possible off-site backups from friends, which can only be done over the gigabit network; eSATA is unfortunately not an option in my rack.

Now my main questions:
I've read everywhere that there's a rule of thumb for the RAM: 1GB of RAM for every TB of HDD. As I'm using at least 4TB drives (starting with one RAID-Z2 vdev of 6) due to the price point (6TB is way too expensive for now, maybe in the future), I will be needing "at least" 96GB of RAM. Things like dedup/encryption are also on the list, and possibly higher TB/drive counts.

32GB RAM modules are only about 20 bucks more expensive than two 16GB modules, so I think it's a wise investment to buy 32GB modules (starting with one for the first vdev, slowly expanding as needed) so I can max out at the almighty 256GB (8x 32GB). Or would 128GB be the max I'm ever going to need?

The XenServer lab (or ESXi) will generate a lot of writes, and data integrity is very high on the list, as the VMs will be running off the FreeNAS machine instead of local RAID (I've got 16x 10K SAS drives in the hosts for the heaviest IO, but I prefer to use a dedicated ZFS machine for bulk storage/non-high-IO stuff like ISOs, boot devices etc.).

The ZIL will be a mirror of Intel DC (Datacenter?) SSDs that are supposed to be very write-resistant (PBs of data over their lifetime). Do I need to make a RAID 1 array on the motherboard RAID controller, or is FreeNAS able to make a software RAID 1 for the ZIL? (Too much hassle to try out in the VM; I prefer to do it right on the dedicated machine.)

Secondly, almost the same question for the L2ARC: can FreeNAS make a software RAID 0 array for the L2ARC, or do I need to make it myself on the RAID controller? I've only seen the multiple-boot-medium install, which I will be doing too, for max security.

As I'm slowly building it out (I can't just throw down 7K for a server :P), can I just expand my zpool with a live dataset on it (the running VMs), or do I need to "cut them off" so all transactions to the zpool are stopped before expansion?

Lastly, the FreeNAS server will be running from a 1.5kVA APC rack UPS in case the power outlet decides it doesn't like my hungry server. For Windows there's a software install that notifies you and shuts down the computer automatically; is this also possible in FreeNAS?

I'm not really a BSD guy, but I do know a little about CentOS (running an Amazon EC2 VPS with a TLS-secured website), so I know my way around the CLI, but don't expect me to make a list of all subdirectories under /xxx/xx/xx :)

I'm looking forward to your answers and suggestions. This will be the first "big build" after some Synology machines that are painfully slow due to RAID 5 (~10-20 MB/s :( ), but that's their price bracket: you can buy some nice Synology for 3K, but you don't get ZFS!
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Congrats!

1st - You never, ever need to bother with HW RAID. All drives should be directly attached.
2nd - I can't get a sense of what your VM workload will be, and this is important, because you might want to dedicate some disks to striped mirrors specifically for your VMs. RAID-Z2 performance could really become a major bottleneck depending on that workload. I'm not sure what your plan is for the 16x 10K SAS drives.
3rd - Along the lines of #2, it's hard to give you an exact amount of RAM without understanding your workload. If you aren't running any jails and got rid of the L2ARC, I would say you could get by with 32GB. The rule of thumb starts to tail off as you grow into the larger range of storage sizes. Of course, more is better. If you could swing 128GB, then go for that.
4th - The L2ARC will greatly affect your RAM usage, so make sure you do some more reading. If you don't have enough RAM, your L2ARC will actually slow down system performance.
5th - UPS - there is a UPS service in FreeNAS that can do what you are looking for. And it can act as either a master or a slave (look into NUT).
6th - You can expand a live dataset, but you might decide it's better to do it during a period when the dataset isn't too busy, just in case (see the sketch at the end of this list).
7th - There tends to be a preference for the Supermicro chassis over the Norco. I forget the exact reasons, but there are some threads on here you should probably read before pulling the trigger.
8th - You need to look into how to connect the MB to whatever chassis you decide to use. For instance, that MB has 10 discrete ports vs. an onboard HBA. HBAs reduce the cabling, and some backplanes are set up for that simple connection and then use expanders to connect all the drives. Other backplanes provide a single connection for each drive. It's up to you, but make sure they match.
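
For #6, a rough sketch of what the expansion looks like from the CLI (the pool and device names here are hypothetical; the FreeNAS GUI's volume manager does the same thing):

    # check pool health before growing it
    zpool status tank
    # add a second 6-disk RAID-Z2 vdev to the existing pool
    zpool add tank raidz2 da6 da7 da8 da9 da10 da11

The new vdev is usable immediately; ZFS doesn't rebalance existing data, it just favors the emptier vdev for new writes.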
 

aadje93

Explorer
Joined
Sep 25, 2015
Messages
60
Hi depasseg,

Thanks for your reply! To answer your questions:

1. Well, that's clear :) FreeNAS does the RAID 1/0 for the SLOG/L2ARC then?
2. My VM load will be around 6-7 CentOS VMs running (testing stability/compatibility of different programs), a few Windows VMs (mainly 95/98/XP and 7), and last but not least some minor VMs for network statistics/monitoring of networking hardware (logs etc.).

But I want the maximum possible performance from the array; I'm planning the 4x Z2 vdevs for random IO. I want the arrays to be the bottleneck, not RAM/CPU or something :)

3. I'm not planning jails, as the XenServer will do that for me in a separate machine :)
4. I know the L2ARC can eat all the RAM, and I've even read some stories about worse performance after an L2ARC was added. My L2ARC will be ~240GB (2x 120GB Samsung 840 EVO), so what RAM usage from the L2ARC will I see? Its main purpose is the virtual machines, as they will generate a lot of random IO.
5. Good to know, but what's the difference between master and slave? Is it possible for FreeNAS to send a shutdown command to other servers (SSH?)? I suppose the UPS gets connected through a USB cable?
6. So it's best done when the VMs can be shut down, so almost all traffic to the ZFS pool is gone.
7. I know Supermicro is preferred due to build quality, but I already own the Norco. Secondly, a 24-disk Supermicro case (empty) will cost me 1.5K new, and they don't seem to support a normal ATX PSU, which is a big no-no for me. I know server/redundant PSUs are better, but when does a PSU that's loaded at 20-30% suddenly die? It's a well-known brand that makes them (Seasonic makes the XFX PSUs, right?).
8. I'm using 3x LSI 9211-8i (the one that everyone flashes over their IBM card) with the SAS connectors on the back; got them cheap from eBay :) The backplane has 6 mini-SAS ports, so I can directly connect them to the LSIs when flashed to IT mode (I think they are in RAID mode, as the box mentions RAID options, but they may be IT already, since the main selling point is "HBA").
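
A quick way to check which firmware mode the cards are actually in once they're installed (a sketch; sas2flash ships with FreeNAS, and the exact output wording may differ per firmware version):

    # list all LSI controllers; the firmware product ID should be tagged (IT) rather than (IR)
    sas2flash -listall
    # the mps driver probe messages also show the firmware version
    dmesg | grep mps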

The SSDs will be directly connected to the motherboard. I think 128GB of RAM will be enough for my build, but if I ever want to use dedup/other fancy stuff, is it going to be enough?
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
1. Yes - the L2ARC is striped and the SLOG is mirrored (see the sketch after this list).
2. Your IOPS will be bound by the slowest member in each vdev. If you are trying to maximize IOPS, you need to maximize the number of vdevs (either striped mirrors or multiple 3-disk RAID-Z1s).
4. I don't know the exact answer, but I'm sure it's been covered in the forums or in Cyberjock's newbie guide. I seem to recall 1GB of RAM per 5GB of L2ARC.
5. The master has the direct connection to the UPS (either serial or USB), and slaves communicate with the master via the network. Slaves can be any NUT client (not just FreeNAS).
6. That's what I would do.
7. They all die eventually. :smile:
8. Sounds like a good plan.
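
For #1, roughly what it looks like from the CLI (hypothetical pool/device names; the GUI volume manager can do the same):

    # mirrored SLOG out of the two Intel DC S3700s
    zpool add tank log mirror da24 da25
    # cache (L2ARC) devices - ZFS stripes multiple cache devices automatically
    zpool add tank cache da26 da27

And going by the rule of thumb in #4, a 240GB L2ARC would claim something like 48GB of RAM just for the headers, so size accordingly.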

Read long and think hard about enabling dedupe before you do it.
 

aadje93

Explorer
Joined
Sep 25, 2015
Messages
60
LOL'd at your number 7 :) A Supermicro PSU will also die then, and by the time they do die, they can't be replaced anymore ;) (of course they will last longer than an XFX PSU)

But I've read a little about sync=always: does this improve data integrity at the cost of reduced write speed (like 30-40 MB/s with an Intel S3700 SSD, instead of 90+ MB/s with no sync)?

Will this only apply to my VM writes if I enable sync, or to my whole pool (like Windows shares/FTP transfers)?

Write/read speed is important to me, but data safety is even more important!
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
:smile: I'd be surprised if you couldn't find a replacement Supermicro PSU in 5-10 years.

Yes, sync=always improves data integrity at the expense of speed. The basic way to think of it is this:
When a write comes in, ZFS writes it to the ZFS Intent Log (ZIL). If sync=always isn't set, the write can be acknowledged without the data actually having been written to disk. If sync=always is enabled, the data must be written to disk before the write is acknowledged. This is where the slowdown comes from, and the SLOG is a way to speed up the write: instead of going to the ZIL on the pool, the write goes to the SLOG at the same time it goes to memory, and the SLOG can acknowledge much quicker than the pool. The SLOG's only purpose is to hold the writes that were acknowledged but not yet written to the pool (in the case of a power failure, for instance). Cyberjock has a much better writeup, but this is it in a nutshell.

You can enable sync=always at the dataset level. I believe the protocol (NFS vs. iSCSI) and the client can also choose to require or disable sync writes.

So if data safety is more important than speed, use a high-quality SLOG (low latency and its own power protection - like built-in capacitors).
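
To make that concrete: sync is a per-dataset property, so something like this (hypothetical dataset names) protects the VM storage without slowing the bulk shares:

    # force every write to the VM dataset through the (SLOG-backed) ZIL
    zfs set sync=always tank/vms
    # leave bulk/media at the default: honor whatever the client requests
    zfs set sync=standard tank/media
    # verify
    zfs get sync tank/vms tank/media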
 

aadje93

Explorer
Joined
Sep 25, 2015
Messages
60
Thanks for your explanation. So I can choose to use sync=always for NFS (VMs etc.) and iSCSI (Windows installations), but not for FTP etc. on the bulk storage ("try again" on fail).

Good to hear!
 

aadje93

Explorer
Joined
Sep 25, 2015
Messages
60
And how about the 8TB Seagate Archive (v2) disks? I've read they are not that fast, but as I'm planning to use 4x 6-disk Z2 vdevs, will they catch up a little? Technically, each (non-parity) disk only has to do 10-12 MB/s to saturate my gigabit on bulk file transfers. Random IO will be handled by the L2ARC and SLOG, and for integrity I'm planning sync=always, which will already reduce the speeds a little...
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
I haven't heard too much about them. There are some posts though. Try searching for SMR.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Read long and think hard about enabling dedupe before you do it.

This should be in giant bold font, italicized, underlined, and blinking Geocities-style. If you think L2ARC is hard on your RAM then deduplication munches it like popcorn. For a typical virtualization workload with 4K blocks you're looking at ~5GB of RAM per 1TB of indexed data, on top of whatever else you're consuming for L2ARC indexing and ARC.

If you enable this, enable it very, very selectively - enable it on specific datasets/zvols, and only put data there that you know will benefit strongly from it.
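
If you do go down that road, ZFS can estimate the payoff before you commit: zdb -S simulates the dedup table against a pool's existing data, and the property is set per dataset (names below are hypothetical):

    # dry run: prints a simulated DDT histogram and overall dedup ratio (can take a while and use a lot of RAM)
    zdb -S tank
    # only worth enabling if the simulated ratio is comfortably above 1.0, and then only on the dataset that benefits
    zfs set dedup=on tank/vm-templates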
 

aadje93

Explorer
Joined
Sep 25, 2015
Messages
60
Thank you for your replies. I'll wait for some more real-world performance numbers; maybe a pool of 3x 6TB vdevs, and an 8TB vdev pool for backups.
 