Planning a 10Gb-connected NAS


UnderSlepT

Cadet
Joined
Aug 28, 2018
Messages
5
Hi,

I am slowly upgrading my current NAS and I am planning to get the most out of my future 10Gb connection. Currently I have this setup:

CPU - Intel Pentium G4600
RAM - 2 x Crucial 4GB DDR4 2400 MHz
MB - ASUS PRIME B250M-K
Boot - 2x 16GB Sandisk Cruzer

Pool 1 - RAIDZ1, 3x 3TB WD Red
Pool 2 - Kingston V300 120GB

Now I am planning to add three more WD Reds to pool 1, rebuild it as mirrors, and switch the boot volume to an Intel Optane 16GB (one of the flash drives started throwing errors after just 8 months of use, and they generate a lot of heat). My goal is 8-12 WD Reds in pool 1, and potentially upgrading pool 2 to a mirror of 1TB SSDs for my VM host. I currently use Intel SATA and plan to switch to an LSI 9300-series 16-port HBA. My network card is currently an Intel PRO/1000 CT; I will add an Intel-chipset-based Supermicro SFP+ card. For the platform, I will probably use one of the newer Ryzen 5 quad-cores with X470 + 32GB ECC RAM.

The Intel Optane 900p and 800p are currently on sale, so I was wondering: if I set the pool to always write synchronously, will that improve CIFS performance when working with small files, since everything would be quickly written to the log device and then sequentially written to the hard drives? I am asking since I have only read about proper L2ARC and LOG use and size planning.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
This all sounds like you are telling us what you have already decided to do. Did you want advice about better options? Some of the parts you have chosen are not necessarily the best.

Sent from my SAMSUNG-SGH-I537 using Tapatalk
 

UnderSlepT

Cadet
Joined
Aug 28, 2018
Messages
5
Thank you for your reply.

I know. I need 8+ TB of storage, I already have 3x 3TB WD Reds, and I want mirrors for sequential speed over the 10Gb network. I have heard from ZFS devs that each vdev is limited in IOPS to its slowest device, so you want the smallest vdevs possible for performance; the pool then splits IOPS between vdevs, sending each operation to whichever vdev is not currently busy - they called it natural load balancing or something like that. I will potentially have some VMs on the secondary pool, or I may just buy a Samsung 970 Evo 2TB for the VM host. Right now I am thinking about a potential SLOG SSD - an Optane 900p plus a UPS - for better 10Gb CIFS performance. I would be happy with 300-400 MB/s.
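
To show what I mean, here is the rough model I have in my head (an illustrative Python sketch; all the numbers are placeholders, not benchmarks):

    # My rough mental model: each vdev delivers about the random IOPS of
    # its slowest member, and the pool stripes requests across vdevs.
    DRIVE_IOPS = 100  # placeholder random IOPS for one 5400 rpm WD Red

    def pool_random_iops(n_vdevs, slowest_member_iops=DRIVE_IOPS):
        # a vdev is capped at its slowest member; vdevs add up
        return n_vdevs * slowest_member_iops

    print(pool_random_iops(1))  # one big RAIDZ vdev: ~100 IOPS
    print(pool_random_iops(6))  # six 2-way mirrors: ~600 IOPS

That is why I want mirrors: more vdevs out of the same number of drives.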
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
so you want the smallest vdevs possible for performance
I think you have some basic misunderstandings. Speed is not related to vdev size; it is related to the speed of the drives that make up the vdev. Having many vdevs is normally about improving random IO, not sequential IO, though having many vdevs should make the pool faster overall, especially when dealing with small files. However, it will likely take at least eight vdevs to reach your desired speed, because there is an amount of system overhead that must be allowed for, and small-file access is notoriously difficult to make fast. It might be better to build a pool of solid-state drives instead of spending the money on a pool of ordinary drives large enough to get speed nearly, but not quite, as good as an SSD.
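
To put rough numbers on that (a back-of-the-envelope Python sketch; the drive speed and the overhead factor are assumptions, not measurements):

    import math

    TARGET_MBPS = 400      # the speed you said you would be happy with
    DRIVE_SEQ_MBPS = 150   # assumed sequential speed of one WD Red
    OVERHEAD = 0.6         # assumed fraction left after system overhead;
                           # small-file workloads lose considerably more

    # each mirror vdev writes at roughly the speed of one drive
    vdevs_needed = math.ceil(TARGET_MBPS / (DRIVE_SEQ_MBPS * OVERHEAD))
    print(vdevs_needed)    # ~5 for sequential; small files push it to 8+

Numbers like these are only a starting point; small-file access behaves much worse than the sequential math suggests.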
The Intel Optane 900p and 800p are currently on sale, so I was wondering: if I set the pool to always write synchronously, will that improve CIFS performance when working with small files, since everything would be quickly written to the log device and then sequentially written to the hard drives?
That is not how the Separate LOG (SLOG) works.

You probably need to go back and review some documentation to get a better understanding of the technology. Please review these resources:

Slideshow explaining VDev, zpool, ZIL and L2ARC
https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/

Terminology and Abbreviations Primer
https://forums.freenas.org/index.php?threads/terminology-and-abbreviations-primer.28174/

The ZFS ZIL and SLOG Demystified
http://www.freenas.org/blog/zfs-zil-and-slog-demystified/

10 Gig Networking Primer
https://forums.freenas.org/index.php?resources/10-gig-networking-primer.42/

and this shows the difference SLOG can make when it is needed:

Testing the benefits of SLOG
https://forums.freenas.org/index.php?threads/testing-the-benefits-of-slog-using-a-ram-disk.56561

If you were hosting your virtual machines on the storage pool, it might make sense to add a SLOG, but it does not if you are using the storage pool for normal CIFS data. What kind of data are you trying to access quickly?
 

UnderSlepT

Cadet
Joined
Aug 28, 2018
Messages
5
Again thanks for your reply.

I see, so Allan Jude was only talking about IO/s with regard to vdev performance. Well, I started as a student with an Athlon 5350 + a single 3TB WD Red, and now I am working, so I want to plan my NAS to be fast today and in 5-6 years. I have a lot of ISOs, VM backups, photos, and game backups. A lot of the files are big, but I also have a ton of smaller files from older backups, programs, and drivers. Another issue is that Windows 10 stutters when it does large file transfers over the network - I mean the mouse doesn't react for seconds - if my NAS gets bogged down with the transfers. So my idea was to see whether it would work if I mounted the pool with the option that makes all writes synchronous, so every write to the pool would go through the SLOG - an Optane 900p, which can saturate a 10Gbit/s connection in both directions.
 

Ender117

Patron
Joined
Aug 20, 2018
Messages
219
So my idea was to see whether it would work if I mounted the pool with the option that makes all writes synchronous, so every write to the pool would go through the SLOG - an Optane 900p, which can saturate a 10Gbit/s connection in both directions.
Writes never go through the SLOG; the only exception is after an unclean shutdown, when the SLOG is replayed.
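
Very roughly, the sync write path looks like this (illustrative Python pseudocode, not actual ZFS internals):

    # Toy model of a sync write. The SLOG is written but never read in
    # normal operation; data reaches the pool from RAM.
    ram_txg, slog, pool = [], [], []

    def sync_write(record):
        ram_txg.append(record)  # data lands in the in-RAM transaction group
        slog.append(record)     # also logged, so the write can be ACKed fast

    def txg_flush():
        pool.extend(ram_txg)    # data goes to the pool FROM RAM...
        ram_txg.clear()
        slog.clear()            # ...and the log entries become obsolete

    def replay_after_crash():
        pool.extend(slog)       # the only time the SLOG is ever read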
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
so Allan Jude was only talking about IO/s with regard to vdev performance.
I don't know what the context was, and it sounds to me like you have a misunderstanding, which is why I pointed you at those resources.
I mounted the pool with the option that makes all writes synchronous, so every write to the pool would go through the SLOG - an Optane 900p, which can saturate a 10Gbit/s connection in both directions.
Except that is not how the SLOG works. The data is not written to the SLOG and then copied from the SLOG to disk.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
For the platform, I will probably use one of the newer Ryzen 5 quad-cores with X470 + 32GB ECC RAM.
Please don't buy a Ryzen system. There are a lot of problems with them on FreeNAS because they are pretty new, and they are certainly not working properly with VMs at present. It is better to buy a two- or three-year-old Intel-based platform. There is no need to be close to the cutting edge with FreeNAS; in fact, to ensure driver support, it is probably better to stay a few years back from the edge.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
You would do much better with a system board like this:

Supermicro X9SRL-F ATX Motherboard LGA2011 IPMI w/ Heat Sink & I/O Shield
https://www.ebay.com/itm/113216257183
Price: US $189.98

The board supports up to 512GB of RAM.

SAMSUNG 16GB PC3L-12800R DDR3-1600 ECC Registered 1.35V RDIMM
https://www.ebay.com/itm/302606459277
Price: US $44.95 x 4 = 64GB of RAM, with room for another 64GB without going to larger modules.

You can stay with the included fan or get this larger one. I use this model on two of my systems; it is only slightly louder than the Noctua cooler I have on my wife's computer.

Dynatron R27 Side Fan CPU Cooler 3U for Intel Socket LGA2011 (Narrow ILM)
https://www.ebay.com/itm/401284811045
Price: US $39.59

You have a lot of options for CPUs to go in this board, but I recently bought one of these for my FreeNAS:

Intel Xeon E5-2650 V2 2.6GHz 8-CORE 20MB Cache CPU PROCESSOR SR1A8
https://www.ebay.com/itm/222870480347
Price: US $99.99

Only it was $40 more when I bought it... It works great. Plenty of resources for all the things I am doing.

If you want more, you can get a 10 core model like this:

Intel Xeon Processor E5-2680V2, SR1A6 10-Core 2.80GHz 25MB LGA-2011
https://www.ebay.com/itm/113224096017
Price: US $169.00 - I have one of these also; I use it in my ESXi system, and it has plenty of resources to run all the VMs I want.

For the drive controller, I would suggest a SAS controller. There are 4 SCA ports on the system board in addition to the SATA ports, but you will be better off with SAS.

SAS PCI-E 3.0 HBA LSI 9207-8i P20 IT Mode for ZFS FreeNAS unRAID
https://www.ebay.com/itm/162862201664
Price: US $69.55

Set of cables to connect the SAS controller to SATA drives:

Lot of 2 Mini SAS to 4-SATA SFF-8087 Multi-Lane Forward Breakout Internal Cable
https://www.ebay.com/itm/371681252206
Price: US $12.99

I would suggest one of these for the boot drive. It will last as long as the server, if not longer:

Intel 320 SERIES SSD 40GB SATA 2 2.5 Hard Drive SSDSA2CT040G3
https://www.ebay.com/itm/183347748896
Price: US $18.99

Not much more than a USB flash drive, but so much more reliable.
 

UnderSlepT

Cadet
Joined
Aug 28, 2018
Messages
5
Again thanks for your reply.

I think Allan talked about why you should never use anything besides striped mirrored vdevs, and especially not RAIDZ1; it was on one of the BSDnow episodes. Sorry, my understanding of synchronous writes on ZFS was always that data is held in RAM until it gets written to the pool, after which the pool confirms that the data was written and you can send the next portion of data. And the SLOG worked in a way where you keep the data in memory but also write it to the SLOG; the pool then says it has written the data, and in the background, or every 5 s, the data gets flushed to the pool in a sequential, fast way while you keep writing new data.

The network adapters look good, but I have no experience with buying on eBay and paying all the fees (the seller is not from the EU); I will look into it. For the next month or two I don't have enough money for an MB + CPU + RAM upgrade, and I personally don't have good experience with Intel - on my i5-4670K, Linux didn't work for over a year, getting kernel panics on boot until they disabled the TSX instructions, if I remember correctly; the other was that Pentium 4 heater. I have a Ryzen 5 1600 that works better than any Intel ever did, so yeah. If I had to use an Intel CPU, I would probably go for an E3 - 8 PCIe Gen 3 lanes for the HBA, 8 PCIe Gen 3 lanes for 2x M.2 disks, and then 4 PCIe Gen 3 lanes shared through the PCH for the NIC + OS drive.

Well, for the price of the Intel 320 SSD I can buy a brand new Optane 16GB, and I would hope it can improve boot times over flash drives - it can be a problem when your VM host boots way faster than your storage array.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I think Allan talked about why you should never use anything besides striped mirrored vdevs, and especially not RAIDZ1; it was on one of the BSDnow episodes.
Two things. First, that is probably a matter of opinion. Second, it depends entirely on what your pool will be used for. You definitely need to do some more learning on the subject instead of relying on one statement from one person who may have been thinking of a particular use case.
In addition to the other references I already cited, here is some good reading on the subject:
https://www.delphix.com/blog/delphi...or-how-i-learned-stop-worrying-and-love-raidz

I am not advocating for RAIDZ2 over mirrors, but the choice between them should depend on the type of data and the data rate you need. It is absolutely unwise to simply choose mirrors because someone else thinks it is a good idea. In my FreeNAS system I have one pool made of two vdevs, each vdev being six drives in RAIDZ2; that pool can write data at about 1000MB/s. I have another pool made of a single vdev of four drives in RAIDZ1; that pool can write data at around 500MB/s. I also have a pool made up of eight mirror vdevs (16 drives), and based on the number of vdevs you would expect it to perform better, but it doesn't perform that much better, because the drives in those vdevs are old 500GB drives only capable of writing at around 100MB/s each; after overhead (there is always overhead), that works out to around 400MB/s to the pool through an iSCSI connection over my 10Gb network.
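
The back-of-the-envelope math behind those numbers looks roughly like this (Python sketch; the per-drive speeds and the overhead factor are rough assumptions, not measurements):

    OVERHEAD = 0.8  # assumed fraction left after system overhead

    def raidz_vdev_mbps(n_drives, parity, drive_mbps):
        # streaming writes scale with the data drives in the vdev
        return (n_drives - parity) * drive_mbps * OVERHEAD

    def mirror_pool_mbps(n_vdevs, drive_mbps):
        # each mirror vdev writes at roughly one drive's speed
        return n_vdevs * drive_mbps * OVERHEAD

    print(2 * raidz_vdev_mbps(6, 2, 150))  # 2x 6-drive RAIDZ2: ~960 MB/s
    print(raidz_vdev_mbps(4, 1, 200))      # 1x 4-drive RAIDZ1: ~480 MB/s
    print(mirror_pool_mbps(8, 100))        # 8 old-drive mirrors: ~640 MB/s
    # in practice that last pool only manages ~400 MB/s over iSCSI;
    # real overhead is always worse than the simple math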
Sorry, my understanding of synchronous writes on ZFS was always that data is held in RAM until it gets written to the pool, after which the pool confirms that the data was written and you can send the next portion of data. And the SLOG worked in a way where you keep the data in memory but also write it to the SLOG; the pool then says it has written the data, and in the background, or every 5 s, the data gets flushed to the pool in a sequential, fast way while you keep writing new data.
That can only work if the drives can keep up with the write flush. If the drives are not keeping up, you get a cycle effect: the system takes data fast until RAM is full, then takes data really slowly while it flushes the data it already has, then speeds up again until it fills the buffer. If you only ever write a little, and it fits in memory, you might be fine, but on a larger transfer you will get bogged down. Voice of experience - I have been using ZFS and FreeNAS since around 2011.
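
Here is a toy model of that cycle (Python, purely illustrative numbers):

    # The client writes at network speed until the RAM buffer fills,
    # then gets throttled to what the disks can actually flush.
    NET_MBPS = 1000   # roughly what a 10Gb link can deliver
    DISK_MBPS = 300   # assumed flush rate of the pool
    BUFFER_MB = 8000  # assumed RAM available for dirty data

    buffered = 0.0
    for second in range(30):
        intake = NET_MBPS if buffered < BUFFER_MB else DISK_MBPS
        buffered = max(0.0, buffered + intake - DISK_MBPS)
        print(f"t={second:2}s intake={intake:4} MB/s buffered={buffered:6.0f} MB")

Once the buffer fills, intake drops to disk speed and stays there for the rest of the transfer.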
I personally don't have good experience with Intel - on my i5-4670K, Linux didn't work for over a year
The system I suggested is the same hardware I am running FreeNAS on now. It works. It is also not an i5, and the 4670K is an overclocking chip that was probably very new when you purchased it. The system I suggested is actually several years old, and because it is older, it is more likely to be well supported. For FreeNAS, that is often an advantage of buying older hardware.
the other was that Pentium 4 heater.
Ancient history, and a horrible mistake on Intel's part. They are not perfect, but I didn't suggest hardware randomly. I suggested hardware that I have used; I know it works.
I have a Ryzen 5 1600 that works better than any Intel ever did, so yeah.
Your Ryzen is probably running some other operating system, not FreeNAS / FreeBSD. Several FreeNAS users have reported problems trying to get FreeNAS, particularly the virtualization, working on Ryzen chips. In addition, most Ryzen system boards are not server-grade boards; they lack features that are beneficial in a server while providing things like audio headers that serve no purpose.
If I had to use an Intel CPU, I would probably go for an E3
The big advantage of the Xeon E5 I suggested is registered DDR3 memory, which is much less expensive than DDR4. If you do a full price comparison, you may find it is cheaper to buy the E5 instead. I saved over $200 on my build, and it can be upgraded to a lot more total RAM.
Well, for the price of the Intel 320 SSD I can buy a brand new Optane 16GB, and I would hope it can improve boot times over flash drives
FreeNAS should stay running most of the time, and on the rare occasion that you need to boot, having an Optane drive instead of a normal SSD is not going to make much difference. I boot my FreeNAS from a mirrored pair of spinning disks. It doesn't matter, because I only ever need to boot if I have to shut down for some reason. It had been up for 20 days the last time I rebooted, and the only reason I rebooted then was that I made a setting change that would not take effect until I rebooted. Fast boot devices, and large ones, are wasted on FreeNAS; it only reads the boot device occasionally, and most transactions go to the storage array.
 

UnderSlepT

Cadet
Joined
Aug 28, 2018
Messages
5
Thanks for your reply.

My sources for information on ZFS vdev performance and resilver times were:
https://calomel.org/zfs_raid_speed_capacity.html
http://louwrentius.com/zfs-resilver-performance-of-various-raid-schemas.html

The big advantage of the Xeon E5 I suggested is registered DDR3 memory, which is much less expensive than DDR4. If you do a full price comparison, you may find it is cheaper to buy the E5 instead. I saved over $200 on my build, and it can be upgraded to a lot more total RAM.
I know that registered ECC is always a bit cheaper and lets you put way more memory into a system, but like I said earlier, I lack experience with eBay shopping and paying import taxes (my bank account history gets checked every 2 years by the Czech NSA equivalent for my security clearance). In the Czech Republic there is zero price difference between DDR4 and DDR3, so in the case where I have to buy memory here, I prefer the newer CPU because of its better power consumption and higher single-threaded performance for CIFS - Ivy Bridge vs. Skylake / Kaby Lake.

Your Ryzen is probably running some other operating system, not FreeNAS / FreeBSD. Several FreeNAS users have reported problems trying to get FreeNAS, particularly the virtualization, working on Ryzen chips. In addition, most Ryzen system boards are not server-grade boards; they lack features that are beneficial in a server while providing things like audio headers that serve no purpose.
True. If I had to use X470, the most suitable MB would be the ASRock X470 Taichi, which is about as expensive as an Intel LGA1151 Supermicro MB with IPMI. At the same time, I don't use virtualization on FreeNAS, as long as there are no giant problems with jails on Ryzen. Compared to my past NAS4Free experience with phpVirtualBox, FreeNAS bhyve virtualization is just a pain in the butt to use, and there were always problems, even on my Pentium G4600, while with phpVirtualBox and an Athlon 5350 I had a LibreNMS VM with 2.5 years of uptime and zero issues.

I should probably have added that the NAS will be near my PC, not in a server closet, so LGA 2011/2011-3 parts above 85W would add quite a bit of heat to the room (no AC).
 