My Journey into the world of FreeNAS (includes pictures of build)

Sprint
Explorer · Joined Mar 30, 2019 · Messages: 72
Hi all

So I've been lurking for a while now, doing lots of reading up, and some experimenting, as I was fairly new to FreeNAS when I started.

For the last 3 or 4 years, my storage has been entirely handled by my Synology DS1815+, with an offsite DS411+ii for backup. Superb boxes; yes, a little underpowered, but they did the job flawlessly. As my storage needs have grown, though, I have for the last year at least been thinking about how I was going to grow my storage to match (I am doing more and more video editing these days)... It looked like I was going to have to just stump up for a mid-four-figure Synology NAS to get the number of bays I wanted as well as the ability to go 10Gb and rack mount..... but....

I had a stroke of luck. I'm a systems administrator by day (have been for 3 years now, but came from a 9 year stint in Cisco networking), so I now look after a small VM environment. Our old platform finally got retired and replaced, and while in the data centre helping to de-rack it all for disposal, I mentioned that it was a shame to throw so much kit away as it still had life in it..... Well, they then said I could take whatever I wanted (with the exception of storage devices, these HAD to be destroyed). Naturally... I grabbed my screwdriver and got to work!

Here's what I came away with....
4x Xeon E5-2630 v4 (10c/20t)
2x Xeon E5-2478 (8c/16t)
4x Xeon X5675 (6c/12t)
16x 32GB 2400MHz ECC DDR4 DIMMs
and 72x 16GB 1333MHz ECC DDR3 DIMMs.

At the time I had no plans, so when they said 'fill your boots' I grabbed what I felt I might be able to make use of from a handful of the blades. They had been bought over time as our capacity needs grew, which is why there is such an age spread. But already I was thinking 'why buy a NAS? I might as well build one, I now have most of the parts'. I knew what FreeNAS was, I'd seen plenty of videos about it, but had always dismissed it, as buying heaps of RAM was the biggest off-putting cost for me... But with these parts, it suddenly seemed ludicrous not to make the most of it.

After some initial reading, I had originally thought I'd build three servers: a dual-CPU unit in a 1U or 2U chassis with minimal storage running ESXi to replace my existing VM server (an old 2nd-gen i5 desktop I'd re-purposed), and a second, single-CPU 4U machine running FreeNAS to act as my file server and iSCSI store over 10Gb... However, I quickly realised this was going to cost a bomb, as server motherboards for the CPUs I had weren't cheap. The dual-socket one I was looking at was over £300, so the prospect of buying two or three very quickly put me off the idea again!! (The third would have been for offsite backups, which I am only now starting to plan; I'm just trying to find the right cheap used server motherboard to work with at least one of the Xeons and the RAM I have spare.)

Anyway, this was a few months back, and after a little experimenting and a lot of reading, I found an article which talked about virtualising FreeNAS... I did a lot more reading, but suddenly the answer seemed clear...

So I ended up building a dual Xeon E5-2630 v4 server with 256GB of DDR4 RAM in an old re-purposed HTPC case (I have a 24-drive chassis on order, but it's taken nearly a month to arrive... any day now), to which I installed ESXi 6.7, as well as a pair of LSI 9207-8i's (I'll need a 3rd before I'm finished). These are passed through to the FreeNAS VM directly, so FreeNAS has direct access to the drives. When the machine is powered on, it boots into ESXi. Only the FreeNAS VM is visible at first; I boot that, and once it's up and all the storage and iSCSI services are running, the other VMs' storage becomes available and I fire those up. Genius! The only thing that doesn't work is CPU temp monitoring, but ESXi does this, so I suppose I can live with that. Anyway, I'm sure I found the article here, so thank you all for saving me from building 3 separate servers. Now I only need 2.

So yes, my journey continues. At the moment I only have 5x2TB Greens installed (spares I had lying around) and am using the box for iSCSI and a handful of non-important SMB shares. Once my chassis arrives, I'll start playing musical data: I'll shuffle data around on my Synology NAS so I can unmount the secondary array of 5x4TB Reds (I'll order a 6th) and install those into FreeNAS (RAIDZ2). Then I'll shuffle more data around to free up the 7x8TB Reds (again, planning to order an 8th drive and deploy in RAIDZ2) and get those moved into the FreeNAS box too. Still trying to decide what to do with the existing pair of 512GB 860 Pro SSDs I used as caches in the Synology.... I have a pair of 256GB SanDisk SSDs I'm using as L2ARC... I had thought about using the 860 Pros as a SLOG, but the more I read, the less I think I'll get any benefit from them? Eventually I'll put a 10Gb NIC in this box as well as one in my desktop, and I can then video edit directly off the NAS. I still have 8 bays in the chassis available for future drives, so I'm nicely future-proofed!
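(For anyone curious, the RAIDZ2 pools described above boil down to something like the following at the command line. This is only a sketch: `tank` and the `da*` device names are placeholders, and on FreeNAS you'd normally build the pool through the web UI so the GUI's database stays in sync.)

```shell
# Placeholder pool/device names; list the real devices with: camcontrol devlist
# RAIDZ2 keeps the data intact through any two simultaneous drive failures.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# Confirm the vdev layout and pool health
zpool status tank
```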

Thought I'd include a handful of pictures too, as everyone loves pictures :)

Just a small selection of the stuff I saved from the skip.
IMG_1815.JPG


The motherboard I settled on... using that right-hand PCIe slot is going to be a challenge!!

IMG_1766.JPG


I tried test fitting the LSI card... no chance...

IMG_1819.JPG


Coolers on... hard to find coolers that would clear one another as the sockets are SO close...

IMG_1784.JPG


Once fired up, I threw together a VM just to run Cinebench for the lols! Impressive numbers for "old" hardware!

12C2611C-E75E-4957-9396-D9AEA8BAF06F.jpg


And this never gets old!...
IMG_1790.JPG


CE1E5A49-027D-408E-90EA-C1939571CDEF.jpg


It's a bit of a mess at the moment, just waiting on the chassis, then I can do it properly! Can't wait to get it all bolted in!

IMG_1857.JPG


This is what really impressed me... this is a disk benchmark from a VM running on my old i5 server with the C drive on an iSCSI volume on my Synology NAS over gigabit... (no idea why it keeps loading in sideways)...

C197E21E-A81A-4871-967E-F141907672B7.jpg


but now, as the iSCSI server and VMs are on the same internal vSwitch and traffic never leaves the box... these are the new numbers... Mind... BLOWN.... (again, not sure why it's loading sideways)
D8575171-0E37-4082-951D-08C4489B2CCF.jpg


Confused as to why the writes are faster than the reads... Something else to read up on :)

So yes, once my case arrives, if people are interested, I'll document it all and provide pictures. Massive thanks to all for the information I've already gleaned... thought it was about time I said hello, and thank you :)

Sprint
 

pro lamer
Guru · Joined Feb 16, 2018 · Messages: 626
The Dual socket one i was looking at was over £300
A brand new one, I guess... Have you checked second-hand ones? Ivy Bridge/Sandy Bridge CPUs seem promising too...

Sent from my phone
 

Sprint
Yeah, new, but I needed an ATX board, which ruled out E-ATX, CEB and all the other larger form factors. The only board I could find that would fit was still so new that I couldn't find it used, so I ended up biting the bullet.

For my backup server, though, this is what I'm struggling with... a single-socket board (new) is about £270, and boards for the older sockets (for the other two types of CPUs I have) are almost as much!! So plan B is an older board-and-CPU combo, but I'm planning to host it at a friend's, so IPMI is a must, so again... options very quickly start to dwindle.. :(
 
Joined Mar 25, 2018 · Messages: 9
Loving the photos and the progress!
 

Sprint
Server chassis arrives tomorrow... mini-SAS cables and fans arrived today... £172 worth of fans!!!! #donttellmywife

Saturday's going to be fun!!
 

Attachments

  • CADCDEBE-2B4C-4ACE-A1D7-773759FC5BE6.jpeg
  • 081235F8-7872-4C23-A9D4-2B419E061AF9.jpeg

Sprint
So the parts finally arrived. There was a bit of an issue with the company that supplied the case, but that's boring so I won't bore you with it... but here it is! Transplanted my build from my HTPC case, installed all the new fans, and installed the first batch of additional drives (6x4TB Reds)..

Now starts the process of moving data off the 6x8TB array in my Synology NAS to other locations so I can transplant those drives into this chassis. The question I have is this: my Synology NAS has 2x 512GB Samsung 860 Pros which were used as a cache. I already have a pair of sacrificial 240GB desktop SanDisk SSDs configured as L2ARC, but am I right in thinking that for use as a home storage server, using them as a SLOG would be total overkill? If so, what's the best way to make use of them? Thoughts?
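(Side note on the L2ARC mentioned above: one nice property is that cache devices are expendable, which is why "sacrificial" desktop SSDs are fine there. A sketch, with `tank` and the `ada*` names as placeholders:)

```shell
# L2ARC devices hold only read-cache copies of pool data, so adding
# or removing them never puts the pool itself at risk.
zpool add tank cache ada2 ada3
zpool remove tank ada2
```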

In the meantime, here are some pretty pictures :)

IMG_2286.JPG
IMG_2287.JPG
IMG_2289.JPG
IMG_2290.JPG
IMG_2291.JPG
IMG_2292.JPG
IMG_2293.JPG
IMG_2294.JPG
IMG_2296.JPG
IMG_2297.JPG
 


Sprint
Then just get rid of the SLOG as it isn't doing what it's supposed to. It'll be much faster too.
I haven't got a SLOG; that's what I'm trying to decide: what's best to do with these two spare SSDs. I suspected using them as a SLOG would be a waste, even once I get my 10Gb NIC and connect to my editing machine.
 

Sprint
A SLOG can only ever slow down your NAS.
How so? My understanding (and I'm new to this, so please educate me if I'm wrong) was that a SLOG is a write cache which stores writes to be flushed to the pool every 5 or 10 seconds, meaning you don't have to wait for the pool to perform synchronous writes? (For my workload, a single person editing off the NAS, this might be overkill and offer no benefit, but at a larger scale it could.)... How could that slow the NAS down?

Sprint :)
 

jgreco
Resident Grinch · Joined May 29, 2011 · Messages: 18,681
How so? My understanding (and I'm new to this, so please educate me if I'm wrong) was that a SLOG is a write cache which stores writes to be flushed to the pool every 5 or 10 seconds, meaning you don't have to wait for the pool to perform synchronous writes? (For my workload, a single person editing off the NAS, this might be overkill and offer no benefit, but at a larger scale it could.)... How could that slow the NAS down?

Sprint :)

Your understanding is wrong, sorry. POSIX sync write requires that the data be written to stable storage.

https://www.ixsystems.com/community/threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/

See especially

Laaaaaaaaatency. Low is better.

in that link. In every single case, turning off sync writes is faster than having sync writes, so adding a SLOG (ok I guess if we're mincing words: *and* you use it) is always a speed killer over the alternative of not bothering with the sync writes.

The risk is that if you do not use sync writes, it's possible to lose data when your power fails, or when your NAS box crashes. FreeNAS isn't super-prone to crashing and if you are on a UPS you may have remediated the power failure issue a different way.
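You can feel the cost of sync semantics even outside ZFS. Here's a rough sketch using GNU dd on a throwaway temp file (Linux assumed; on FreeBSD, conv=fsync is the nearer equivalent): the second run forces every 4KiB write to reach stable storage before the next one is issued, which is exactly the wait a sync write imposes.

```shell
# 200 x 4KiB writes: first buffered, then with O_DSYNC so each write()
# waits for the device to acknowledge the data (GNU dd assumed).
f=$(mktemp)
time dd if=/dev/zero of="$f" bs=4k count=200 status=none
time dd if=/dev/zero of="$f" bs=4k count=200 oflag=dsync status=none
rm -f "$f"
```

On a spinning disk the dsync run is dramatically slower; that gap is the latency a fast SLOG device is meant to shrink when you genuinely need the sync guarantee.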
 
Joined Oct 18, 2018 · Messages: 969
@jgreco Am I missing something, is the above misleading, or did I read the post you linked incorrectly? Is it not reasonable to expect that some folks will have workloads where they need and want to keep sync writes, and thus may see performance increases with a SLOG?

A SLOG can only ever slow down your NAS.
Are you including in that statement sync writes with and without a SLOG? I'm having trouble reconciling this with the following.

the pool is often busy and may be limited by seek speeds, so a ZIL stored in the pool is typically pretty slow. ZFS can honor the sync write request, but it'll get slower and slower as the pool activity increases.

The solution to this is to move the writes to something faster. This is called creating a Separate intent LOG, or SLOG. . . . ZIL traffic on the pool vdevs substantially impacts pool performance in a negative manner, so if a large amount of sync writes are expected, a separate SLOG device of some sort is the way to go.

The SLOG is all about latency. The "sync" part of "sync write" means that the client is requesting that the current data block be confirmed as written to disk before the write() system call returns to the client. Without sync writes, a client is free to just stack up a bunch of write requests and then they can send over a slowish channel, and they arrive when they can.

Is the suggestion that a SLOG only makes the client get a response faster, but the NAS still has to flush from the ZIL to final storage, and that by having a SLOG this process entails more work and is slower than writing the transaction groups to the on-pool ZIL before writing to the pool for the final time?
 

jgreco
@jgreco Am I missing something, is the above misleading, or did I read the post you linked incorrectly? Is it not reasonable to expect that some folks will have workloads where they have and want to keep sync writes and thus may see performance increases with a SLOG?

Yes. But it is faster 100% of the time to disable sync writes.

My point with you was that your SLOG device isn't providing the guarantee that a SLOG is supposed to - which is that sync writes are committed.

Imagine you get in the front seat of a modern car. The car is designed to try to protect you in the event of a crash, with seatbelts and airbags (and crumple zones and all that). You decide that the airbags and crumple zones are sufficient and so you don't buckle up with seatbelts. One sad day you hit a very solid wall, and you are forcefully ejected through the windscreen and into the wall. You survive, and in your hospital bed, you wonder, "but there were airbags!".

The fallacy here is that the airbags are sufficient to save you on their own. They are part of an engineered system that included your seat belt as primary restraint. When your unbelted torso kept moving forward during the crash, there was substantially more weight there than the airbag was designed to cushion, and that didn't perform as expected.

In that same manner, the sync write mechanism is a very specific cooperative effort by your hardware. If you do not have the correct hardware, you are fooling yourself if you think that sync writes are working correctly, and you might as well just turn them off and enjoy much faster speeds.

So the point is you have several choices: do it right, with a competent SLOG (SSD with power loss protection, or whatever), or skip the SLOG and use the in-pool ZIL, or just don't bother, in which case things will go a lot faster.
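(For anyone mapping those three choices onto commands, they look roughly like this; `tank`, the `tank/vmstore` dataset name and the `ada*` devices are placeholders:)

```shell
# 1. Do it right: mirrored, power-loss-protected SSDs as a SLOG
zpool add tank log mirror ada0 ada1

# 2. Skip the SLOG: sync writes land in the in-pool ZIL (the default)
zfs set sync=standard tank/vmstore

# 3. Don't bother: fastest, but the sync write guarantee is gone
zfs set sync=disabled tank/vmstore
```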
 
Joined Oct 18, 2018 · Messages: 969
My point with you was that your SLOG device isn't providing the guarantee that a SLOG is supposed to - which is that sync writes are committed.

Well, with the OP you mean. I just chimed in because the conversation confused me based on my understanding of a SLOG, the ZIL, and transaction groups. Specifically the bit about a SLOG only ever slowing down your NAS. :)

In that same manner, the sync write mechanism is a very specific cooperative effort by your hardware. If you do not have the correct hardware, you are fooling yourself if you think that sync writes are working correctly, and you might as well just turn them off and enjoy much faster speeds.

So the point is you have several choices: do it right, with a competent SLOG (SSD with power loss protection, or whatever), or skip the SLOG and use the in-pool ZIL, or just don't bother, in which case things will go a lot faster.

Thanks for clarifying. This was my understanding as well.

Would it be fair to say that a more accurate statement of

A SLOG can only ever slow down your NAS.

is that a SLOG can only ever slow down your NAS compared to not using sync writes at all. And if you need/want to use sync writes, a SLOG can improve performance, but it should be done in such a way as not to defeat the whole purpose of the ZIL in the first place (which you would do by, for example, using a SLOG device without PLP).

Thanks for clarifying though. Perhaps I am alone in this but the exchange above was a bit confusing. This clears it up.
 

Chris Moore
Hall of Famer · Joined May 2, 2015 · Messages: 10,080
Just a small selection of the stuff I saved from the skip.
Nice haul. I am amazed at the things people throw away.
 

jgreco
Well, with the OP you mean. I just chimed in because the conversation confused me based on my understanding of a SLOG, the ZIL, and transaction groups. Specifically the bit about a SLOG only ever slowing down your NAS.

That'll teach you to jump into someone else's conversation! ;-) ;-)
 

Sprint
Nice haul. I am amazed at the things people throw away.

Staggering what companies deem to be "waste"... Not sure what I'm going to do with all this RAM... maybe offer it up to anyone who can make use of it, as I certainly can't!

That'll teach you to jump into someone else's conversation! ;-) ;-)
lol, he's welcome to. I had to re-read your post a few times before I understood; your subsequent clarification will hopefully aid others reading this later.
 
Joined Oct 18, 2018 · Messages: 969
maybe offer it up to anyone who can make use of it, as I certainly can't!
That's a great idea! I love cheap used hardware. I bought some used stuff offline and use it as my backup machine.

lol, he's welcome to. I had to re-read your post a few times before I understood; your subsequent clarification will hopefully aid others reading this later.
Glad it was helpful to you. :)

I won't lie, I am a bit jealous that you picked up all that hardware! I recently came across a hallway filled with old servers (~50) but only took 2 of them home. Found out later that much of the hardware was still good, even if only for backup servers. It ended up getting recycled. Such a shame.
 