Final check - new value ECC build with Pentium G4560

Unimatrix52

Cadet
Joined
Jun 15, 2020
Messages
3
Dear community

I am looking for a final review and feedback on my new FreeNAS build. It will replace my old Synology DS416, which will be demoted to a backup unit.

Usage:
  • File storage & win/mac backup for family
  • Storage of many thousands of raw pictures (~30 MB each)
  • VM/container/Kubernetes storage for my main Proxmox host (Dell T130 with an E3-1240), which will be running all services.
So this will be a pure storage unit, and I would like to keep it low cost without sacrificing quality or reliability. In the future I might look into 10GbE, so I just need enough PCIe slots for potential future expansion.

I did a fair bit of reading & preparation and hope I'm not wasting any of your time.

Thanks!!

Intended build:
  • Mainboard: Supermicro X11SSL-F
  • CPU: Intel Pentium G4580
  • RAM: 2x Samsung ECC DDR4 8GB M391A1K43BB2-CTD, from the Supermicro compatibility list (potentially upgrading to 32GB later; can't find compatible 16GB sticks in Switzerland; the MoBo has 4 slots)
  • Storage: 4x WD Red 8TB (starting with 4 drives as 2 mirrors; expanding mirror by mirror up to 10 drives)
  • Boot drive: Crucial BX500 120GB
  • Case: Fractal Define R5 (linking the R6)
  • HBA: Dell H310 flashed to IT mode (link to the seller)
 

lightwave

Explorer
Joined
Jun 14, 2018
Messages
68
Looks like a decent setup. Your use case is very similar to mine, so you might find some inspiration by looking at some of the threads that I started on this forum last summer.

One minor thought is that you might want to consider the A2SDi-8C-HLN4F as an alternative to the X11SSL-F. The board has an integrated Atom C3758 CPU which, to my understanding, is quicker than the G4580 for most if not all workloads. The board also has 12 on-board SATA ports, which should eliminate your need for a separate HBA. If you can live with only one PCIe slot and without the option to upgrade the CPU separately from the mainboard, then the Atom board will probably be a better and cheaper option.

On another note, I was recommended the Intel 545s or Kingston UV500 as a boot device by others on this forum. The Crucial is probably fine as well; I have no experience with it, nor have I heard any comments about it here or elsewhere.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
With thousands of files, keeping metadata available will be key. RAM compatibility across the different vendors offering ECC is generally good. See whether you can’t get 16GiB ECC sticks from Micron (Crucial) or similar in Switzerland. 32GiB will help, and you might even want 64GiB to keep metadata in ARC.

You can always look at a metadata-only L2ARC as well, if it comes to that; but focus on memory for ARC first.
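For reference, that switch is just a standard ZFS dataset property; a minimal sketch in Python, with "tank" as a placeholder pool name:

    import subprocess

    # Restrict L2ARC to caching metadata only; secondarycache is the standard
    # ZFS property (valid values: all | none | metadata). "tank" is a placeholder.
    subprocess.run(["zfs", "set", "secondarycache=metadata", "tank"], check=True)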
 

lightwave

Explorer
Joined
Jun 14, 2018
Messages
68
With thousands of files, keeping metadata available will be key.

Makes sense, but shouldn't 16 GiB suffice as long as we are "only" talking about thousands of files? The reason I ask is that I am currently using my FreeNAS box to store around 200 000 family photos and videos (albeit not in raw format) that I access through a (home-built) media organiser web application. I've done my fair share of processing over the entire photo data set, and the continuous scrolling feature in the app loads large numbers of images per second to provide a smooth user experience. So far, I have not experienced any performance issues that would indicate there is too little memory in the FreeNAS server to handle this number of files and metadata.

(I'm not questioning your expertise... just curious and trying to understand)

That said, I would still go for 16GiB sticks to make upgrades to 32 and 64 GiB more smooth...
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
(I'm not questioning your expertise... just curious and trying to understand)

You should be questioning my expertise. The little “expert” tag the forum gives me means that I post here more than is reasonable. It’s literally something given automatically for post count, not an indicator of expertise.

With that excursion aside: as long as your use case is performant, there's no need to change anything. I'd be surprised if all the metadata fit, though I'm not ruling it out - and as long as you can load it fast enough for your app, you are golden.

If you are curious, arc_summary.py from CLI will tell you how many ARC misses you have, and the percentage of data vs metadata being kept in ARC.
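If you want the raw counters behind that output, the same numbers are exposed as sysctls on FreeBSD. A rough sketch (assuming the usual kstat.zfs.misc.arcstats names, which are what arc_summary.py reads):

    import subprocess

    def arcstat(name):
        # "sysctl -n" prints only the value of the given OID
        return int(subprocess.check_output(
            ["sysctl", "-n", "kstat.zfs.misc.arcstats." + name]))

    hits = arcstat("hits")
    misses = arcstat("misses")
    size = arcstat("size")            # total ARC size, in bytes
    meta = arcstat("arc_meta_used")   # bytes of ARC holding metadata

    print(f"ARC hit ratio: {100.0 * hits / (hits + misses):.1f}%")
    print(f"Metadata share of ARC: {100.0 * meta / size:.1f}%")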

We had a thread earlier this year where someone had 1.7 million files, that’s over there: https://www.ixsystems.com/community/threads/system-slowing-down.84707/#post-585918
 

lightwave

Explorer
Joined
Jun 14, 2018
Messages
68
You should be questioning my expertise. The little “expert” tag the forum gives me means that I post here more than is reasonable. It’s literally something given automatically for post count, not an indicator of expertise.

I really appreciate your approach :cool: Makes for a really interesting discussion. On a separate note, I wonder how long the forum will continue to afford me the "newbie" tag ...

With that excursion aside: as long as your use case is performant, there's no need to change anything. I'd be surprised if all the metadata fit, though I'm not ruling it out - and as long as you can load it fast enough for your app, you are golden.

I'm curious how much space each metadata entry actually takes up in the ARC. I would have expected the metadata for a single file to be relatively small. Let's assume that metadata for one file takes up 10 KB of memory, including overhead for data structures (e.g. search trees/hash sets or whatever the ARC is implemented with). In that case, a set of 100 000 files would "only" take up 1 GB of memory. Based on this rough "guesstimate", I would expect 16 GB of RAM to be sufficient for "thousands" of files and a pool size of 8 TB. But I may be missing some key point here ...

I'm also curious why one would need to keep all metadata of the entire pool in the ARC. Unless your use case requires blisteringly fast access to any file at any point in time, in a uniformly random fashion, I would have expected pre-fetching and loading metadata from disk rather than from cache to be acceptable.

If you are curious, arc_summary.py from CLI will tell you how many ARC misses you have, and the percentage of data vs metadata being kept in ARC.

Thanks for sharing the command. Really interesting output. My stats show that I have 95.5% cache hits on "Demand Metadata", which doesn't sound too shabby. Not sure what the "gold standard" is, though. I suppose it indicates that the metadata of the entire pool is not loaded into memory. I wasn't able to find the percentage of data vs metadata kept in ARC in the output, though. Do you know what I should be looking for?
 

lightwave

Explorer
Joined
Jun 14, 2018
Messages
68
We had a thread earlier this year where someone had 1.7 million files, that’s over there: https://www.ixsystems.com/community/threads/system-slowing-down.84707/#post-585918

Just skimmed through the thread. Interesting stuff. Thanks for sharing.

“If it's helpful for anyone, here are the estimates I used for sizing some metadata vdevs. Metadata is roughly the sum of:

a) 1 GB per 100k multi-record files.
b) 1 GB per 1M single-record files.
c) 1 GB per 1 TB (recordsize=128k) or 10 TB (recordsize=1M) of data.
d) 5 GB of DDT tables per 60 GB (recordsize=8k), 1 TB (recordsize=128k) or 10 TB (recordsize=1M) of data, if dedup is enabled.
e) plus any blocks from special_small_blocks.”

Seems my guess of 10 KB per file metadata entry was spot on (assuming multi-record files). Lucky guess.
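For fun, here are those quoted rules as a quick calculator (my own rough sketch; the numbers are estimates only, and I'm ignoring the dedup and special_small_blocks terms):

    # Back-of-the-envelope metadata sizing from the rules of thumb quoted above.
    def metadata_estimate_gb(multi_record_files=0, single_record_files=0,
                             data_tb=0.0, recordsize_1m=False):
        gb = multi_record_files / 100_000        # ~1 GB per 100k multi-record files
        gb += single_record_files / 1_000_000    # ~1 GB per 1M single-record files
        gb += data_tb / (10 if recordsize_1m else 1)  # ~1 GB per 1 TB (128k) or 10 TB (1M)
        return gb

    # The 200 000 multi-record files and 8 TB pool from my guesstimate above:
    print(f"~{metadata_estimate_gb(multi_record_files=200_000, data_tb=8):.0f} GB of metadata")

That lands at roughly 10 GB for that example, which would explain why 16 GiB works but doesn't leave the ARC much room for data.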
 

lightwave

Explorer
Joined
Jun 14, 2018
Messages
68
We may have strayed somewhat off topic. Sorry @abmurksi
 

Unimatrix52

Cadet
Joined
Jun 15, 2020
Messages
3
No worries - it's an interesting topic, and I learned something new today :smile:.
Thank you @lightwave for the comment regarding the Atom, something I hadn't considered. Unfortunately, I just got the X11SSL-F MoBo yesterday, but one of the RAM slots is defective and not recognizing the stick (swapped them around, same issue), so I'll have to RMA it.

Also, thank you @Yorick for the heads-up regarding RAM sizing and compatibility. I also found memory.net; they ship worldwide and seem to have a broad selection of RAM, which I might consider if I can't find a tech shop that allows me to return the RAM if it turns out not to be compatible.

I'll start with the 16 gig of RAM and see how it goes. I might upgrade to 32 or 48GB later.

A bit off topic now, but I read everywhere not to mix different RAM sizes, without any clear explanation as to why (my google-fu might be failing me). My understanding is that you should not mix different RAM sticks within the same channel if you want to profit from the dual-channel speed boost, but that there should be no issue having different sticks in different channels (2x8GB and 2x16GB) as long as the sticks within each channel are the same?

So 48GB might be an option. Or I'll need to sell the two 8GB sticks later and upgrade to a full 64GB.
 

pschatz100

Guru
Joined
Mar 30, 2014
Messages
1,184
I'll start with the 16 gig of RAM and see how it goes. I might upgrade to 32 or 48GB later.

A bit off topic now, but I read everywhere not to mix different RAM sizes, without any clear explanation as to why (my google-fu might be failing me). My understanding is that you should not mix different RAM sticks within the same channel if you want to profit from the dual-channel speed boost, but that there should be no issue having different sticks in different channels (2x8GB and 2x16GB) as long as the sticks within each channel are the same?

So 48GB might be an option. Or I'll need to sell the two 8GB sticks later and upgrade to a full 64GB.
Dual-channel systems will perform a little better if the memory sticks in each channel have the same performance characteristics. The system will still work just fine if the memory sticks are not matched - but common wisdom says that it is best to match the sticks in each channel.

There is no reason why you cannot start with the 8GB memory sticks you have. Make certain they are installed in the paired slots your motherboard manual recommends for dual-channel operation. After the machine is up and running, you can monitor your swap space. If swap is being used, then you will know that more memory would be helpful. If the system is not using the swap space, then you have enough memory.
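If you'd rather script that check than eyeball it, here is a rough sketch using FreeBSD's swapinfo (assuming its usual column layout):

    import subprocess

    # Report swap usage per device; "swapinfo -k" prints
    # "Device  1K-blocks  Used  Avail  Capacity" columns.
    out = subprocess.check_output(["swapinfo", "-k"], text=True).splitlines()
    for line in out[1:]:  # skip the header row
        device, total_k, used_k, avail_k, capacity = line.split()[:5]
        print(f"{device}: {used_k} KiB used of {total_k} KiB ({capacity})")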

Down the road, you can add another pair of memory sticks if you wish. They can be whatever capacity your motherboard supports, depending upon how much memory you want to end up with.
 

jolness1

Dabbler
Joined
May 21, 2020
Messages
29
Just built a server in the R5 without a window. With 3 intake fans (2 in front, one on the bottom between the PSU and HDD cage), one exhaust fan in the back, and a Noctua NH-U12DX i4 on each of my Xeon 2699v3 CPUs, thermals for the CPUs are excellent - only about 50-55°C running Prime95 on both CPUs. Drive temps haven't gone over 34°C even with all 7 drives running badblocks. The system is quiet and looks inconspicuous. Definitely happy with it, especially since I got a good deal on it.
On the HDDs: why not wait for external WD drives to go on sale and shuck them? If I remember right, the 8TB goes on sale for $139, which is a steal. I ended up getting some Ultrastar HC520 12TB for $180/ea, or else I would have done this. They may need some Kapton tape over the 3.3V pin, depending on whether the PSU supports the power-disable feature on those drives or not.

One question for you though: why not run an older Xeon (X10-series Supermicro boards can be had cheaply, especially single-socket) and have a nice upgrade path if needs change (they support v3 and v4 Xeons)? You could get a 4- or 6-core CPU on eBay for similar pricing to the Pentium. I was uncomfortable buying used HW at first, but after validating the hardware I don't worry about it. I'd want to validate new components anyway, so it isn't like it's going to add time to deployment.

Just some thoughts. It looks like a decent build, but you may be able to get better value out of older server parts, plus an upgrade path to more cores (the E5-2600 v3 line goes up to 18 cores, and 22 for v4).

Also, get a good power supply; reliable, clean power is hugely important, especially if you want to help avoid data loss (in addition to a backup system).

Have fun with it!
 

Unimatrix52

Cadet
Joined
Jun 15, 2020
Messages
3
Many thanks for your input.
Sounds like you've got an amazing system there with those temps!! How did you set up your HDD cages?

I now use two Noctua NF-A14 PWM in the front providing cool air to the HDDs, plus one old Arctic 12cm fan on the bottom next to the PSU as a third intake, and one 12cm Arctic fan as rear exhaust.
Unfortunately, I was not yet able to fully assemble the system, as the MoBo has an issue with one RAM slot.

I agree that using an older series provides more bang for the buck, but unfortunately I was unable to find any good deals here in Switzerland. eBay and other local auction platforms only very rarely have used server gear, and importing from the US or Germany/UK is often prohibitively expensive.
I won't need many cores, as I want to separate storage from compute; all VMs are running on a Dell T130 with an E3-1240v6 and 64GB RAM.
Backups will be stored on a Synology DS416 with 16TB usable for the most important data.

Regarding the HDDs: I just shucked my first WD Elements and needed to use some tape to cover the 3.3V pin, but it runs perfectly and appears to be an HGST He10 8TB - nice! :smile: I'll start with 3x official WD Red 8TB, the shucked HGST 8TB, and two Seagate IronWolf 8TB I just got a great deal on; I'm thinking of a 3-vdev mirror configuration, always pairing two different brands. Mirrors because of the ability to easily expand capacity down the road, as I do not have the funds to go for the full 10-disk setup right now. I will regularly check for deals on those externals and might expand with two 12TBs later to partially mitigate the capacity loss from the mirrors. I will need to see how I'll handle the uneven data distribution across the vdevs after expanding with new disks (I might just need to copy some media around and delete the originals).

Also, I fully agree on the PSU. I use a be quiet! Straight Power 11 450W, and the system is also attached to my APC UPS.

I look forward to tinkering with and testing the system to the fullest.
 

lightwave

Explorer
Joined
Jun 14, 2018
Messages
68
Unfortunately, I just got the X11SSL-F MoBo yesterday, but one of the RAM slots is defective and not recognizing the stick (swapped them around, same issue), so I'll have to RMA it.

Sorry to hear that you got a defective board. I hope the RMA process will be quick and painless. I'm sure you're going to be really happy with the board once you get it.

I agree that using an older series provides more bang for the buck, but unfortunately I was unable to find any good deals here in Switzerland. eBay and other local auction platforms only very rarely have used server gear, and importing from the US or Germany/UK is often prohibitively expensive.

Sadly, second-hand server-grade hardware is nearly non-existent in Europe. When something does turn up, it is rarely a good deal (and too often there is something wrong with it).
 

jolness1

Dabbler
Joined
May 21, 2020
Messages
29
Many thanks for your input.
Sounds like you've got an amazing system there with those temps!! How did you set up your HDD cages?

I now use two Noctua NF-A14 PWM in the front providing cool air to the HDDs, plus one old Arctic 12cm fan on the bottom next to the PSU as a third intake, and one 12cm Arctic fan as rear exhaust.
Unfortunately, I was not yet able to fully assemble the system, as the MoBo has an issue with one RAM slot.

I agree that using an older series provides more bang for the buck, but unfortunately I was unable to find any good deals here in Switzerland. eBay and other local auction platforms only very rarely have used server gear, and importing from the US or Germany/UK is often prohibitively expensive.
I won't need many cores, as I want to separate storage from compute; all VMs are running on a Dell T130 with an E3-1240v6 and 64GB RAM.
Backups will be stored on a Synology DS416 with 16TB usable for the most important data.

Regarding the HDDs: I just shucked my first WD Elements and needed to use some tape to cover the 3.3V pin, but it runs perfectly and appears to be an HGST He10 8TB - nice! :smile: I'll start with 3x official WD Red 8TB, the shucked HGST 8TB, and two Seagate IronWolf 8TB I just got a great deal on; I'm thinking of a 3-vdev mirror configuration, always pairing two different brands. Mirrors because of the ability to easily expand capacity down the road, as I do not have the funds to go for the full 10-disk setup right now. I will regularly check for deals on those externals and might expand with two 12TBs later to partially mitigate the capacity loss from the mirrors. I will need to see how I'll handle the uneven data distribution across the vdevs after expanding with new disks (I might just need to copy some media around and delete the originals).

Also, I fully agree on the PSU. I use a be quiet! Straight Power 11 450W, and the system is also attached to my APC UPS.

I look forward to tinkering with and testing the system to the fullest.
I left the drive cages in the stock configuration, though I don't have a drive in the topmost slot. I have two Noctua 140mm fans (static-pressure version) pushing air; they are set to max RPM and it's still plenty quiet for me. (I have a four-year-old, so I am never dealing with total silence anyway.)

Yes, it seems we are fortunate in this regard in the US - tons of cheap server HW available. But you should be just fine for your usage, and the system can always be upgraded a bit if needs change. I am running all my stuff on a single box. Currently I am just doing FreeNAS, but I am planning to buy a Quadro P2000 (or maybe repurpose a GTX 1080 with the stream limit unlocked, rather than the artificially imposed 2-stream cap) for hardware transcoding in Plex on a Windows/Linux VM. At that point, I will migrate my stuff to ESXi and pass through my 9300-8i for FreeNAS. I know there is a strong argument against doing so, but I want FreeNAS as my storage platform and need hardware Plex streams, which FreeNAS doesn't support for now (stability has its price, namely slow addition of features on BSD).

I have a PC Power and Cooling 750W that has served me well, and I have a UPS on order. I was going to get something smaller, with just enough runtime for an orderly shutdown and for writes to finish (I am essentially the only user, so that's not too much to ask for the writes), but I wasn't sure how to guarantee an orderly shutdown with all the VMs running through ESXi. So I went with something much larger: I should have lots of runtime, and I will be able to VPN in even when away from home and shut everything down manually.

Cheers from Montana
 