First build: All flash 8x8TB RAIDz2 (Samsung 870 QVO 8TB), any red flags?

Z0eff

Dabbler
Joined
Oct 21, 2023
Messages
17
Hello! First post. :smile:
Basically I'd love to get a sanity check on everything. I don't want to make a mistake and be unable to correct it later.


Some quick background:
After having used a Synology DS410 and DS1812+ for my personal NAS needs for over a decade, I crave something faster and more upgradable. When looking at my options back then I decided against FreeNAS because I didn't trust myself not to mess up, so a Synology NAS with the software already loaded was perfect. Having added/replaced many drives over the years I've gone from 4x3TB RAID5 to 14x4TB RAID6, all spinning rust, and it's almost full again. Backups are sporadic and incomplete, with me picking and choosing what is important enough to have on both Synology NAS boxes, burn to a Blu-ray disc, save on cloud storage, etc.

It's almost always powered off simply because the noise and power requirements are annoying, which means I don't end up using it that often. When I want to grab something stored on it I often put it on the backburner because I can't be arsed to turn it on and wait 5 minutes, since I'll be doing something else by that time. Also, rebuilding the array after a drive fails takes literally 5 days, and because it's in my bedroom/office, having 14 drives constantly spinning is not only noisy but also uses a ton of electricity. After having to do this yet again last month I've decided to start over and, while I'm at it, do this properly with TrueNAS, having since learned a lot from all the community resources (thank you for writing them!).

My use case:
Long-term personal storage for mostly large files, though sizes can vary a lot. Very light workload, write once read many, 1 or 2 users. Needs to be quiet. Accessed via SMB, which is apparently single-threaded and likes high IPC for fast transfers, so I went for an Alder Lake chip. My LAN setup uses a 2.5GbE switch with Cat6 cabling to 2 PCs with 2.5GbE. I want to use this for at least a decade, so reliability is important, but I also want plenty of performance and upgrade room for the future. Home Assistant might be moved from a Pi 4 to this if that app ends up being stable enough. Jellyfin looks interesting but is probably not needed. I might end up using the Minecraft server app for a community I'm in; I would give it a dedicated SSD, just because that many random reads/writes on the Zpool sounds like a very bad time.

After spending a month reading / head scratching I've ended up with this build:
Motherboard: ASUS Pro WS W680 ACE
CPU: Intel i5 13500 (E-cores will be disabled)
RAM: 2x Kingston 32GB 4800 MHz ECC RAM (KSM48E40BD8KM-32HM)
Boot device: Samsung 980 250GB NVMe M.2 (MZ-V8V250)
Storage: 8x Samsung 870 QVO 8TB slotted in an Icy Dock 16-bay backplane (MB516SP-B)
HBA: LSI 9305-16i with 4x SAS SFF-8643 to SFF-8643 cables to the above backplane
PSU: Seasonic Prime TX-650
Case: Silverstone RM400

Wrong motherboard?
The biggest question burning on my mind is the motherboard choice. I realize Supermicro is the preferred choice around here, but I couldn't find a suitable Supermicro board that ticked all the boxes, so I went for the above-mentioned ASUS board. However, after having ordered the parts I found out I was looking at the wrong Supermicro board and that the X13SAE-F has essentially the same features for the same price.

The few differences seem very minor: The ASUS board has the above-mentioned Kingston RAM on its QVL while the Supermicro board only lists rebranded Supermicro RAM, but I'm guessing 3rd-party RAM will work just fine. Supermicro doesn't seem to allow BIOS flashing without a CPU, but I'm guessing the i5 13500 will work anyway. Supermicro has IPMI built in while the ASUS board requires an add-in card, but I'm not sure I really need IPMI in the first place; it'll always be a few meters away from me and I can connect a display to it if something happens. ASUS has a PCIe 3.0 x1 slot while Supermicro has a legacy PCI slot; PCIe is much more modern and I don't plan to use any legacy PCI cards, but neither slot would probably see any use from me.

The real difference here is the Supermicro brand. I realize iXsystems is going to be focused on enterprise gear, so I'm worried this ASUS board might cause some compatibility issue years down the road even if it works just fine right now. However, the underlying hardware is effectively identical on both boards, so am I just being paranoid? Both are marketed towards workstation use. The ASUS one does have marketing material about being "Optimized for 24/7 operation", so at least the intent is there, but this could just as easily be marketing BS. Should I return the ASUS board and get the Supermicro board?

Upgrade paths
After learning that getting the ZFS layout right up front is very important and very difficult to change later, I've decided to make my future upgrade path very straightforward. My plan is to have the 8x8TB SSDs in a single RAIDz2 vdev to prioritize usable storage space over IOPS, since SSDs have plenty of IOPS anyway. This allows me to add a second identical 8x8TB RAIDz2 vdev in the remaining 8 slots later down the road if it turns out I do need more storage/IOPS. That will probably happen fairly soon, because all my data together comes to roughly 90% of the usable space of the 6x8TB of data drives. I'll probably end up leaving some data on the old Synology box at first so I keep some free space. Does this all make sense? Should each vdev be bigger or smaller?
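To make sure I understand it right, here's roughly what that plan translates to at the command line (a minimal sketch with hypothetical pool/disk names; I gather TrueNAS normally builds the pool from the web UI and refers to disks by partition UUID rather than plain device names):

```sh
# Initial pool: one 8-wide RAIDz2 vdev (6 data + 2 parity).
zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh

# Later upgrade: stripe a second identical RAIDz2 vdev into the same pool.
# Note: vdevs can't be removed again once added, so double-check before running.
zpool add tank raidz2 sdi sdj sdk sdl sdm sdn sdo sdp

zpool status tank   # verify the resulting layout
```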

Later still I could even add another 8-bay Icy Dock and connect that to the 8 SATA ports coming out of the W680 chipset. I would end up using a SlimSAS (SFF-8654) to 4x SATA breakout cable if using the ASUS board. With my RAM choice of 2 sticks of 32GB, my upgrade path allows for 2 more sticks for a total of 128GB. That might not be enough for 24x8TB, but perhaps that's thinking a bit too far ahead; I doubt I'll need that much storage anytime soon, if ever.

Another thought I had was to use the 8 SATA ports from the chipset for a backup on spinning rust. The Silverstone case conveniently already has 8 storage bays for 3.5" disks. Of course this means my backup would be in the same case and part of the same server... :confused:
The alternative is to keep using the Synology NAS as a backup, but knowing me I'll never get around to doing that, dreading how slow it is. Maybe I'll eventually bite the bullet and build a second TrueNAS server with cheaper components purely to serve as a backup.
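If I do go the second-server route, my understanding is that the backup itself boils down to ZFS snapshots plus send/receive, which TrueNAS wraps in its Replication Tasks UI. A rough sketch, assuming a dataset called tank/data, a backup host called backup reachable over SSH, and a pool called backuppool on it:

```sh
# Take a snapshot and send the whole dataset to the backup machine.
zfs snapshot tank/data@backup-2023-10-21
zfs send tank/data@backup-2023-10-21 | ssh backup zfs receive backuppool/data

# Later runs only send what changed since the previous snapshot (incremental).
zfs snapshot tank/data@backup-2023-10-28
zfs send -i tank/data@backup-2023-10-21 tank/data@backup-2023-10-28 \
  | ssh backup zfs receive backuppool/data
```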

QLC NAND
I've read a lot of bad things about Samsung QVO drives (and QLC in general), but from what I've read the performance is just like any other SSD, and only after sequentially writing 84GB to a single 8TB QVO will it run out of SLC cache. With 6 data drives in the vdev that's roughly 0.5TB of data. The only time I'll be writing that much data at once is when I'm transferring my files over from my old Synology NAS, but that's limited to 1GbE anyway, so I wouldn't be surprised if the SLC cache never fills up even in this scenario. If there's something I'm missing about these drives, do tell!
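The back-of-the-envelope math, treating the quoted cache size as approximate:

```sh
echo $(( 6 * 84 ))        # ≈ 504 GB of aggregate SLC cache across the 6 data drives
echo $(( 504000 / 110 ))  # ≈ 4580 s (~76 min) of sustained ~110 MB/s (1GbE) writes to fill it,
                          # and that ignores the drives draining the cache in the background
```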

RAIDz1 for SSDs and resilver time on QLC
I've read many times that RAID5 / RAIDz1 are dead, but there do seem to be some people who are comfortable with it if the array is made up of SSDs. This would reduce the amount of space taken up by parity data. However, these are not small SSDs (8TB!), which I'm guessing is going to make resilvering take a while. The write speed after the cache is full is roughly 170MB/s, which, doing some simple math, means it should take less than a day to rebuild a dead drive. However, from my understanding another advantage of RAIDz2 is being able to check for errors even after 1 drive fails, meaning TrueNAS should (in theory) be able to see if a second drive is spitting out garbled data during the resilvering process. A RAIDz1 vdev might end up resilvering with corrupted data without anyone knowing about it. Is that correct? If so I might just stick to the tried and tested RAIDz2.
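The simple math in question:

```sh
echo $(( 8000000 / 170 ))         # ≈ 47000 s to rewrite a full 8 TB at a sustained 170 MB/s
echo $(( 8000000 / 170 / 3600 ))  # ≈ 13 hours, i.e. comfortably under a day (best case;
                                  # a real resilver also depends on how full the vdev is)
```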

Physical vdev layout on the LSI HBA
Which is better, each vdev connecting 2 drives to each of the 4 SAS ports or keeping each vdev to as few SAS ports as possible (2 SAS ports per vdev)? Or maybe even keeping it simple with 4 vdevs each with only 1 SAS port.
I'm unsure how this HBA works and how it routes the data. I'm wondering if there might be potential bottlenecks with the way ZFS will wait for an operation across all disks to finish before starting a new one.

10GbE LAN
At first I'll just use the 2.5GbE on the motherboard. Then, after checking what the theoretical throughput is with fio, I'll start to think about adding in a 10G NIC of some sort so I don't waste the potential throughput of an SSD array. I've been reading a lot about how Intel and Solarflare 10G NICs are recommended while the likes of Realtek and Aquantia are definitely not, but that doesn't leave me with many options. Cat6 is simple and an Intel X550-T2 would fit nicely in one of the x4 slots on the motherboard. Most SFP+ NICs seem to only be PCIe 2.0 x8; however, I did read somewhere that newer models like the Intel X710-DA2 are PCIe 3.0 x8 and will supposedly negotiate just fine down to x4 in a PCIe 3.0 x4 slot, which still leaves plenty of bandwidth for a single 10GbE connection.
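For the fio check, I'm thinking of something along these lines (a sketch; the directory path and sizes are placeholders, and I understand the result gets flattered by the ARC unless the test size comfortably exceeds RAM):

```sh
# Sequential read test against a directory on the pool; swap --rw=read for --rw=write
# to test writes instead.
fio --name=seqread --directory=/mnt/tank/fiotest --rw=read --bs=1M \
    --size=16G --numjobs=4 --ioengine=posixaio --runtime=60 \
    --time_based --group_reporting
```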

Beyond that I've been struggling to understand anything and everything about SFP+ including what kind of cabling I need. There's also something about DA in the NIC product name above referring to Direct Attach or something? I've forgotten where I read about this. This seems important. o_O

Special config options for an SSD only Zpool/vdev?
I stumbled across metaslab_lba_weighting_enabled on this page. From how I understand it, this should be set to 0 because the option is meant for spinning platters and not SSDs. Is this done automatically by TrueNAS or do I have to manually disable it? Are there any other such settings I should be adjusting because I'm going flash-only? What about ashift=12, recordsize, or Advanced Format disks? Are any of those more or less important with SSDs? I'm having a hard time wrapping my head around what they do exactly.
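For reference, this is how I understand these can be inspected from a shell once the pool exists (hypothetical pool/dataset names):

```sh
# Current value of the module parameter (1 = enabled) on a Linux-based SCALE install:
cat /sys/module/zfs/parameters/metaslab_lba_weighting_enabled

# Pool-level ashift property; it is fixed per vdev at creation time, and 12 means 4K sectors
# (0 here means ZFS auto-detected the sector size):
zpool get ashift tank

# recordsize is a per-dataset property and can be changed later (affects new writes only):
zfs get recordsize tank/media
zfs set recordsize=1M tank/media
```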

It took me a good portion of the afternoon/evening to convert my thoughts into letters on a screen so I hope I've provided enough information for my questions. :oops:
Either way I hope this'll be a fun learning experience for me!
-- Z0eff
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
The Samsung 870s were/are buggy, apparently crap flash. Certainly early units died spectacularly in a year or two. Not sure if it's fixed yet.
 

Z0eff

Dabbler
Joined
Oct 21, 2023
Messages
17
The Samsung 870s were/are buggy, apparently crap flash. Certainly early units died spectacularly in a year or two. Not sure if it's fixed yet.
That's concerning to hear. I'm guessing my units were manufactured more recently and not when the drives were first released in 2020, so I have hope they'll be fine for me, even a year or two down the line.
 

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
IPMI, while it may not be useful for you next to the server, may still be useful if you're ever away from the server, say on a trip, and the machine dies. It's pretty convenient. Wouldn't say it's required, but it's very handy.

The old, long-ago-debunked "RAID 5 is dead" claim is a bunch of BS. That being said, I'd go for Z2 in this case with that many SSDs.

I have no experience with them, but I've seen it posted many times here that many 2.5GbE cards do not work so well on SCALE. I'll let someone else chime in about that.

Supermicro server motherboards typically have more lanes. They are well proven and definitely work well with TrueNAS. I don't know a thing about the ASUS board; it may or may not be suitable. I went with a used Supermicro board, around $100, like many here have.

You likely will not want to fill your storage to 90% capacity; ZFS tends to slow down when it's that full. That being said, the system you are speccing will be vastly faster than your Synology.

I run Home Assistant in a VM in SCALE so I can run the HassOS-type install, zero issues. The app version has too many limitations for me. I want it all managed and simple.

You could always consider NFS instead of SMB. I use all NFS, even on a Windows client.

You can indeed expand by adding 8 more SSDs to your Z2 pool. That would also increase I/O speed.

Make sure to update your LSI firmware to the latest version, and make sure it's running in IT mode.
 

Z0eff

Dabbler
Joined
Oct 21, 2023
Messages
17
IPMI, while it may not be useful for you next to the server, may still be useful if you're ever away from the server, say on a trip, and the machine dies. It's pretty convenient. Wouldn't say it's required, but it's very handy.

I do sometimes show photos stored on my NAS from my phone but if something's wrong I don't think I'll be logging in remotely to try and troubleshoot it from my phone haha

The old, long-ago-debunked "RAID 5 is dead" claim is a bunch of BS. That being said, I'd go for Z2 in this case with that many SSDs.

Good to know, but yeah z2 just gives me that extra peace of mind anyway.

I have no experience with them, but I've seen it posted many times here that many 2.5GbE cards do not work so well on SCALE. I'll let someone else chime in about that.

It's not a card; it comes from the W680 Intel chipset. So it's Intel, which everyone seems to like, but it's built in instead of on an add-on card. I hope Debian/TrueNAS can handle that.

Supermicro server motherboards typically have more lanes. They are well proven and definitely work well with TrueNAS. I don't know a thing about the ASUS board; it may or may not be suitable. I went with a used Supermicro board, around $100, like many here have.

I don't really need more lanes; 2 x8 slots and 2 x4 slots is honestly plenty. Still worried about long-term compatibility though.

You likely will not want to fill your storage to 90% capacity; ZFS tends to slow down when it's that full. That being said, the system you are speccing will be vastly faster than your Synology.

I'll probably end up copying about half of my files from the Synology initially although part of me hopes that the lz4 compression will keep it below 80% usage.

I run Home Assistant in a VM in SCALE so I can run the HassOS-type install, zero issues. The app version has too many limitations for me. I want it all managed and simple.

Oh, are VMs within TrueNAS possible? I was under the impression that it was either Kubernetes apps, or you'd have to run TrueNAS itself in a VM and run other VMs alongside it.

You could always consider NFS instead of SMB. I use all NFS, even on a Windows client.

I didn't realize Windows could use NFS! That's very interesting; I'm already doing some googling to see which works better for my purpose.

Make sure to update your LSI firmware to the latest version, and make sure it's running in IT mode.

I'm really hoping the place I'm buying it from has done this already; the product page says IT mode. I'm worried I'll brick it by accident. The closest thing I've done is flashing graphics cards, but those have a BIOS toggle so if you brick one BIOS you've still got the other.

Thank you for your time!
 

MrGuvernment

Patron
Joined
Jun 15, 2017
Messages
268
The "RAID 5 is dead" argument applies more to spinning rust than it does to SSDs; RAID 5 with SSDs was still okay due to rebuild speeds. With large spinning rust drives it still applies, due to the potential for a flipped bit during a rebuild being almost guaranteed with the 2TB-or-larger drives most people buy, due to drives from the same batch often having similar failure rates, and due to the strain put on the other drives during a RAID 5 rebuild (and that doesn't even touch on the abysmal performance during the rebuild).

Personally, if you want this for more than a decade, I would lean more towards Xeons and Supermicro / server-grade gear vs workstation platforms. For 10 years, the support on the server-grade side is likely to be better for future TrueNAS updates and builds.
 

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
The "RAID 5 is dead" argument applies more to spinning rust than it does to SSDs; RAID 5 with SSDs was still okay due to rebuild speeds. With large spinning rust drives it still applies, due to the potential for a flipped bit during a rebuild being almost guaranteed with the 2TB-or-larger drives most people buy, due to drives from the same batch often having similar failure rates, and due to the strain put on the other drives during a RAID 5 rebuild (and that doesn't even touch on the abysmal performance during the rebuild).
Of course it applies more to HDDs. Nevertheless, after running actual RAID 5 on probably 50 servers since close to when mdraid came out (no idea what year that was), and even with some hardware controllers, many with larger than 2TB drives, and never once losing a pool, the odds of that (from the math used in the ridiculous article) would be essentially impossible. RAID5 in all its incarnations is still used by many today, including a number of people from the OpenZFS team. I'm not saying RAIDz1 is as good as RAIDz2, don't misunderstand. I disagree that most people say it is almost guaranteed, that's BS. If that were true, my weekly scrubs would be detecting lots of errors, weekly, since they are almost guaranteed and obviously I'll have that flipped bit, right? And of course with enterprise drives and TLER, a URE does not kill the pool as it used to with most RAID 5. Funny how my scrubs never ever come up with a flipped bit, yet if I lost a drive suddenly it always would. I'm sure it's that darn strain. People do lose pools, no question about it. And they lose more z1 pools than z2 pools. But they lose more z2 pools than z3 pools. And if they had z10 pools, they'd still lose some, but the least of all. There are some limits to using z1 for sure; I wouldn't use it with 20TB drives 6 wide (way out of range). But no z level is a backup anyway, and they all have some risk. Losing a pool is inconvenient, but not the end of the world with suitable backups.

I ran a pool of 3 z1 vdevs with 3 drives each (OLD drives) for around a year on SCALE. I had (from memory) 4 resilvers, no issues. Those were 4TB drives, and resilvers were something like 4 hours. I'd have no problem doing that again, though I'd rather have fewer, larger drives for power reasons.

But you likely know all that, I just can't stand it when someone brings that up, lol.

The article was undeniably and unquestionably wrong. No one has anywhere near that level of failure as was predicted. Orders of magnitude less.
 

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
You definitely can run VMs in SCALE using QEMU. They work very well. Some like Proxmox; I would never ever use it myself, but we all have our biases.

Windows NFS takes a little effort to set up. Microsoft includes an NFS client, but I believe it has to be enabled first and it takes a couple of other steps.

If you are afraid to flash the card, at least check what firmware version it's on, and if it's not current I would suggest you update it. Just had a guy here the other day on an older firmware who kept having drive errors and disconnects; a firmware update solved it. It's for your own benefit! I've flashed a couple of them, it's not hard.
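Roughly what the check and update look like from a shell for a SAS3 card like your 9305-16i; the firmware/BIOS file names below are placeholders, so grab the IT-mode package for your exact card from Broadcom first:

```sh
# List adapters with their current firmware version and whether they run the IT firmware:
sas3flash -list

# Flash controller 0 with the new IT firmware (and optionally the boot ROM):
sas3flash -o -c 0 -f SAS9305_16i_IT.bin -b mptsas3.rom
```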
 

Z0eff

Dabbler
Joined
Oct 21, 2023
Messages
17
The "RAID 5 is dead" argument applies more to spinning rust than it does to SSDs; RAID 5 with SSDs was still okay due to rebuild speeds. With large spinning rust drives it still applies, due to the potential for a flipped bit during a rebuild being almost guaranteed with the 2TB-or-larger drives most people buy, due to drives from the same batch often having similar failure rates, and due to the strain put on the other drives during a RAID 5 rebuild (and that doesn't even touch on the abysmal performance during the rebuild).

Hm. I'm still not entirely convinced RAIDz1 is safe. Maybe if I do 4 drives per vdev instead of 8, but that'll leave me with the same number of parity drives.

Personally, if you want this for more than a decade, I would lean more towards Xeons and Supermicro / server-grade gear vs workstation platforms. For 10 years, the support on the server-grade side is likely to be better for future TrueNAS updates and builds.

That pretty much mirrors my thought process. That settles it then, I'm going to return the ASUS board and get the Supermicro X13SAE-F instead.

Of course it applies more to HDDs. Nevertheless, after running actual RAID 5 on probably 50 servers since close to when mdraid came out (no idea what year that was), and even with some hardware controllers, many with larger than 2TB drives, and never once losing a pool, the odds of that (from the math used in the ridiculous article) would be essentially impossible. RAID5 in all its incarnations is still used by many today, including a number of people from the OpenZFS team. I'm not saying RAIDz1 is as good as RAIDz2, don't misunderstand. I disagree that most people say it is almost guaranteed, that's BS. If that were true, my weekly scrubs would be detecting lots of errors, weekly, since they are almost guaranteed and obviously I'll have that flipped bit, right? And of course with enterprise drives and TLER, a URE does not kill the pool as it used to with most RAID 5. Funny how my scrubs never ever come up with a flipped bit, yet if I lost a drive suddenly it always would. I'm sure it's that darn strain. People do lose pools, no question about it. And they lose more z1 pools than z2 pools. But they lose more z2 pools than z3 pools. And if they had z10 pools, they'd still lose some, but the least of all. There are some limits to using z1 for sure; I wouldn't use it with 20TB drives 6 wide (way out of range). But no z level is a backup anyway, and they all have some risk. Losing a pool is inconvenient, but not the end of the world with suitable backups.

I ran a pool of 3 z1 vdevs with 3 drives each (OLD drives) for around a year on SCALE. I had (from memory) 4 resilvers, no issues. Those were 4TB drives, and resilvers were something like 4 hours. I'd have no problem doing that again, though I'd rather have fewer, larger drives for power reasons.

But you likely know all that, I just can't stand it when someone brings that up, lol.

The article was undeniably and unquestionably wrong. No one has anywhere near that level of failure as was predicted. Orders of magnitude less.

Thinking about it reasonably does suggest RAIDz1 is perfectly fine assuming you have a backup that you can quickly restore from.

You definitely can run VMs in SCALE using QEMU. They work very well. Some like Proxmox; I would never ever use it myself, but we all have our biases.

Interesting! Sounds like it could work for simple non-critical things like a Minecraft server.

Windows NFS takes a little effort to set up. Microsoft includes an NFS client, but I believe it has to be enabled first and it takes a couple of other steps.

I definitely want to keep things simple, at least at first. Is it possible to change from SMB to NFS and back after a dataset has been created?

If you are afraid to flash the card, at least check what firmware version it's on, and if it's not current I would suggest you update it. Just had a guy here the other day on an older firmware who kept having drive errors and disconnects; a firmware update solved it. It's for your own benefit! I've flashed a couple of them, it's not hard.

Ok, I'll check and, if needed, will flash the card. I don't want to give future me a headache. :)
 

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
You can change from SMB to NFS and back to SMB, sure. You had mentioned downsides of SMB, which is the only reason I provided another option.

I wasn't saying to use Z1, I actually said Z2. But it's not just because of data safety: the amount of time to restore that much data (presuming the source is either remote or spinning rust) would not be small. Z1 is semi-often used to gain I/O speed, typically with 3 drives per vdev. Not as fast as 2-drive mirrors, but a little better on space, though typically you want a hot spare with multi-vdev Z1 so it's not a huge savings. Always a tradeoff. Some swear by pools of many mirror vdevs since they're easy to add on to (you only have to add 2 drives at a time) and very fast for reads. Of course, same downside as Z1: you can't lose 2 drives in any given vdev. But it's a pretty popular approach due to the ease of expansion. Z2 is the right choice in this case.

It's still good practice to do at least weekly scrubs. For those that don't do scrubs, or only do them monthly, I wouldn't trust them with Z1 or mirrors. Just my opinion.
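TrueNAS has a scrub schedule built into the UI, but for reference it boils down to this (the pool name is just an example):

```sh
zpool scrub tank        # start a scrub by hand
zpool status -v tank    # shows scrub progress and any checksum errors it found
```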

I have a Home Assistant VM, a Windows VM (my only Windows "machine", which I only use for a few specialized items that don't exist anywhere else, like BD3D2MK3D), and an Ubuntu VM. They're pretty fast too.

You should be very happy with the Supermicro.
 

Z0eff

Dabbler
Joined
Oct 21, 2023
Messages
17
You can change from SMB to NFS and back to SMB, sure. You had mentioned downsides of SMB, which is the only reason I provided another option.

I wasn't saying to use Z1, I actually said Z2. But it's not just because of data safety: the amount of time to restore that much data (presuming the source is either remote or spinning rust) would not be small. Z1 is semi-often used to gain I/O speed, typically with 3 drives per vdev. Not as fast as 2-drive mirrors, but a little better on space, though typically you want a hot spare with multi-vdev Z1 so it's not a huge savings. Always a tradeoff. Some swear by pools of many mirror vdevs since they're easy to add on to (you only have to add 2 drives at a time) and very fast for reads. Of course, same downside as Z1: you can't lose 2 drives in any given vdev. But it's a pretty popular approach due to the ease of expansion. Z2 is the right choice in this case.

It's still good practice to do at least weekly scrubs. For those that don't do scrubs, or only do them monthly, I wouldn't trust them with Z1 or mirrors. Just my opinion.

I have a Home Assistant VM, a Windows VM (my only Windows "machine", which I only use for a few specialized items that don't exist anywhere else, like BD3D2MK3D), and an Ubuntu VM. They're pretty fast too.

You should be very happy with the Supermicro.

8-wide Z2 it is then. It also makes more sense given that there was apparently a large failure rate with QVO drives some time ago. :smile:

I will definitely set up a monthly scrub.

I'll have to look into VMs to fully understand how to use them. I don't have much experience with them.

Regarding the X13SAE-F board, I remembered that it is also classed as a workstation board by Supermicro. How well do Supermicro/TrueNAS support workstation-oriented boards? Looking around for alternatives made me realize there is no "real" server board with socket 1700. (Yet?)
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
The OS doesn't care much, but there are a few negatives:
  1. Audio support means that extra little bit of power and one more thing to go wrong
  2. The larger form factor makes everything more constrained
  3. The conventional PCI slot is not a problem per se, but you'll look antiquated to friends and family unless you fill it with something expensive, like a time sync card or interface card for expensive lab equipment
  4. Traditional ATX airflow means worse airflow in the vast majority of cases, as ATX's ideas of what PC airflow should look like have been antiquated for longer than they ever were modern.
Do any of these matter? You decide.
 

Z0eff

Dabbler
Joined
Oct 21, 2023
Messages
17
The OS doesn't care much, but there are a few negatives:
  1. Audio support means that extra little bit of power and one more thing to go wrong
  2. The larger form factor makes everything more constrained
  3. The conventional PCI slot is not a problem per se, but you'll look antiquated to friends and family unless you fill it with something expensive, like a time sync card or interface card for expensive lab equipment
  4. Traditional ATX airflow means worse airflow in the vast majority of cases, as ATX's ideas of what PC airflow should look like have been antiquated for longer than they ever were modern.
Do any of these matter? You decide.
They're both ATX boards and both are workstation boards. My only real concern right now is that, because even the Supermicro board is considered "workstation", there won't be long-term support.
 

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
8-wide Z2 it is then. It also makes more sense given that there was apparently a large failure rate with QVO drives some time ago. :smile:

I will definitely set up a monthly scrub.

I'll have to look into VMs to fully understand how to use them. I don't have much experience with them.

Regarding the X13SAE-F board, I remembered that it is also classed as a workstation board by Supermicro. How well do Supermicro/TrueNAS support workstation-oriented boards? Looking around for alternatives made me realize there is no "real" server board with socket 1700. (Yet?)
Weekly scrub if you can.
 

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
Looking around for alternatives made me realize there is no "real" server board with socket 1700. (Yet?)
That's because that CPU is not a server CPU. As such, you'll have fewer memory channels, fewer threads, fewer lanes, and less memory, things that typically matter to server owners. It doesn't mean you'll need all of those things for the use you are currently planning, you do not, but things change over time too. 10 years is a long time.
 

Z0eff

Dabbler
Joined
Oct 21, 2023
Messages
17
That's because that CPU is not a server CPU. As such, you'll have fewer memory channels, fewer threads, fewer lanes, and less memory, things that typically matter to server owners. It doesn't mean you'll need all of those things for the use you are currently planning, you do not, but things change over time too. 10 years is a long time.
Don't think I'll need more RAM/PCIe. I've decided to go with the X13SAE-F.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
The choice of a consumer-oriented serverish case and workstation motherboard is curious when all you need is a case with three 5.25" bays to host the IcyDock, and so is the choice of an i5-13500 with E-cores off for an efficient, quiet server.
I've not tried to price the components, but I suspect there could be valuable savings by going for DDR4 hardware. You don't need the latest CPU for a NAS, and while SMB benefits from higher clocks it does not require the highest clock either.

The main potential red flags are the Samsung 870 QVO and knowing that the pool could be 90% full at start. One should really aim for 50% or less at start, to have some margin before upgrading.
Maybe you should consider having a SSD pool for the most accessed data and a HDD pool for the rest.

But have an external backup in any case, using the Synology or another, cheaper TrueNAS build. For noise, use fewer HDDs in larger sizes, and then raidz2 (meaning 4 drives).
 

Z0eff

Dabbler
Joined
Oct 21, 2023
Messages
17
The choice of a consumer-oriented serverish case and workstation motherboard is curious when all you need is a case with three 5.25" bays to host the IcyDock, and so is the choice of an i5-13500 with E-cores off for an efficient, quiet server.
I've not tried to price the components, but I suspect there could be valuable savings by going for DDR4 hardware. You don't need the latest CPU for a NAS, and while SMB benefits from higher clocks it does not require the highest clock either.

The 13500 should be plenty power efficient. The idea is for it to be in a sleep state for most of the time. The 12500 was my first choice but I saw the 13500 was the exact same price so I figured I might as well go for that and disable the E-Cores. The idea is also to have expandability and CPU overhead for any VMs I might want in the future. Maybe I'll end up repurposing this server. If I end up making a separate TrueNAS system for backups on spinning rust then that'll probably be using an Atom or maybe a Celeron of some kind.

And yeah, DDR4 and its associated hardware is a bit cheaper and would've been fine, but I gravitated more towards current gen. There are pros and cons.

The main potential red flags are the Samsung 870 QVO and knowing that the pool could be 90% full at start. One should really aim for 50% or less at start, to have some margin before upgrading.
Maybe you should consider having a SSD pool for the most accessed data and a HDD pool for the rest.

If the 870 QVOs do end up failing one after the other, I'll be sure to post about it on these forums for anyone looking to buy them.
I won't be filling it up with everything immediately. I'll be taking things slowly for sure.

But have an external backup in any case, using the Synology or another, cheaper TrueNAS build. For noise, use fewer HDDs in larger sizes, and then raidz2 (meaning 4 drives).
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
And yeah, DDR4 and its associated hardware is a bit cheaper and would've been fine, but I gravitated more towards current gen. There are pros and cons.
With DDR5 and no proper scheduling in CORE I see more cons than pros. But SCALE Cobia should have support for the hybrid architecture.

If you want to go ahead, I might have found another choice of drive for you (enterprise TLC at 99% health should beat consumer QLC any day):
 