FreeNAS Build with 10GbE and Ryzen

Mastakilla

Patron
Joined
Jul 18, 2019
Messages
202
Hi,

I have been happily running an internal hardware RAID5 (LSI 9260-8i) for almost 10 years with no data loss. First it was 4x 2TB, then 5x, 6x, 7x, 8x (online expansion is awesome!). Later I upgraded to 5x 4TB, then 6x, 7x and finally (now) 8x 4TB (again using online expansion). I am aware of the risks involved with RAID5 on such large arrays and I have an (often outdated, but still...) offline backup to cater for this risk...
This is all inside my main desktop computer, which I turn off @ night (mainly because of the noise).

But now I'm running out of space again and need to upgrade. Instead of moving to even larger HDDs in an internal hardware RAID5, I'd like to switch to a NAS running the FreeNAS OS. This way I hope to increase the reliability a bit compared to the HW RAID5, while hopefully minimizing loss of performance (if I spend this amount of money, I'd like it to be an upgrade instead of a downgrade to the stone age, aka the performance of 10 years ago). I am considering going for a RAIDZ2.
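To make this concrete: what I have in mind boils down to an 8-wide RAIDZ2 pool, roughly like the sketch below (FreeNAS would normally build this through its GUI, and "tank" plus da0..da7 are placeholder names, not my actual devices):

  # 8-wide RAIDZ2 pool out of the 8 data disks
  zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
  zpool status tank      # vdev layout and per-disk health
  zpool list -v tank     # pool and per-vdev size / allocation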

Network clients
  • My desktop should be able to use the FreeNAS as if it were local storage (hence the 10GbE). It will use the FreeNAS for backups, media (also for direct editing), software packages, my documents, pictures, etc. I will store installed applications / games and some VMs locally on the desktop's SSD, but everything else will be stored on the NAS. I'm hoping for at least 400MB/sec now, and even more (600+MB/sec) after expanding the number of HDDs (a quick way to sanity-check the 10GbE link itself is sketched after this list).
  • My mediacenter will use it for media (just simple read) and backups (no extreme speed requirements for this)
  • My wife's laptop will use it for media, software installers and backups (no extreme speed requirements for this)
  • My work laptop will use it for media and backups (no extreme speed requirements for this)
  • Phones (no extreme speed requirements for this)
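The sanity check I mentioned for the 10GbE link itself: before blaming the pool for any missing MB/sec, I plan to test the raw network path first. A minimal sketch, assuming iperf3 is available on both ends (I believe FreeNAS ships it) and that the NAS ends up at 192.168.1.10 (placeholder address):

  # on the FreeNAS box:
  iperf3 -s
  # on the desktop, 4 parallel streams for 30 seconds:
  iperf3 -c 192.168.1.10 -P 4 -t 30

If that doesn't get close to line rate, no amount of pool tuning will reach 400-600MB/sec.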
Environment
  • It will often (mainly @ night) be idling, so idle power consumption is important.
  • After some re-thinking, I will locate the server in the attic instead of my office. So extremely low noise is less important, but reasonably low noise is still important. A consequence of this is that it will need to handle pretty high ambient temperatures (with the recent heat wave, it got up to 39°C in there, but that is pretty exceptional).
Local clients
  • It will run some clients like Transmission and Plex
  • Perhaps also some VMs? Not sure yet
Shopping list
  • Fractal Design Define R6 USB-C
  • ASRock Rack X470D4U2-2T (includes onboard Intel X550-AT2 for 10GbE)
    https://www.asrockrack.com/general/productdetail.asp?Model=X470D4U2-2T#Specifications
  • AMD Ryzen 2600 or 3600 (still need to investigate / compare idle power consumption)
  • 2x Kingston KSM26ED8/16ME (32GB DDR4 ECC RAM)
  • Seasonic Focus Plus 650 Gold (or something comparable)
  • Intel Optane H10 512GB (as boot / SLOG / L2ARC device?)
  • LSI SAS 9211-8i controller
  • 8x WDC H510 (10TB) in RAIDZ2 (negotiating an interesting deal on "rectified" HDDs)
About the HDDs
Although I was planning on simple SATA HDDs, these HDDs actually have a dual-port 12 Gbit/s SAS interface. But they were quite a bit cheaper than normal (331 euro / HDD).
  • Can I easily attach these to the LSI SAS controller? (see the sketch after this list)
  • What cables do I need for this?
  • For using dual port, I suppose I need a 2nd controller? (not planning that)
  • Anyone have experience with these HDDs? Are they noisy? I actually couldn't find any decent noise-comparison on 12+TB HDDs anywhere. As they're all 7200rpm, I suppose they'll be louder than my 4TB WD RED HDDs...
  • Is there any other reason why I would not buy these HDDs for my requirements?
  • I can sometimes find "rectified" HDDs for even cheaper (300 euro). Do you think this is worth considering as well?
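Regarding attaching them to the HBA (the sketch I referred to above): as far as I understand, once they are cabled up they should simply show up as da devices, and I would verify it roughly like this (device name is just an example):

  camcontrol devlist     # should list all 8 SAS drives behind the HBA
  smartctl -a /dev/da0   # SAS drives report their health data here as well

Please correct me if SAS drives need anything more exotic than that.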
About the SAS Controller
The LSI 9311 is quite difficult to find in webshops and also very expensive for a non-hardware-RAID card (around 400 euro), but I did find quite a lot of them on eBay for around 100 euro, usually shipping from China or Hong Kong. To be honest, I have no experience using eBay (since being ripped off by PayPal as a seller more than 10 years ago, I have refused to use it). So can anyone tell me whether these China-sourced cards are reliable and non-fake? And what should I pay attention to when buying one?
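In case I do end up with a second-hand card, my understanding is that a basic sanity check is to see what the flash utility reports before trusting it (sas2flash being the tool for the SAS2008-based cards, if I have that right):

  sas2flash -listall     # lists the controller(s), firmware / BIOS versions and SAS address
  sas2flash -list        # more detail; for FreeNAS the card should be running the P20 IT firmware

I assume a card that cannot be flashed to IT firmware, or that reports something odd here, would be a red flag.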


Boot / SLOG / L2ARC Device
I still need to read a bit about this, but feel free to give advice anyway :)
I tried doing some research on this, but it is still a bit unclear on what I need for this...
  • It isn't entirely clear to me how "required" or "beneficial" a SLOG and/or L2ARC would be in my use case. In a way, I think my workload can be considered as "medium/light", as only 1 concurrent client requires very good performance and normally max 2/3 concurrent clients will use it. On the other hand, my goal is very good performance to/from the NAS on my desktop, mainly for large sequential uploads and downloads (10-100GB @ +-500MB/sec in both directions is my goal) and sometimes I may also require at least reasonable non-sequential performance (doesn't have to be SSD-like-performance, but should come near internal HDD performance if possible).
  • I couldn't find any info on how well (if at all) the Intel Optane H10 performs as SLOG (and perhaps also L2ARC / boot device?). It is cheap, it has powerloss protection and it has 16/32GB of Optane memory in a 256/512GB M.2 SSD.
  • As I understand it, I only need a limited amount of SLOG (16GB or perhaps 32GB?). Can I use a different partition on the same device as a boot device, or perhaps even as L2ARC? Or would that destroy the performance advantage completely? (a rough partitioning sketch follows this list)
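The rough partitioning sketch I have in mind for that last question (purely hypothetical; "tank" and nvd0 are placeholder names, and I'm leaving the boot partition out of it for now):

  gpart create -s gpt nvd0
  gpart add -t freebsd-zfs -s 16G -l slog nvd0     # small partition for the SLOG
  gpart add -t freebsd-zfs -s 64G -l l2arc nvd0    # optional partition for L2ARC
  zpool add tank log gpt/slog                      # attach the SLOG partition
  zpool add tank cache gpt/l2arc                   # attach the L2ARC partition

Whether sharing one device like this kills the latency advantage is exactly what I'm hoping someone can comment on.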

MOBO / CPU / RAM
As these are very new, I don't expect much info on this yet. But can I assume that this will work fine and reliably, as this is all "server-grade" material?
(I am checking with Asrock Rack on how reliable their ECC should be on this mobo)
 

tfran1990

Patron
Joined
Oct 18, 2017
Messages
293
The choice you need to make is whether you are going to go for 10Gb. If you absolutely have to have 10Gb speeds then that's fine, but in terms of hardware you can get hold of at good pricing, you might be paying more.
I would consider going the 1Gb network / 6Gb SAS route.
The network adapter and SAS drives run hotter than normal SATA 5400/5900rpm HDDs and the 2008-generation HBAs.
If you buy a rebranded HBA, it's less likely you will get a fake/bootleg card.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Skip the LSI 9311 and go with a 6Gbps HBA. Do not buy anything from China or Hong Kong via eBay. There are massive issues with knockoff cards, discussed just earlier today I think. Basically you can generally trust eBay gear if it is coming from a known source of data center recycling, and not if it's coming from California (depends) or Asia (usually bad).

10G is cheap. Pick up a pair of Intel X520's for problem-free compatibility. Use fiber. You can get two 10G ports and 24 1G ports on something like a Dell 5524. We have a great 10G Networking Primer that discusses lots of this.
 

Mastakilla

Patron
Joined
Jul 18, 2019
Messages
202
The choice you need to make is whether you are going to go for 10Gb. If you absolutely have to have 10Gb speeds then that's fine, but in terms of hardware you can get hold of at good pricing, you might be paying more.
I would consider going the 1Gb network / 6Gb SAS route.
The network adapter and SAS drives run hotter than normal SATA 5400/5900rpm HDDs and the 2008-generation HBAs.
If you buy a rebranded HBA, it's less likely you will get a fake/bootleg card.
As explained, I have had a hardware RAID5 since 2009, so I'm now used to 300-400MB/sec speeds. This is going to cost me massive amounts of money and the last thing I want is to go back to stone-age speeds of 1Gbit/s ;) So yes, 10GbE is absolutely required :)
I am aware that 10GbE and 7200rpm drives run hotter than 5900rpm drives. I was not aware that the SAS interface also plays a role in this though... Do you mean that 7200rpm SAS drives run hotter than 7200rpm SATA drives (because they are more enterprise-focused perhaps)?

Skip the LSI 9311 and go with a 6Gbps HBA. Do not buy anything from China or Hong Kong via eBay. There are massive issues with knockoff cards, discussed just earlier today I think. Basically you can generally trust eBay gear if it is coming from a known source of data center recycling, and not if it's coming from California (depends) or Asia (usually bad).

10G is cheap. Pick up a pair of Intel X520's for problem-free compatibility. Use fiber. You can get two 10G ports and 24 1G ports on something like a Dell 5524. We have a great 10G Networking Primer that discusses lots of this.
The reason I was looking at the LSI 9311 is that I read somewhere that it has a SAS3008 chip vs the SAS2008 chips in the 6Gbps HBAs. I read that the SAS2008 was getting a bit old and that it was unclear how much longer it would remain supported. As this is a long-term investment, I thought it would be wise / safer to buy something with a SAS3008 chip?
Thanks for the eBay advice. I will certainly not buy one from China etc. then.

Does anyone perhaps have a list of recommended HBAs using the SAS3008 (I know many brands sell practically the same HBA under a different name), so that I have a bit more to search for on more reliable websites?

Also, is anything wrong with the Intel X550? As it is integrated on the mobo, that seemed a lot easier to me than buying a separate card for this (and occupying another PCIe slot). Fiber is not an option btw, as CAT6A cabling has already been installed throughout the whole house...

Thanks
 

Mastakilla

Patron
Joined
Jul 18, 2019
Messages
202
Edit:
Ok, just figured out that the LSI 9300 also has the SAS3008 and is not too expensive (I found a new one for 180 euro). And it is in the Hardware Recommendations Guide... So switching to that one...
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
The reason I was looking at the LSI 9311 is that I read somewhere that it has a SAS3008 chip vs the SAS2008 chips in the 6Gbps HBAs. I read that the SAS2008 was getting a bit old and that it was unclear how much longer it would remain supported. As this is a long-term investment, I thought it would be wise / safer to buy something with a SAS3008 chip?

SAS has moved on to 12Gbps and even 24Gbps out of a need to remain relevant. SAS is the primary technology used to interconnect multiple shelves of disks to a server, although other technologies such as FC, FCoE, iSCSI, etc., also exist. Because of this, they've pushed forward.

SATA, on the other hand, is basically a dead end. SATA is designed to attach a single drive (don't talk to me about port multipliers) and as the fastest speed a HDD is capable of is still not faster than maybe 250MBytes/sec, SATA 3Gbps is sufficient for HDD. SATA 6Gbps has been handy for SSD, but NVMe is taking over there. The SATA working group has stated that there is no SATA 4.0 standard on the horizon.

There's been some experimentation with things such as SATA Express, which uses two 6Gbps channels on a single connector, but the general feeling seems to be to continue to use SATA for HDD and NVMe for SSD.
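Rough numbers, in case they help (SATA and SAS at these speeds use 8b/10b encoding, so divide the line rate by 10 to get bytes):

  3 Gbps / 10 bits per byte ≈ 300 MB/s usable, already above the ~250 MB/s a fast HDD can stream
  6 Gbps / 10 bits per byte ≈ 600 MB/s usable
  12 Gbps / 10 bits per byte ≈ 1200 MB/s usable per lane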

There are lots of things that are getting old. Ethernet's very old. The ATX format is pretty old. SCSI, which is the underpinning for SAS, is *ancient*. PCIe is not exactly a young chicken, and PCI is a quarter of a century old. Being old isn't in itself a problem.

If you have no plans to implement SAS drives, you will *never* be able to use the 12Gbps aspect of the 9300's.

The driver for the LSI 6Gbps HBA is generally considered to be extremely stable, and even if it is now a ten year old product, it seems likely to continue onward for quite some time. The big things that would kill this would be the death of PCIe or a driver framework rewrite in FreeBSD.

Taking that point by point:

Those of us who have been doing this for awhile recall with angst the transitions from ISA->MCA->EISA->PCI (several revs)->PCIe that happened between ~1985-2003 (18 years). These things forced major architectural changes on FreeBSD and device drivers, and were disruptive and unpleasant. By way of comparison, for the last 16 years, PCIe has been king. Even though there have been several revisions to the standard, backwards compatibility has been very good, and because PCIe is essentially native to modern CPU's, it does not seem likely that this will go out of style in the next 5-10 years.

I'm not aware of any major effort to restructure the FreeBSD driver framework. This happened a bunch of times in the early years, in order to allow for newer technologies (EISA/PCI/PCIe), to standardize I/O devices (CAM), and to allow better concurrency with SMP.

Also, investing in an inexpensive $30 6Gbps HBA now and then needing to reinvest in a 12Gbps HBA that will become cheaper in the future is probably still a better gameplan financially than just doing it all now.

All that having been said, though, if you don't mind spending the cash, it won't hurt you to go 12Gbps now. It just won't do anything for you either.
 

Mastakilla

Patron
Joined
Jul 18, 2019
Messages
202
That Toshiba HDD is actually a SAS drive :) But yes, I do understand that it will not nearly saturate even 6Gbps ;)

I read somewhere the fear that the SAS2008 might not keep supporting future, larger HDDs:
https://www.ixsystems.com/community/threads/questions-about-sas-controllers.61331/#post-436127

Also, I saw that these new HBAs no longer use the SFF8087 connector, and that connector is exactly the only negative thing I've ever experienced with my LSI MR 9260. The connector and/or cables (8087 to SATA) are not very reliable in my experience (slightly touching a cable causes HDDs to drop from my array).

So I'm kinda hoping they've improved that with the SFF8643...
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
SFF8087 is a better connector than the 8643, in my opinion. The main thing against it is that it is relatively large by today's standards, and newer stuff like U.2 is driving adoption of 8643. 8643 is not exclusively 12Gbps; it is used on a number of boards to aggregate 6Gbps in the same way 8087 does, just in a smaller footprint.

The "supporting future larger HDDs" thing is FUD. LSI did in fact have an issue on their first generation (1078) controllers that effectively limited disk size to 2.2TB, which many people discovered the hard way. Having passed that limit, yes, I suppose when we get out to the petabyte-sized disks, we might once again run into an issue. However, it seems like HDD's may not survive more than another decade or so.

Back in 2010, a 120GB SSD cost around $300 ($2.50/GB), while a 2TB HDD cost around $180 ($0.09/GB).

Now in 2019, a 1TB SSD costs around $120 ($0.12/GB) while an 8TB shucked HDD costs around $130 ($0.016/GB).

The trend is that flash pricing is falling faster than HDD pricing. However, there are also factors such as density. A 1TB gumstick SSD is a lot smaller than a 3.5" HDD, and I can pile many TB of gumstick in the same space as even the largest 3.5" HDD available.
 
Yamelesswrench

Joined
Jul 18, 2019
Messages
3
I started my search with the ASRock Rack X470D4U, the 1GbE, 8-SATA stablemate to the board you are considering. I have a long 20-year history building AMD desktops, and the recent Ryzen 3000 series is very impressive, the 3600 being a particularly excellent value even at launch price. These boards have IPMI, and it looks like a great package for this use.

But there are some questions about ECC support. Officially, AMD says Ryzens do not support ECC.

Unofficially, AMD personnel state the circuitry is there, not disabled and "working", but not validated. https://www.reddit.com/r/Amd/comments/5x4hxu/we_are_amd_creators_of_athlon_radeon_and_other/def6vs2/

But some testers have gotten mixed results; this is from first-gen Ryzen:

https://www.hardwarecanucks.com/for.../75030-ecc-memory-amds-ryzen-deep-dive-5.html

Has this changed with the server boards, or is this a processor problem? The 3000-series Ryzen chips share a lot with the EPYC server chips; AMD has taken a modular approach to make anywhere from 6 to 64 cores on various sockets from the same "chiplets". It makes sense that the circuitry would be there on Ryzen, but without official word it gets kind of grey. Is ECC not working because the OS is not looking for it on a platform that does not officially support it? Unchecked or semi-unchecked memory errors could, under worst-case conditions, corrupt the pool. I couldn't handle the uncertainty and bailed on that path.

My only server-ish experience consists of just one build back in 2003, with a pair of AMD Athlons pin-shorted into MPs in a TYAN board that served a small page from home. That build still boots and runs to this day, but it's way outdated. For this FreeNAS build I think I am going to stick to the beaten path: new OS, new hardware. I don't need extra variables.

I would be interested to hear what you come up with for ECC support on these AM4 sockets.
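One thing I would at least try if I revisit this: checking what the firmware and OS report. A minimal sketch, assuming dmidecode is available on the FreeNAS box (I believe it is included), and keeping in mind this only shows that the platform claims ECC is active, not that correction and reporting actually work:

  dmidecode -t memory | grep -i 'error correction'   # firmware's view; should report something like "Single-bit ECC"

That still leaves open whether a real flipped bit would ever be corrected or reported.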
 

tfran1990

Patron
Joined
Oct 18, 2017
Messages
293
To add to that, @Yamelesswrench, I can't wait to see some 7nm Ryzen FreeNAS builds. With the AMD 3000 series being as good as or better than some of the consumer stuff from Intel, it's going to give FreeNAS builders the option to find the happy medium. (For ECC support with Intel, you have to go with an i3 or a Xeon.)

Ryzen has supported ECC RAM for some time now, but the hurdle is finding the right mobo that supports ECC and matching the memory against the QVL.
 

Mastakilla

Patron
Joined
Jul 18, 2019
Messages
202
Regarding the SLOG / boot drive: I've just been reading a bit on this subject and discovered that the Intel 660p isn't really a good candidate as a SLOG (no power-loss protection). I also read that latency is the most important aspect of a SLOG, so I'm looking at Optane for this. I see that the Intel Optane H10 series is pretty cheap and has power-loss protection. If I understand it well, they only have 16GB or 32GB of Optane memory and the rest is normal SSD (total size is 256GB or more). Would these be good as a SLOG? Would the one with 16GB Optane be sufficient (256GB total)? Can I also put my boot partition on this same device, or should I get a separate SSD (no, I'm not considering a USB drive for that :p)?

To be honest, I still don't really understand in which specific cases my SLOG would be used, but I would certainly prefer to have sync=standard, yet without a performance impact compared to sync=disabled (if possible). So would the Intel Optane H10 be able to provide this to me?

SFF8087 is a better connector than the 8643, in my opinion. The main thing against it is that it is relatively large by today's standards, and newer stuff like U.2 is driving adoption of 8643. 8643 is not exclusively 12Gbps; it is used on a number of boards to aggregate 6Gbps in the same way 8087 does, just in a smaller footprint.

The "supporting future larger HDDs" thing is FUD. LSI did in fact have an issue on their first generation (1078) controllers that effectively limited disk size to 2.2TB, which many people discovered the hard way. Having passed that limit, yes, I suppose when we get out to the petabyte-sized disks, we might once again run into an issue. However, it seems like HDD's may not survive more than another decade or so.

Back in 2010, a 120GB SSD cost around $300 ($2.50/GB), while a 2TB HDD cost around $180 ($0.09/GB).

Now in 2019, a 1TB SSD costs around $120 ($0.12/GB) while an 8TB shucked HDD costs around $130 ($0.016/GB).

The trend is that flash pricing is falling faster than HDD pricing. However, there are also factors such as density. A 1TB gumstick SSD is a lot smaller than a 3.5" HDD, and I can pile many TB of gumstick in the same space as even the largest 3.5" HDD available.
Thanks for the connector advice. If 8643 is actually worse than 8087, then that is certainly a strong argument against it for me. I also understand that old SAS2 controllers probably won't be dropped from support very soon, but as I have the tendency to buy good hardware and stick with it for quite some time (many parts of my current PC are 10 years old), I still have some concerns about this. Perhaps this question can help: "Is it possible to swap the HBA with a newer model without losing the array?"
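If I understand the docs correctly, ZFS identifies pool members by labels on the disks themselves rather than by the controller they hang off, so a swap should boil down to something like this (pool name is a placeholder; please correct me if this is too optimistic):

  zpool export tank    # cleanly detach the pool
  # shut down, swap the HBA, cable the disks to the new card, boot
  zpool import         # lists importable pools found on the attached disks
  zpool import tank    # bring the pool back online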

Ryzen ECC concerns
Wouldn't it be safe to assume that if ASRock Rack (the server brand of ASRock) releases some AM4 X470 mobos with ECC support, this actually works? ASRock Rack is meant for servers, not desktops... I don't think they can afford to ship ECC support that isn't 100% working...
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Wouldn't it be safe to assume that if ASRock Rack (the server brand of ASRock) releases some AM4 X470 mobos with ECC support, this actually works? ASRock Rack is meant for servers, not desktops... I don't think they can afford to ship ECC support that isn't 100% working...

You can assume what you like. I'd be cautiously optimistic at best. Features such as ECC are complicated, and while Supermicro has had many years and hundreds of products worth of experience on that front, smaller manufacturers may not have the engineering depth of experience necessary to be successful. Remember, it isn't just the detection and correction, it's also the reporting and alerting.
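As a minimal check on the FreeBSD side (sysctl names quoted from memory, so verify them), you would at least want to see the machine-check framework picking things up, since that is the path corrected errors get reported through:

  sysctl hw.mca.enabled hw.mca.count   # machine-check support active, and how many events have been recorded
  dmesg | grep -i mca                  # any machine-check messages logged since boot

That confirms the reporting path exists; it does not prove the board wires up ECC correctly in the first place.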
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
ASRock Rack aims to play in the same market as Supermicro, but they're not quite there. The firmware side of things has been a bit iffier than Supermicro, historically.
 

Mastakilla

Patron
Joined
Jul 18, 2019
Messages
202
I've sent an email to ASRock Rack, asking about your ECC concerns.

Besides this, I've also made some new discoveries that changed my plans quite a bit...
  • As I would like expandability in the future, my initial plan was to buy a large enough NAS with 5x 12TB HDDs and then hope for RAIDZ expansion to become available by the time I run out of space, so that I can add an HDD. But I just discovered that it is actually already possible to "expand" a RAIDZ, by replacing each HDD one by one with a larger one (a rough sketch of that procedure follows this list).
  • As this way of expanding does require many rebuilds, I decided to go for a RAIDZ2 instead of a RAIDZ1.
  • Also I decided to move the NAS into the attic. This does raise the cooling requirements a bit, but it makes my noise requirements realistic again.
  • I've also discovered that ZFS likes to keep 20% of free space in the pool, so that also increases my storage requirements a bit.
  • In the end, I've decided to go from a 5x 12TB RAIDZ1 to an 8x 10TB RAIDZ2 instead.
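The rough sketch I mentioned for that one-by-one expansion; pool and disk names are placeholders, so please correct me if I have misunderstood the procedure:

  zpool set autoexpand=on tank    # let the pool grow once every member has been replaced with a bigger disk
  zpool replace tank da0 da8      # old disk -> new, larger disk; wait for the resilver to complete
  zpool status tank               # check resilver progress / pool health before touching the next disk
  # ...repeat the replace for each remaining disk...
  zpool list tank                 # the extra capacity should appear after the last replacement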
I've updated my start post with this new information and I've also added some questions regarding performance / SLOG / L2ARC:
  • It isn't entirely clear to me how "required" or "beneficial" a SLOG and/or L2ARC would be in my use case. In a way, I think my workload can be considered as "medium/light", as only 1 concurrent client requires very good performance and normally max 2/3 concurrent clients (often even only 1) will use it. On the other hand, my goal is very good performance to/from the NAS on my desktop, mainly for large sequential uploads and downloads (10-100GB @ +-500MB/sec in both directions is my goal) and sometimes I may also require at least reasonable non-sequential performance (doesn't have to be SSD-like-performance, but should come near internal HDD performance if possible).
  • I couldn't find any info on how well (if at all) the Intel Optane H10 performs as SLOG (and perhaps also L2ARC / boot device?). It is cheap, it has powerloss protection and it has 16/32GB of Optane memory in a 256/512GB M.2 SSD.
  • As I understand it, I only need a limited amount of SLOG (16GB or perhaps 32GB?). Can I use a different partition on the same device as boot device or perhaps even as L2ARC? Or would that destroy the performance advantage completely?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
I've sent an email to ASRock Rack, asking about your ECC concerns.

And you expect them to tell you what, exactly? :smile: It's unlikely they'll say anything useful. Modern practice is that if a company understands there's a defect with their product, they talk in a different direction or dance around the issue. And the whole point was that they have less experience overall.

It isn't entirely clear to me how "required" or "beneficial" a SLOG and/or L2ARC would be in my use case.

Well a good starting point is explaining why you think you need those things. Typically, if you can't, you don't.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
But I just discovered that it is actually already possible to "expand" a RAIDZ, by replacing each HDD one by one with a larger one
Note that ZFS always allows you to replace a disk before detaching it, so that you never lose more redundancy than you need to. Of course, you need a free port for the new disk.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
I guess the solution to lack of available ports is not taking any courses, then.
 

Mastakilla

Patron
Joined
Jul 18, 2019
Messages
202
And you expect them to tell you what, exactly? :) It's unlikely they'll say anything useful. Modern practice is that if a company understands there's a defect with their product, they talk in a different direction or dance around the issue. And the whole point was that they have less experience overall.
We'll see... I do think I'll give it a shot either way. This isn't a company-critical NAS, just a home NAS, and it should definitely be a step up in reliability from the HW RAID5 that I've been running on my overclocked i7 920 (non-ECC of course) for 10 years now ;) and also better than using non-ECC memory, as I had first planned for this FreeNAS build as well.
Or would you say "using ECC on a non-Supermicro / non-Intel chipset is totally pointless and you'd better use non-ECC, higher-performance RAM in that case"?


Well a good starting point is explaining why you think you need those things. Typically, if you can't, you don't.
I guess my questions are "Can I achieve the performance that I'm after without them?" and "Would they help increase performance in my use case?".

L2ARC, as I understand it, is a "simple" read cache. So if I'm not regularly reading the same data, this won't have much use. As I don't often read the same data (I think), this will probably not do very much. That is why I was only considering L2ARC in case it makes sense to have it on the same device as the boot disk (or perhaps even on the SLOG?).

As I understand it, the need for a SLOG greatly depends on how "sync" is set (a small sketch of setting this per dataset follows this list).
  • sync=disabled : Writes will go first to the RAM and will immediately be "confirmed-to-be-written" to the OS/app after that. Only after this is the data actually written to the HDDs themselves. In case of a power outage or kernel panic, you lose what is not yet written on the HDDs (same as HW RAID without a BBU I guess). Is there no benefit from having a SLOG at all in this case? Or can it still be faster in some special cases?
  • sync=always : Writes will go first to the RAM and then to the SLOG (which should be power-loss protected). Only after the data is written to the SLOG is it "confirmed-to-be-written" to the OS/app. After this, the data is actually written to the HDDs themselves. If there is no SLOG, the "confirmation-to-be-written" will only be given after the data is written to the HDDs. In case the HDDs can't keep up with the SLOG (more than 2 transaction logs in the queue), the "confirmation-to-be-written" will also have to wait longer for the HDDs to catch up. In case of a power outage or kernel panic, you do not lose what is not yet written on the HDDs (similar to HW RAID with a BBU I guess). In this case the write performance is greatly dependent on the speed of the SLOG.
  • sync=standard : Only for specific (important) writes will it be like "always"; for all other writes it will be like "disabled".
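The small sketch I mentioned above: as far as I can tell, the property is simply set per dataset, so I could treat bulk media differently from more important data (dataset names here are just examples):

  zfs set sync=standard tank/documents   # honour sync requests from clients (the default)
  zfs set sync=disabled tank/media       # never wait for the ZIL / SLOG
  zfs set sync=always tank/vms           # treat every write as synchronous
  zfs get -r sync tank                   # check what is currently in effect, per dataset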

What is not clear to me is how exactly this "confirmation-to-be-written" works and what effect the SLOG has in which use cases...
  • Is this a confirmation on file level or on block (or something similar) level?
  • Does it depend on the app or OS whether / how this confirmation is "handled", or is it independent of this?
  • Does it depend on it being a local or remote write whether / how this confirmation is "handled", or is it independent of this?
Also unclear to me is how it determines what counts as an "important" write in the case of sync=standard.

A couple of use cases below with my imagined assumptions (please feel free to correct / expand on this); a small sketch for inspecting the relevant transaction-group tunables follows the list:
  1. I'm writing a big media file (10GB) to the NAS from my Windows desktop (10Gbit).
    sync=disabled : the file fits easily in the RAM and will be written at the speed of NIC?
    sync=always : the file fits mostly in 2 transaction groups (32GB RAM / 8 * 2) on the SLOG, so will be written at the speed of the NIC / SLOG (whichever is faster)?
    sync=standard : not sure if a file transfer from a Windows network client is an "important" write?
  2. I'm writing a huge media file (or multiple big ones) (say 100GB) to the NAS from my Windows desktop (10Gbit).
    sync=disabled : the file doesn't fit in the RAM at all and will be written at the speed of the HDDs? Not sure if a SLOG would help in this case also as "large additional write cache" or would it limit itself to those 2 transaction logs (8GB)?
    sync=always : the file doesn't fit in 2 transaction groups on the SLOG, so will be written at the speed of the HDDs? Not sure if non-uber-fast SLOG compared to sync=disabled would slow this down even more?
    sync=standard : not sure if a file transfer from a Windows network client is an "important" write, so if it will go the sync=disabled route or the sync=always route...
  3. I'm multiplexing (simultaneous sequential read and write) a big or huge media file stored on the NAS from my Windows desktop (10Gbit)
    Not sure if this changes the situation a lot from 1) or 2) (except of course slower transfers)
  4. I'm multiplexing (simultaneous sequential read and write) a big or huge media file stored on the NAS locally on the NAS (no network involved)
    Not sure if this changes the situation a lot from 3) (except of course slower transfers)
  5. I'm writing an Adobe installer (100000 mostly small files) to the NAS from my Windows desktop (10Gbit).
    I guess this use case depends the most on the performance of the SLOG. Having sync=always without a fast SLOG will probably be like having a RAID controller without cache (=~ back to the stone age for me).
  6. I'm installing something like an Adobe installer (100000 mostly small files) that is stored on the NAS onto the NAS (simultaneous non-sequential read and write) (no network involved)
    Similar to 5) I guess?
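About the transaction-group assumptions above (the sketch I mentioned before the list): from what I've been reading, current ZFS sizes its write buffering with a "dirty data" throttle rather than the old RAM/8 rule I quoted, and these seem to be the relevant tunables (please correct me if these are the wrong knobs):

  sysctl vfs.zfs.dirty_data_max           # maximum amount of not-yet-written data ZFS will buffer, in bytes
  sysctl vfs.zfs.dirty_data_max_percent   # ...capped at this percentage of RAM
  sysctl vfs.zfs.txg.timeout              # seconds before a transaction group is forced out to disk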
Note that ZFS always allows you to replace a disk before detaching it, so that you never lose more redundancy than you need to. Of course, you need a free port for the new disk.
Interesting... However, with my current plans (9211-8i), I don't have a free port anymore... :( Is it also possible to set my pool to "read-only" before starting the rebuild (that way I also wouldn't lose the redundancy from the disk that I'm replacing, right?)?

If that is also not possible, then I guess I still have the 2nd redundancy HDD from the RAIDZ2, if all hell breaks loose...


Also... How about the Intel Optane H10 as SLOG? (in case I would need a SLOG)
 

rvassar

Guru
Joined
May 2, 2018
Messages
971
But there are some questions about ECC support. Officially, AMD says Ryzens do not support ECC.

Unofficially, AMD personnel state the circuitry is there, not disabled and "working", but not validated. https://www.reddit.com/r/Amd/comments/5x4hxu/we_are_amd_creators_of_athlon_radeon_and_other/def6vs2/

But some testers have gotten mixed results; this is from first-gen Ryzen:

https://www.hardwarecanucks.com/for.../75030-ecc-memory-amds-ryzen-deep-dive-5.html

(snip)

I would be interested to hear what you come up with for ECC support on these AM4 sockets.

Oddly enough... One of the other fathers in my son's Scout troop is an AMD engineer. I asked him Monday if the current crop of Ryzen CPUs supports ECC memory. I heard back from him this morning: "Ryzen processors do support ECC memory. It has been tested."

Now whether it's documented or not is another question. I'll see if I can get more details.
 