Rsync with rsyncd (no ssh) takes 50% CPU load


MSoegtrop

Cadet
Joined
Oct 19, 2014
Messages
8
Dear FreeNAS Team and users,

I guess most of you will turn up your noses at my HW, a Thecus N7700+, but I like this HW and don't want to discuss it here. I would rather discuss what is possible with FreeNAS on this HW. My Thecus has a 1.8GHz Celeron processor (single core), 7x3TB WD Red disks in a ZFS RAID-Z3 config, and 2GB of RAM. I thought the 2GB of RAM might be an issue, especially with a 21TB ZFS array, but the top command claims that 1478MB are free while an rsync runs. I use FreeNAS-9.2.1.8-RELEASE-x86.

The system runs reasonably well, e.g. 50MByte/s for writes to Windows shares, but rsync is slow (15MByte/s) and takes 50% CPU load when pushing or pulling onto the FreeNAS-Thecus. I use rsync with the rsyncd protocol. I am sure about this because ssh is not enabled for the account I use on the other machine, and I use the :: syntax shown below to pull data from another host onto the FreeBSD machine:

rsync -rvlHtS --progress <user>@<ip-adr>::<path> .
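
For reference, the module on the remote side is defined in rsyncd.conf roughly like this (all names and paths below are placeholders, not my actual config):

# rsyncd.conf on the machine I pull from (all values below are placeholders)
uid = nobody
gid = nobody
use chroot = yes

# "backup" is the module name used after the :: in the rsync command
[backup]
    path = /data/backup
    auth users = michael
    secrets file = /etc/rsyncd.secrets
    read only = yes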

While the rsync runs, top executed on the FreeNAS machine gives me this:

last pid: 8319; load averages: 1.35, 1.45, 1.47 up 0+01:57:23 11:08:49
43 processes: 2 running, 41 sleeping
CPU: 9.2% user, 0.0% nice, 51.3% system, 11.8% interrupt, 27.6% idle
Mem: 101M Active, 79M Inact, 344M Wired, 1168K Cache, 25M Buf, 1464M Free
ARC: 44M Total, 120K MFU, 12M MRU, 10M Anon, 518K Header, 22M Other
Swap: 14G Total, 14G Free

PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND
8069 michael 1 86 0 22692K 3348K RUN 13:11 48.49% rsync
3336 root 12 20 0 29364K 11128K uwait 0:11 0.00% collectd
3113 root 6 22 0 99316K 61192K usem 0:10 0.00% python2.7

First I must admit, I don't fully understand this. The top line claims that the system has 9.2% user load, yet rsync uses 48.49%. The FreeBSD man page on top couldn't enlighten me on this. The main issue is that the overall system load is >1. This is measured towards the end of the transfer of a large file (50GB).

If I stop the rsync process, the system is immediately at >99% idle.

If I run a simple disk write test like this:

dd if=/dev/zero of=./test bs=65536 count=65536
65536+0 records in
65536+0 records out
4294967296 bytes transferred in 30.503499 secs (140802447 bytes/sec)


I get these values with top:

last pid: 8427; load averages: 0.72, 1.36, 1.49 up 0+02:10:07 11:21:33
42 processes: 1 running, 41 sleeping
CPU: 0.0% user, 0.0% nice, 24.4% system, 0.5% interrupt, 75.1% idle
Mem: 100M Active, 77M Inact, 335M Wired, 1168K Cache, 25M Buf, 1475M Free
ARC: 60M Total, 120K MFU, 12M MRU, 26M Anon, 553K Header, 22M Other
Swap: 14G Total, 14G Free

PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND
8421 michael 1 26 0 9412K 1428K tx->tx 0:06 12.50% dd


So the system is 75% idle. Also the write speed seems to be adequate for a 1GBit/s network link (it is almost 10x higher than what I get with rsync).

When I run iperf between the two machines, I get a bandwidth of 930Mbit/s and this top result:

last pid: 8485; load averages: 0.23, 0.39, 0.92 up 0+02:17:08 11:28:34
42 processes: 2 running, 40 sleeping
CPU: 0.0% user, 0.0% nice, 6.1% system, 66.7% interrupt, 27.3% idle
Mem: 100M Active, 77M Inact, 336M Wired, 1168K Cache, 25M Buf, 1474M Free
ARC: 34M Total, 122K MFU, 12M MRU, 16K Anon, 549K Header, 22M Other
Swap: 14G Total, 14G Free

PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND
8466 michael 4 30 0 11488K 2828K RUN 0:04 5.57% iperf

So networking does load the system, but here the bandwidth is about 10x higher than what rsync achieves.
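
For reference, the iperf run was nothing special, essentially a plain TCP throughput test along these lines:

iperf -s                    # on the FreeNAS machine
iperf -c <ip-adr> -t 30     # on the other host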

In the end I think that when transferring a large file with the rsyncd protocol, rsync shouldn't have to do much besides receiving data and writing it to disk, and both of these individually seem to work reasonably well. For each of the two tasks alone I get about 10x the bandwidth I get with rsync. Just the combination via rsync doesn't work out.

Does someone have an idea what rsync might be doing here? I found lots of posts about slow rsync, but they all attribute it to ssh encryption, and I am not using ssh. Looking at the write and network speeds, an increase by a factor of 4 seems reasonable, and that would be close enough to what you can get over 1Gbit/s Ethernet that I would be happy with it.
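
One thing I still want to try, to see how much of the CPU time goes into rsync's delta/checksum work, is forcing whole-file transfers (assuming I understand the -W / --whole-file option correctly):

rsync -rvlHtS -W --progress <user>@<ip-adr>::<path> .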

Is the low amount of RAM an issue, even though top claims that a lot of RAM is free? I think that for a "write once / hopefully read never" backup server, a relatively small amount of RAM should be OK, since I don't need a lot of caching, just for a few directories. Also I have mostly large files (VM images).

Thanks & best regards,

Michael
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194

*Facepalm*

Yes, the fact that you're running one quarter of the minimum required amount of RAM is most certainly an issue. You will lose everything eventually with such a low amount of RAM.

The slow processor does not help your rsync speeds either, but that's a secondary concern.
 

MSoegtrop

Cadet
Joined
Oct 19, 2014
Messages
8
Dear Eric,

I have seen these claims, but how much truth is in them? Let's work with facts and measurable data, not hearsay. If I look at the memory and disk I/O statistics, I see that the system hardly ever reads from the disks, and memory usage never went above 550MB today. Swap is not touched at all. So I don't see how adding memory could help here. I have read that read-ahead caching is disabled in low-memory configurations, but I don't see how read-ahead caching can help if there are only very few reads from the disks anyway. As I mentioned, I use this system not as a file server but as a backup server, which is obviously mostly written to. Also, I think that claims like "you will lose everything" are not appropriate. FreeNAS is BSD based and should be rock solid as long as it has enough swap space. Actually I am convinced that the likelihood of losing everything is much higher if I stick to the Thecus OS and a RAID6 configuration. I will think about adding memory if my system starts to touch swap.

Also if you look at the Oracle Solaris ZFS Administration Guide, they claim that you should have at least 1GB when using ZFS on Solaris.
http://docs.oracle.com/cd/E19253-01/819-5461/gbgxg/index.html
I cannot imagine that the same file system on BSD needs 8 times as much. Of course, Oracle also says that for highly loaded file servers you should have 1GB of RAM per 1TB of storage, but I don't plan to build a highly loaded file server, just a single-user backup server.

Best regards,

Michael

P.S.: to be sure, I won't lose anything if just one or two of my backup methods fail.
 

MSoegtrop

Cadet
Joined
Oct 19, 2014
Messages
8
Dear FreeNAS Users,

it looks like there is another reason why it is not a good idea to use a Thecus N7700 for FreeNAS: it doesn't have ECC memory, and apparently it is a bad idea to use non-ECC memory. I'll see if I can get a 7710 motherboard, which supports more memory and ECC memory.

Best regards,

Michael
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'll put this short and sweet.

We have 8GB as the minimum for a bunch of reasons, reliability and preventing data loss being the two biggest. If that's not enough of a reason, then you should find an alternative OS. Sorry, but you'll find virtually nobody here willing to talk about problems people have when they have insufficient RAM. FreeNAS can behave so erratically with too little RAM that we just don't look into problems reported on such systems. Low RAM has sent us on so many useless wild goose chases that we don't even entertain troubleshooting systems that don't meet the minimum.

With that said, good luck.
 
J

jkh

Guest
rsync is slow (15MByte/s) and takes 50% CPU load when pushing or pulling onto the FreeNAS-Thecus.
Sounds both reasonable and expected. Rsync is not a light-weight utility in terms of its need for both CPU horsepower and RAM to calculate differences between files, and this system is simply low-spec. 50% of a single-core Celeron? Sounds entirely reasonable. What's the problem?
 

MSoegtrop

Cadet
Joined
Oct 19, 2014
Messages
8
I forgot to write that with the original Thecus firmware, the same machine does rsync at above 100MByte/s. And it doesn't seem to be a memory problem, as I wrote: memory consumption always stayed below 600MB. A difference of a factor of 6 between two systems on the same machine sounds a bit too large.

Best regards,

Michael
 

MSoegtrop

Cadet
Joined
Oct 19, 2014
Messages
8
@cyberjock:

I fully understand this. I was experimenting to find out what is possible. After my experiments I accept that FreeNAS with ZFS requires 8GB of ECC RAM, but I don't accept that it has to be this way. I think one might need two rather different systems for two different applications of a NAS: a file server and a backup server. I fully understand that a file server needs a lot of RAM for caching; at work we have file servers with 512GB of memory. But privately I just need a reliable thing to keep backups of my data. And for this application, where you only need to cache a few directory structures and a few blocks in flight between network and disk, I don't know what the 8GB is needed for. I would still be interested in an explanation of why FreeNAS needs so much RAM for this application.

Anyway, after reading through some forum discussions, I don't think that ZFS is the right thing for me. I will sort my thoughts and think them through and then write them down in a separate entry.

Best regards,

Michael
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
For fun, I play around with ZFS on a laptop with 2GB of non-ECC RAM. I've seen crazy erratic behavior (including xterm slowing down to nothing), which is remarkable because I spend 90% of my time in xterm and vim. It's definitely like playing Russian roulette, but that's the way I roll sometimes. That being said, I would never do this on an important system.
 

MSoegtrop

Cadet
Joined
Oct 19, 2014
Messages
8
Yes, I also got crashes as soon as the filesystem was more than 3% filled.

Maybe this was also because I chose to install the 32-bit system. Since I had less than 4GB of RAM, I didn't see the point in a 64-bit system. But this might have been stupid. I guess ZFS just needs more than 4GB of address space. It never touched the swap and also used only 1/3 of my 2GB of memory. Maybe this was because it ran out of address space before it ran out of physical memory.
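
Before switching, I'll check the kernel address-space limits of the 32-bit install, roughly like this (assuming these sysctls are exposed the same way in the FreeNAS build):

# physical memory, kernel virtual memory limits, and the ARC cap
sysctl hw.physmem vm.kmem_size vm.kmem_size_max vfs.zfs.arc_max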

Just out of curiosity I will also try the 64-bit system and see how it behaves. Not that I will use it in a production system, but I am just curious whether it is more stable.

Best regards,

Michael
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
There is a reason why the hardware requirements are what they are. If you can't meet them, go find a different OS. No one's feelings will be hurt.

Note that my 2GB machine is vanilla FreeBSD, and I am being crazy-don't-do-this-at-home stupid (because breaking stuff and trying to fix it is fun). This means "never ever do this on a server".
 

MSoegtrop

Cadet
Joined
Oct 19, 2014
Messages
8
As I wrote, I do this out of curiosity. I trust a system more if it also runs under corner-case conditions, or if it is understandable why it doesn't.
That the system is known to misbehave unless you have 8GB of ECC RAM isn't a sufficient explanation for me. I want to understand why things are as they are.

The nice thing I have found so far about FreeNAS is that when it crashed, it crashed in a rather graceful way. It did reasonably proper error reporting, shut down the file system, and so on. The RAID was still healthy after 10 crashes & reboots. So it was more an emergency shutdown than a crash.

Best regards,

Michael
 
J

jkh

Guest
Well, you can always write your own NAS using a different collection of open source components (and those you choose to write yourself of course). Every software product is full of trade-offs. ZFS has made some simplifying assumptions that serve it well in the larger fileserver market it was originally designed to target. We see pretty good performance with 100+ disk arrays with 192+GB of memory, certainly performance comparable to other solutions in the same class. Does that mean that it's also going to "scale down" to running on a 32 bit laptop (yes, I've seen this attempted) with 2GB of memory? Probably not, and not because that's completely impossible (it's all just a "simple matter of coding") but because none of the software engineers behind ZFS and/or FreeNAS targeted that configuration.

Like I said, you can always write your own NAS. There are a number of filesystems to choose from, technologies like GEOM to put underneath those filesystems, and OS technologies (everything from illumos to Linux) that can be bent to the purpose of creating a general-purpose NAS. I think such efforts are even to be encouraged, since they give us more points of comparison rather than purely academic discussions about what "should be possible" or endless debates on where the sweet spot for a specific NAS product is (big iron? small iron? something in the middle?).
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
As I wrote, I do this out of curiosity. I trust a system more if it also runs under corner-case conditions, or if it is understandable why it doesn't.

I wouldn't consider "ignoring the documented minimum requirements" a "corner case". That's deliberately creating circumstances that are documented as "not a good thing". Naturally, it doesn't work right.

Car analogy: I don't put diesel fuel in my car because it says "unleaded gasoline only". I can expect that if I put diesel in my car it would certainly be "not a good thing". I also change the oil in accordance with the vehicle's manual because if I don't that would also be "not a good thing". Same with my tire pressure and the same with how often I change my tires. So when I buy my car it is well documented that I should NOT try to use the vehicle outside of its designed tolerances.

There is no difference between using my car within its design criteria and using a computer within its documented requirements, but for some reason you are arguing that there is. Not sure why, either. We have a documented limit; you shouldn't go below it. Problem solved.

We also know why. The system uses up all available RAM and processes get terminated inappropriately because there is no RAM left. This is totally explainable and understandable. There is no secret to it. It's simple observation and understanding. ;)

If you want to see where every byte of RAM goes, the source code and binaries are freely available. You are welcome to use them, modify them, etc. as you see fit. If you want to get rid of the 1.5GB or so of RAM drives that FreeNAS runs on, feel free. If you want to get rid of ZFS, or of all the unused drivers that don't apply to your hardware, feel free. It's all there for you to take and use.

That the system is known to misbehave unless you have 8GB of ECC RAM isn't a sufficient explanation for me. I want to understand why things are as they are.

Microsoft doesn't tell you why their limit is 1GB versus 256MB. The numbers are just totally arbitrary. The difference between Microsoft and FreeNAS is that if you go below Microsoft's minimums, the installer won't even install Windows. FreeNAS will. I've tried to convince @jkh to change this, but he doesn't see it as a problem.

If FreeNAS wouldn't install or boot without 8GB of RAM, then the constant discussions in the forums on this topic would be moot. You either meet the requirements, or you don't.
 

MSoegtrop

Cadet
Joined
Oct 19, 2014
Messages
8
Dear cyberjock,

for me this discussion is a bit like you saying "the sun goes up in the morning and goes down in the evening". I ask: interesting, why is that so? And you say: because it has always been like this. I would have expected an answer like "because the Earth is rotating ..."

The problem I have with the 8GB is that I don't know how you came up with this number. Just trial and error? And if so, who guarantees that it will run fine with 8GB and that there are no conditions under which I get file system corruption even with 8GB? I don't think the difference between 2GB and 8GB is large enough that I trust it to bridge the gap between "crashes and corrupts the file system every time" and "works 100% reliably". The fact that the manual says you need 8GB is no explanation of why it should crash or corrupt the file system with less, as long as sufficient swap space is available; it is just documented experience. Virtual-memory / page-swapping systems are not supposed to crash because of a lack of physical memory, only because of a lack of swap space or address space. Experience may show that 8GB is fine, but that is not sufficient reason to trust the system if it doesn't run reliably (if slowly) with 2GB.

My current assumption is that the problem might not be the memory size, but that I used the 32-bit rather than the 64-bit system. Running out of address space is an understandable reason for crashing. So I will test whether FreeNAS works reliably with 2GB of RAM and a 64-bit system. If yes, fine; I will then use 8GB as the manual says. If not, I won't trust it to work with 8GB either, unless someone has a good explanation of why it should.

And I fully agree that different requirements call for different SW. I am currently thinking about what my requirements are and what SW fulfilling them might look like.

Best regards,

Michael
 
J

jkh

Guest
for me this discussion is a bit like you saying "the sun goes up in the morning and goes down in the evening". I ask: interesting, why is that so? And you say: because it has always been like this. I would have expected an answer like "because the Earth is rotating ..."
Well, Cyberjock is a product of the military. If you don't understand something you were told to do the first time, the custom there is to simply yell louder the 2nd time and possibly also add a bit of physical training to reinforce the message. We don't have that option here, which is frankly a pity because if we did, everyone would be in much better physical shape.

Anyway, to continue with your analogy, the correct (and implied) response is more along the lines of: Because we don't have time to teach Astronomy class here. Just take it as read that the sun is doing the whole east/west thing and try to also ignore the fact that the orbital mechanics you learned in high school are basically superficial and only partially correct because everything in the solar system is just not rotating so much as it is moving in an elliptical corkscrew since every object is simultaneously orbiting around the galactic center, including the Sun, while also orbiting the Sun. See what happens when you start to really ask about the details? If only we had French animators working for us to visualize such things - the postings here might be shorter and Cyberjock less cranky.

My point above about building your own NAS was entirely apropos to this. If you really want to know how something works and don't want to just take someone else's word for it, then there's no substitute for building one. Similarly, if you really want to know how the 8GB of memory figure was arrived at, start measuring the footprint of python, nginx, samba and so on. There are over 130 processes running on a typical FreeNAS file server, and those are just userland processes - we're not counting kernel wired memory, ARC, or any other caches that system services (like LDAP or Kerberos) might keep in order to deliver good service. See how that might fit into 2GB instead and what might suffer or get outright evicted from memory (warning - the system likes to shoot Python first) as a result. Go ahead. Go look. We'll be waiting for you when you come back. :)
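
A quick way to get a rough picture of that, using nothing but stock ps (nothing FreeNAS-specific about it):

# resident and virtual size (in KB) of the largest userland processes
ps -axo rss,vsz,command | sort -rn | head -20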
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194

I don't think the difference between 2GB and 8GB is large enough that I trust it to bridge the gap between "crashes and corrupts the file system every time" and "works 100% reliably".

I'd like to add that the jump from 2GB to 8GB is a fourfold increase in RAM. It's not a trivial difference.

8GB is just the amount of RAM that has been empirically determined (nobody starts counting how much memory each component of the system will use before writing the OS - except maybe on some embedded systems) not to cause problems for typical uses.

To extend jkh's analogy, what you're asking is "Why does the Earth need 23 hours 56 minutes my-orbital-mechanics-professor-would-kill-me-for-not-knowing seconds to turn on its axis and not 12 hours?"
The answer is: "Well, if you could analyze every interaction the Earth has had since its creation, you'd have your answer."
The difference is that in this case, it is possible (though laborious) to account for everything - but you can't expect a detailed analysis to be made a priori, especially when the many components come from tons of different sources who'd each have to publish detailed memory usage results.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
8GB was determined as the minimum because people were losing pools spontaneously and without cause with less. They were also having errors in the WebGUI that were later attributed to being out of memory errors.

That's all you're going to get, because I am *not* a professor and am not going to re-teach everything from all the threads I've read over the years. I'm not the only one who made this observation. It was discussed among the other forum admins who had noticed the same thing, and we all agreed that the responsible thing to do for the sake of people's data was to raise the limit to 8GB.

Virtual-memory / page-swapping systems are not supposed to crash because of a lack of physical memory, only because of a lack of swap space or address space.

That statement is completely inaccurate too. You cannot swap kernel space to disk, and ZFS is kernel code. Do you know what happens when you run out of RAM and the kernel needs more? Kernel panic. See the problem?
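
If you want to watch it happen, the wired-memory and ARC numbers that top shows are also available directly via sysctl (note v_wire_count is in pages, not bytes):

sysctl vm.stats.vm.v_wire_count kstat.zfs.misc.arcstats.size vfs.zfs.arc_max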

@jkh - Actually, I had one of about 5 jobs in the military where you were forbidden from answering with "that's how it was always done". We jokingly called it "operating by tribal knowledge", and it could end your career if anyone knew that you operated that way (along with anyone else who knew, etc.). So no, your analogy to the military may apply to most positions, but it most certainly didn't apply to mine.
 
Last edited:

Oko

Contributor
Joined
Nov 30, 2013
Messages
132
Well, you can always write your own NAS using a different collection of open source components (and those you choose to write yourself of course).
I will go even further and suggest particular pieces of the open source ecosystem. He needs to get himself a copy of the DragonFly BSD ISO, read the man pages for the HAMMER file system very carefully, and put 40-50 years of development into a DragonFlyNAS, roughly equivalent to what has gone into FreeNAS so far, plus another 50 years to get the DragonFly infrastructure to a usable point (LDAP authorization not working, NFSv4 not working, hell, even SNMP not working properly). I guarantee you 2GB of RAM and that Celeron processor will be sufficient. Oh, please keep the BSD license, because I would like to use your work.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
@jkh - Actually, I had one of about 5 jobs in the military where you were forbidden from answering with "that's how it was always done". We jokingly called it "operating by tribal knowledge", and it could end your career if anyone knew that you operated that way (along with anyone else who knew, etc.). So no, your analogy to the military may apply to most positions, but it most certainly didn't apply to mine.
For some reason I'm under the impression you were a bubblehead. That in itself is sufficient explanation for *any* idiosyncrasies. As far as not asking questions goes - I think that's a Marine thing, and 8GB - heck, even 128GB of ECC RAM - would not be enough to stop a Marine from breaking a zpool.
 