Dear FreeNAS Team and users,
I guess most of you will turn up your noses at my hardware, a Thecus N7700+, but I like it and don't want to discuss that here. What I would rather discuss is what is possible with FreeNAS on this hardware. My Thecus has a 1.8GHz single-core Celeron, 7x3TB WD Red disks in a ZFS RAID-Z3 configuration, and 2GB of RAM. I thought the 2GB of RAM might be an issue, especially with a 21TB ZFS array, but top claims that 1478MB are free while an rsync runs. I use FreeNAS-9.2.1.8-RELEASE-x86.
The system runs reasonably well, e.g. 50MByte/s for writes to Windows shares, but rsync is slow (15MByte/s) and takes 50% CPU load when pushing or pulling to the FreeNAS-Thecus. I use rsync with the rsyncd protocol, not ssh. I am sure about this because ssh is not enabled for the account I use on the other machine, and because I use the :: syntax, as below, to pull data from another host onto the FreeNAS machine:
rsync -rvlHtS --progress <user>@<ip-adr>::<path> .
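As an extra sanity check that the daemon protocol is really in use: an rsync:// URL (equivalent to the :: syntax) talks to rsyncd on TCP port 873 and never to ssh, so listing the daemon's modules confirms the daemon answers (here <ip-adr> is the same placeholder as above):

```shell
# With no module name after the host, rsync asks the daemon on port 873
# for its list of exported modules; if this prints anything, the daemon
# protocol is in use and ssh is out of the picture.
rsync rsync://<ip-adr>/
```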
While the rsync runs, top on the FreeNAS machine gives me this:
last pid: 8319; load averages: 1.35, 1.45, 1.47 up 0+01:57:23 11:08:49
43 processes: 2 running, 41 sleeping
CPU: 9.2% user, 0.0% nice, 51.3% system, 11.8% interrupt, 27.6% idle
Mem: 101M Active, 79M Inact, 344M Wired, 1168K Cache, 25M Buf, 1464M Free
ARC: 44M Total, 120K MFU, 12M MRU, 10M Anon, 518K Header, 22M Other
Swap: 14G Total, 14G Free
PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND
8069 michael 1 86 0 22692K 3348K RUN 13:11 48.49% rsync
3336 root 12 20 0 29364K 11128K uwait 0:11 0.00% collectd
3113 root 6 22 0 99316K 61192K usem 0:10 0.00% python2.7
First I must admit that I don't fully understand this output. The CPU line claims 9.2% user load, but rsync shows 48.49% WCPU; the FreeBSD man page for top couldn't enlighten me on this (perhaps rsync's time spent in system calls is counted as "system" in the CPU line but still attributed to the rsync process?). The main issue is that the overall system load is >1. This was measured towards the end of the transfer of a large (50GB) file.
If I stop the rsync process, the system is immediately at >99% idle.
If I run a simple disk write test like this:
dd if=/dev/zero of=./test bs=65536 count=65536
65536+0 records in
65536+0 records out
4294967296 bytes transferred in 30.503499 secs (140802447 bytes/sec)
I get these values with top:
last pid: 8427; load averages: 0.72, 1.36, 1.49 up 0+02:10:07 11:21:33
42 processes: 1 running, 41 sleeping
CPU: 0.0% user, 0.0% nice, 24.4% system, 0.5% interrupt, 75.1% idle
Mem: 100M Active, 77M Inact, 335M Wired, 1168K Cache, 25M Buf, 1475M Free
ARC: 60M Total, 120K MFU, 12M MRU, 26M Anon, 553K Header, 22M Other
Swap: 14G Total, 14G Free
PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND
8421 michael 1 26 0 9412K 1428K tx->tx 0:06 12.50% dd
So the system is 75% idle, and the write speed seems adequate for a 1Gbit/s network link; it is almost 10x what I get with rsync.
When I run iperf between the two machines, I get a bandwidth of 930Mbit/s and this top result:
last pid: 8485; load averages: 0.23, 0.39, 0.92 up 0+02:17:08 11:28:34
42 processes: 2 running, 40 sleeping
CPU: 0.0% user, 0.0% nice, 6.1% system, 66.7% interrupt, 27.3% idle
Mem: 100M Active, 77M Inact, 336M Wired, 1168K Cache, 25M Buf, 1474M Free
ARC: 34M Total, 122K MFU, 12M MRU, 16K Anon, 549K Header, 22M Other
Swap: 14G Total, 14G Free
PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND
8466 michael 4 30 0 11488K 2828K RUN 0:04 5.57% iperf
So networking does load the system, but here the bandwidth is roughly 8x higher than what rsync achieves.
In the end, I think that when transferring a large file with the rsyncd protocol, rsync shouldn't have to do much besides receiving data and writing it to disk, and both of those work reasonably well in isolation: each individual task achieves roughly 8-9x the bandwidth I get with rsync. Only the combination via rsync doesn't work out.
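For reference, the rough arithmetic behind that comparison, using the numbers from the runs above (integer shell arithmetic, so values round down; dd itself reported 140.8MByte/s):

```shell
# Numbers from the measurements above; 30.5 s rounded up to 31 s
# so that plain integer shell arithmetic works.
rsync_mbs=15                               # measured rsync throughput, MB/s
dd_mbs=$((65536 * 65536 / 31 / 1000000))   # 4 GiB in ~30.5 s -> ~138 MB/s
net_mbs=$((930 / 8))                       # 930 Mbit/s -> ~116 MB/s
echo "disk:    ${dd_mbs} MB/s (~$((dd_mbs / rsync_mbs))x rsync)"
echo "network: ${net_mbs} MB/s (~$((net_mbs / rsync_mbs))x rsync)"
```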
Does someone have an idea what rsync might be doing here? I found lots of posts about slow rsync, but they all attribute it to ssh encryption, and I am not using ssh. Looking at the write and network speeds, an increase by a factor of 4 (to ~60MByte/s) seems achievable, and that would be close enough to what Gigabit Ethernet delivers in practice that I would be happy with it.
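For completeness, these are variations I could test if someone thinks they matter; the flags are standard rsync options (see the rsync man page), and the host/path are placeholders as above:

```shell
# --whole-file skips rsync's delta-transfer checksumming, which is pure
# CPU work (normally only the default for local copies):
rsync -rvlHtS --whole-file --progress <user>@<ip-adr>::<path> .
# --inplace avoids writing each file to a temporary copy and renaming it:
rsync -rvlHtS --inplace --progress <user>@<ip-adr>::<path> .
```

If --whole-file restores most of the bandwidth, that would point at the delta algorithm's checksums as the CPU hog on this single-core Celeron.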
Is the low amount of RAM an issue, even though top claims that a lot of RAM is free? I would think that for a "write once / hopefully read never" backup server, a relatively small amount of RAM should be OK, since I don't need much caching beyond a few directories. Also, I mostly have large files (VM images).
Thanks & best regards,
Michael