Hi all, and thank you for any time spent helping me out.
I have been using ZFS and FreeNAS for a few years now, but it is time to rebuild my server and I wanted some advice. As it was only in use by me and 2 PCs for backup, performance was not an issue. Now it is going to be used for backups of 5 PCs, shared folders for 4 users, and offsite backups of 3 online servers. Not a large quantity of data each day, but the performance I am getting is inconsistent and seems low. I need to find the bottleneck or rebuild now that it will be used 24/7. E.g. currently an 8 TB scrub takes 2 days; if I go with larger drives in the future, scrubs will take too long to run weekly.
When I set this up as an OpenSolaris box with just 1 raidz1 <50% full, I recall a scrub speed of ~350 MB/s and faster CIFS transfers, but I have no record of this. Over the years it has had 2 mirrors added for extra storage capacity, plus weekly and monthly snapshots (~5 GB held in snapshots).
Now I get:
Scrub: ~125 MB/s (no network transfers in progress)
From server to main PC:
Max NFS share: 40-55 MB/s (not often used)
Max CIFS: 30-50 MB/s; transfers start at 50 MB/s then gradually drop to ~30 MB/s
A CIFS transfer to multiple PCs totals ~60 MB/s; it does not matter whether it's 2 or 5 PCs, performance is about the same. No difference between Linux and Windows 7 clients.
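Before buying new hardware, it may be worth an experiment with Samba tuning, since the gradual 50 to 30 MB/s drop can be buffer-related. The following are hypothetical auxiliary parameters for the FreeNAS CIFS service (Services -> CIFS -> Auxiliary parameters) that I have seen suggested; they are a thing to test, not a known fix:

```ini
; Hypothetical smb.conf additions -- benchmark before and after,
; and remove them if they make no difference.
socket options = TCP_NODELAY SO_RCVBUF=131072 SO_SNDBUF=131072
aio read size = 16384
aio write size = 16384
```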
Server spec:
FreeNAS 8.3, pool v28 (performance was very similar on v15)
AMD X3 processor
Asus workstation motherboard
PCI-X HBA: Supermicro AOC-SAT2-MV8 (should be good for 100 MB/s per channel) http://blog.zorinaq.com/?e=10
8 GB RAM
2x Icy Box IB-555SSK backplanes with 4 disks in each, going to the Supermicro HBA
Gigabit network: 2 NICs, both Realtek but different chipsets. No aggregated link; performance is equal on both.
An Intel server NIC gave the same performance as the Realtek; I may put it back in if I can get close to saturating a gigabit link.
Switch: Netgear ProSafe gigabit. Have tried two 3Com/HP 24-port and 48-port rackmount switches; no improvement in transfer times.
Find zpool status -v, top, etc. at the end of this post.
I will update with dd benchmark results when the latest scrub is finished.
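For anyone who wants to reproduce the numbers, this is roughly the dd test I plan to run. The target path and sizes here are placeholders; point TARGET at a dataset on the pool and raise BLOCKS so the file is at least 2x RAM (16 GiB on this box), otherwise the ARC will cache everything and inflate the read figure. If compression is enabled on the dataset, /dev/zero will also give inflated numbers.

```shell
#!/bin/sh
# Minimal sequential-throughput sketch with dd.
# Defaults are deliberately small/safe; for a real pool test use e.g.
#   TARGET=/mnt/DeathStar/ddtest.bin BLOCKS=16384   (16384 x 1 MiB = 16 GiB)
TARGET=${TARGET:-/tmp/ddtest.bin}
BLOCKS=${BLOCKS:-64}                                       # 64 x 1 MiB = 64 MiB
dd if=/dev/zero of="$TARGET" bs=1048576 count="$BLOCKS"    # sequential write
dd if="$TARGET" of=/dev/null bs=1048576                    # sequential read-back
rm -f "$TARGET"                                            # clean up the test file
```

dd prints the elapsed time and bytes/sec for each pass on stderr, which is the number to compare against the ~110 MB/s a saturated gigabit link can deliver.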
Question time:
I would prefer to keep costs low by fixing what I have, if it's capable of, say, 90 MB/s transfer speed.
1) Can anyone spot the bottleneck?
If I had to guess I would presume it's the PCI-X link or the AOC-SAT2-MV8, but as Sun Microsystems used them in their original 48-drive Thumper server, it should work well for ZFS.
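Some back-of-envelope arithmetic on that guess, assuming the card sits in a 64-bit/133 MHz PCI-X slot (halve the bus figure for a 66 MHz slot), and assuming a typical ~100 MB/s sequential rate per spinning disk:

```shell
#!/bin/sh
# Rough PCI-X headroom check (all figures are theoretical maxima).
bus_mb=$((64 / 8 * 133))     # 64-bit bus * 133 MHz = ~1064 MB/s theoretical
disks=8                      # drives hanging off the AOC-SAT2-MV8
per_disk=100                 # MB/s, assumed sequential rate per disk
demand=$((disks * per_disk))
echo "PCI-X bus: ~${bus_mb} MB/s, worst-case disk demand: ~${demand} MB/s"
```

So at 133 MHz the bus itself should have headroom; a 66 MHz slot (~533 MB/s) could pinch during scrubs, but neither figure explains 30-50 MB/s over CIFS on its own.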
2) Anyone got a benchmark for the iXsystems Mini?
I would like to support FreeNAS development, but import taxes will probably make it too expensive.
3) Hardware recommendations for a new setup?
I am looking to saturate a gigabit link (no aggregated links etc.). This will be in a home office, so quiet is preferred. 8 drives max, probably 2x 4-drive raidz1, or an 8-drive raidz2 with 2 hot spares on the motherboard SATA ports if I reuse the case and backplanes. The IBM M1015 is often recommended but is getting hard to find; any recommended replacements?
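For what it's worth, the two candidate layouts work out to the same usable space; the trade-off is in redundancy and resilver behaviour. A quick sketch, assuming hypothetical 4 TB drives (substitute your drive size):

```shell
#!/bin/sh
# Usable-capacity comparison of the two candidate 8-drive layouts.
drive_tb=4
two_raidz1=$(( 2 * (4 - 1) * drive_tb ))   # 2x 4-disk raidz1 -> 2x 3 data disks
one_raidz2=$(( (8 - 2) * drive_tb ))       # 8-disk raidz2    -> 6 data disks
echo "2x 4-drive raidz1: ${two_raidz1} TB usable (survives 1 failure per vdev)"
echo "8-drive raidz2:    ${one_raidz2} TB usable (survives any 2 failures)"
```

Same capacity either way, but raidz2 tolerates any two simultaneous failures, while two raidz1 vdevs only survive a second failure if it lands in the other vdev.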
Currently my zpool status -v (gptid has been shortened to correct formatting):
Code:
[root@freenas] ~# zpool status -v
  pool: DeathStar
 state: ONLINE
  scan: scrub in progress since Thu Feb 28 02:30:30 2013
        5.11T scanned out of 7.07T at 125M/s, 4h35m to go
        8K repaired, 72.28% done
config:

        NAME              STATE     READ WRITE CKSUM
        DeathStar         ONLINE       0     0     0
          raidz1-0        ONLINE       0     0     0
            gptid/aa4d3   ONLINE       0     0     0
            gptid/aa998   ONLINE       0     0     0
            gptid/aae0ff  ONLINE       0     0     0
            gptid/ab26e   ONLINE       0     0     0
          mirror-1        ONLINE       0     0     0
            gptid/c4fa0   ONLINE       0     0     0
            gptid/c56a5   ONLINE       0     0     0
          mirror-2        ONLINE       0     0     0
            gptid/f025e1  ONLINE       0     0     0
            gptid/f0ad67  ONLINE       0     0     0

errors: No known data errors
top shows
Code:
load averages:  0.34,  0.43,  0.41
33 processes:  1 running, 32 sleeping
CPU:  0.0% user,  0.0% nice, 14.0% system,  1.5% interrupt, 84.5% idle
Mem: 90M Active, 57M Inact, 536M Wired, 2260K Cache, 205M Buf, 6963M Free
Swap: 16G Total, 16G Free
systat -io
Code:
(bar graph transcribed; key figures only)
CPU:        ~14% system, ~1% interrupt, ~85% idle
md0-md2:    idle
ada0, ada1: light activity
ada2, ada3: idle
ada4:       566.65 tps (MB/s bar pegged)
ada5:       644.17 tps (MB/s bar pegged)
ada6:       571.04 tps (MB/s bar pegged)
ada7:       562.45 tps (MB/s bar pegged)