freenas-supero
Contributor
Joined: Jul 27, 2014 · Messages: 128
Hello! First time posting here for me!
So I have installed FreeNAS onto an older Supermicro SuperServer running 2x Xeon L5420 (quad-core at 2.5 GHz) and 48 GB of DDR2-667 ECC RAM.
The server uses an M1015 flashed to IT mode with a mix of 8x SATA2 and SATA3 drives (3 Hitachi Deskstars, 4 Seagate Barracudas, and 1 WD Caviar Green) assembled as a single RAIDZ3 vdev for my pool. I plan on adding another 8 drives in the future as 2 vdevs of 4 drives each (the chassis has 16 hot-swap caddies).
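For reference, a layout like the one described could be set up roughly along these lines (just a sketch; the pool name `tank` and the `da0`..`da15` device names are placeholders, not my actual devices):

```shell
# Single 8-disk RAIDZ3 vdev (3 disks' worth of parity, 5 of data)
zpool create tank raidz3 da0 da1 da2 da3 da4 da5 da6 da7

# Planned expansion: two more 4-disk vdevs added later.
# Note: RAIDZ3 needs more than 4 disks to make sense, so small
# vdevs like these would typically be raidz1 (or mirrors) instead.
zpool add tank raidz1 da8 da9 da10 da11
zpool add tank raidz1 da12 da13 da14 da15

# Verify the vdev layout
zpool status tank
```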
CPU usage seems very reasonable (usually below 20%) and RAM is almost maxed out, averaging 38-40 GB out of 48, as would be expected with ZFS caching...
Each dataset is compressed with LZ4 and exported as an NFS share, which I mount on my Linux boxes.
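In case it helps anyone reading, the compression and NFS side looks roughly like this (a sketch; the dataset name `tank/media`, mount points, and the server IP are placeholders):

```shell
# On the FreeNAS box: enable LZ4 on a dataset and check the ratio
zfs set compression=lz4 tank/media
zfs get compressratio tank/media

# On a Linux client: mount the exported share
mount -t nfs 192.168.1.10:/mnt/tank/media /mnt/media
```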
The FreeNAS server is currently connected to my LAN via a single gigabit connection, but I expect to use LACP soon with a managed switch to increase bandwidth. Right now I use a cheap D-Link Green 8-port switch. When moving stuff in and out of the NFS share I usually get transfer rates of about 35-38 MB/s, which pales in comparison to my Proxmox virtualization server with its 15k RPM SAS drives and M5016 RAID controller (sustained rates around 80-90 MB/s).
Can I blame the single vdev for the poor performance? The NFS share? The older hardware? This server will mostly be a backup server and unified storage appliance for media streaming and such, and consequently will not be used for performance-critical applications, unless I can tweak things to get a decent speed.
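One way to figure out which part to blame might be to test each layer separately, something like this (a sketch; IPs and paths are placeholders, and `iperf3` has to be available on both ends):

```shell
# 1. Raw network throughput, independent of the disks
#    (start "iperf3 -s" on the FreeNAS box first)
iperf3 -c 192.168.1.10

# 2. Local pool write speed on the server, bypassing NFS entirely.
#    Caveat: zeros compress to almost nothing under LZ4, so run this
#    against a dataset with compression=off for an honest number.
dd if=/dev/zero of=/mnt/tank/testfile bs=1M count=4096

# 3. The same write over NFS from the Linux client, for comparison
dd if=/dev/zero of=/mnt/media/testfile bs=1M count=4096
```

If step 1 shows near-gigabit speeds and step 2 is fast but step 3 is slow, the bottleneck is likely NFS itself (sync write behavior is a common suspect) rather than the single vdev or the hardware.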
Any tips from the gurus are much appreciated for a noob like me!!
thanks!