OK, I built two monster FreeNAS machines for work: 24 x 3TB Seagate Constellation SATA disks in a Supermicro chassis with 24 hot-swap SAS/SATA bays, 2 x 256GB enterprise SSDs, 2 x Intel Xeon E5-2620 CPUs, 98GB of RAM, and Emulex 10Gb Ethernet adapters.
So, kind of monster machines. Local performance is awesome, but I can't make the network side work well at all. Standard dd tests against the ZFS pool give amazing numbers, and the SSDs are handling log and cache duty. But serving NFS over the 10Gb interfaces gets me only about 10MB/s. I've tried autotune on and off; off actually gives slightly better performance.

I know NFS plus ZFS is a big bag of tricks, and researching this is frustrating because every reply to people with this problem seems to be "add more RAM, ZFS loves RAM" or "add an SSD." Well, I have 256GB of SSD, almost 100GB of RAM, plenty of CPU cores, and NICs that are natively supported. What am I missing?

On FreeNAS 8.3 I was able to disable sync for NFS, which I know buys performance at the cost of data integrity if power fails. I then upgraded to 9.1, imported my ZFS pool, and upgraded the ZFS version; I'm getting about the same really slow performance. Yes, the clients are VMware, but come on: 10MB/s on a 10Gb connection? ZFS on its own performs fine locally, with almost 1GB/s writes and better than that on reads. Compression is turned off and there's no dedup; I'm trying to keep the setup very simple.

Since I built two of these systems, I also ran tests from system1 to system2 over a direct (crossover) connection on the 10Gb cards with jumbo frames enabled. NFS was a little better, but still way, way less than it should be. Based on the local filesystem benchmarks, I should be able to nearly saturate the 10Gb links. Do I have poorly optimized NIC drivers? Do I need manual tuning?

I know I'm leaving a bunch of stuff out and will need to answer more questions about my setup, but here at least is a zpool status of my volume, with the commands from my tests (roughly as I ran them) pasted below it. I see folks using machines with way less oomph than these getting at least 70-100MB/s, so I'm hoping someone can find my silver bullet. If you need more details, ask and I'll try to respond quickly.
  pool: volume1
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Sun Aug  4 02:00:02 2013
config:

        NAME                                            STATE     READ WRITE CKSUM
        volume1                                         ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/85001b27-de81-11e2-9a47-0090fa1ee116  ONLINE       0     0     0
            gptid/855f386b-de81-11e2-9a47-0090fa1ee116  ONLINE       0     0     0
            gptid/85bccda0-de81-11e2-9a47-0090fa1ee116  ONLINE       0     0     0
            gptid/8618ed3c-de81-11e2-9a47-0090fa1ee116  ONLINE       0     0     0
            gptid/867582a6-de81-11e2-9a47-0090fa1ee116  ONLINE       0     0     0
            gptid/86d3d83c-de81-11e2-9a47-0090fa1ee116  ONLINE       0     0     0
            gptid/8731e68f-de81-11e2-9a47-0090fa1ee116  ONLINE       0     0     0
            gptid/878da0ef-de81-11e2-9a47-0090fa1ee116  ONLINE       0     0     0
            gptid/87e9258d-de81-11e2-9a47-0090fa1ee116  ONLINE       0     0     0
            gptid/8845c405-de81-11e2-9a47-0090fa1ee116  ONLINE       0     0     0
            gptid/88a54226-de81-11e2-9a47-0090fa1ee116  ONLINE       0     0     0
            gptid/8901d5e1-de81-11e2-9a47-0090fa1ee116  ONLINE       0     0     0
            gptid/895fc7ba-de81-11e2-9a47-0090fa1ee116  ONLINE       0     0     0
            gptid/89bda854-de81-11e2-9a47-0090fa1ee116  ONLINE       0     0     0
            gptid/8a1bc16f-de81-11e2-9a47-0090fa1ee116  ONLINE       0     0     0
            gptid/8a7b89cb-de81-11e2-9a47-0090fa1ee116  ONLINE       0     0     0
            gptid/8adac5db-de81-11e2-9a47-0090fa1ee116  ONLINE       0     0     0
            gptid/8b3d0104-de81-11e2-9a47-0090fa1ee116  ONLINE       0     0     0
            gptid/8b9e3d11-de81-11e2-9a47-0090fa1ee116  ONLINE       0     0     0
            gptid/8bfd6826-de81-11e2-9a47-0090fa1ee116  ONLINE       0     0     0
            gptid/8c5f07c3-de81-11e2-9a47-0090fa1ee116  ONLINE       0     0     0
            gptid/8cc04ecb-de81-11e2-9a47-0090fa1ee116  ONLINE       0     0     0
        logs
          gptid/8eb6eccf-de81-11e2-9a47-0090fa1ee116    ONLINE       0     0     0
        cache
          gptid/8e728c27-de81-11e2-9a47-0090fa1ee116    ONLINE       0     0     0
        spares
          gptid/8db7f36c-de81-11e2-9a47-0090fa1ee116    AVAIL
          gptid/8e1afc87-de81-11e2-9a47-0090fa1ee116    AVAIL

errors: No known data errors
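For reference, this is roughly the local dd test I ran on the pool (the block size, count, and mount point are from memory; with compression off, /dev/zero is a fair test source):

    # local write test against the pool's mount point
    dd if=/dev/zero of=/mnt/volume1/testfile bs=1m count=10000
    # local read test of the same file
    dd if=/mnt/volume1/testfile of=/dev/null bs=1m

That's where I see the roughly 1GB/s write numbers.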
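And this is how I disabled sync back on 8.3 (it's the standard ZFS property; I set it on the pool's root dataset). I understand the trade-off: a power failure can lose the last few seconds of acknowledged writes, which is exactly why I'd rather find the real bottleneck:

    # trade NFS sync-write safety for speed (risky for VMware datastores)
    zfs set sync=disabled volume1
    # and to put it back:
    zfs set sync=standard volume1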
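For the system1-to-system2 test, this is the kind of raw-network check I can run between the boxes before NFS even enters the picture. The interface name and address here are examples (my Emulex cards show up under the oce driver; substitute your own):

    # enable jumbo frames on both ends of the direct link
    ifconfig oce0 mtu 9000
    # on system2:
    iperf -s
    # on system1, against system2's link address, 30 seconds with 4 parallel streams:
    iperf -c 10.0.0.2 -t 30 -P 4

If iperf can't fill the pipe either, that would point at the NICs or TCP stack rather than NFS or ZFS.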
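And if manual tuning really is the answer, these are the kinds of FreeBSD sysctls I keep seeing suggested for 10GbE. The values are guesses on my part, not something I've validated, and I believe autotune touches similar knobs:

    # bigger socket buffers for 10GbE (example values only)
    sysctl kern.ipc.maxsockbuf=16777216
    sysctl net.inet.tcp.sendbuf_max=16777216
    sysctl net.inet.tcp.recvbuf_max=16777216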