Can't get NFS to work (Windows 7 Ultimate)

Status
Not open for further replies.

John M. Długosz

Contributor
Joined
Sep 22, 2013
Messages
160
The FreeNAS docs explain that NFS can have better performance than CIFS, and give directions for installing third-party NFS clients on versions of Windows that don't support it directly. I want to take advantage of that, so I enabled NFS (supported by "Ultimate") and indeed connected to the FreeNAS box.

But although I can browse the directories and see the file names, every file gives an error saying it is locked by another process and can't be read.

I tried turning off the Samba service completely in case there were some residual effects. It did not make a difference.

Now, this shouldn't be any harder than serving NFS on one end and using NFS on the other, and the docs don't mention any other funny business. The permissions should be fine, since the same directory worked over CIFS, and the error doesn't read like a permissions problem. But I wonder if it still might be one, and Windows just isn't interpreting the error correctly? Do NFS shares have access rules applied directly to the share, or interpret permissions differently than Samba does, or anything like that?
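For what it's worth, here's a quick check I can run from the Windows side to see the raw error code behind the "locked by another process" message (just a sketch in Python; the N: path is a placeholder for wherever the NFS share ends up mapped):

import errno

path = r"N:\some\file.txt"  # hypothetical path on the mapped NFS share
try:
    with open(path, "rb") as f:
        print("opened OK, first bytes:", f.read(16))
except OSError as e:
    # On Windows, e.winerror is the raw error code (32 = sharing violation,
    # 5 = access denied); e.errno is the mapped POSIX errno.
    print("winerror:", getattr(e, "winerror", None),
          "errno:", errno.errorcode.get(e.errno, e.errno),
          "-", e.strerror)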

—John
 

leenux_tux

Patron
Joined
Sep 3, 2011
Messages
238
I understand you're trying to maximize throughput between your Windows machine and your FreeNAS system; however, it would be interesting to know what issues you had with CIFS.

There are a number of threads on the forum where people have had issues with CIFS; however, there are also a number of threads where people have had some major wins speeding up their transfers.

What numbers (transfer speeds) were you getting? What were you hoping for? There are a number of factors that can affect transfer speeds. If you post your initial results and setup, I'm sure some of the forum members will post their thoughts.
 

John M. Długosz

Contributor
Joined
Sep 22, 2013
Messages
160
My results are documented here. The issue isn't sustained throughput, but individual file opening and closing (e.g. using lots of small files rather than one big one) and the lack of caching.

At the very least, I'd like to run benchmarks and find out what the differences are between NFS and CIFS. Much of the information on SMB (Windows shares) is out of date, and both the version of Samba shipped with FreeNAS and the version of Windows I'm running support the newer protocol features.
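The comparison I have in mind is roughly the sketch below, walking the same source tree once over each protocol (the two paths are only placeholders for the CIFS UNC path and the mapped NFS drive):

import os
import time

def walk_and_read(root):
    # Open, read, and close every file under root; return (file count, seconds).
    start = time.perf_counter()
    count = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            with open(os.path.join(dirpath, name), "rb") as f:
                f.read()
            count += 1
    return count, time.perf_counter() - start

# Placeholder paths: the same dataset seen over CIFS and over NFS.
for label, root in [("CIFS", r"\\freenas\tank\src"), ("NFS", r"N:\src")]:
    n, secs = walk_and_read(root)
    print(f"{label}: {n} files in {secs:.1f}s ({n / secs:.0f} files/s)")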
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Opening a large number of small files has been (and pretty much always will be) a problem, because you open one file, which is fetched by your server, then served to your desktop, and then your desktop makes another request. Each time your server has to fetch another file, that means more read requests, where disk latency of just 2-3 ms can really add up in your pool. If you need better performance with large numbers of small files, your best bet is to build an SSD pool.
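Back-of-the-envelope numbers (purely illustrative, not measured from your pool):

# Per-file latency adds up even when the files themselves are tiny.
files = 10_000               # e.g. a source tree of small files
per_file_latency = 0.003     # ~3 ms of disk + round-trip latency per file
print(f"{files * per_file_latency:.0f} s spent waiting on latency alone")
# ~30 s of pure waiting before any meaningful amount of data moves.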
 

John M. Długosz

Contributor
Joined
Sep 22, 2013
Messages
160
The total size of the files in the "compile" benchmark is far smaller than the RAM. The DAS case also has to seek, etc., to open each file; in fact it seems to be CPU-bound, so I didn't bother with a RAM disk test as a control. Running it a second time immediately afterwards shows the same time, but won't the files already be cached from the previous pass?
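A quick way for me to sanity-check the caching question would be to time the same pass twice, back to back (sketch only; the directory is a placeholder for the benchmark tree):

import os
import time

def read_all(root):
    # One pass: open, read, and close every file under root.
    start = time.perf_counter()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            with open(os.path.join(dirpath, name), "rb") as f:
                f.read()
    return time.perf_counter() - start

root = r"D:\bench\compile-tree"    # placeholder path
cold = read_all(root)
warm = read_all(root)              # second pass should be served from cache
print(f"cold: {cold:.1f}s  warm: {warm:.1f}s")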

The "small files" test, now that I review my own chart, is similar for DAS and NAS so that is not adding too much through CIFS.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
John M. Długosz said:
The total size of the files in the "compile" benchmark is far smaller than the RAM. The DAS case also has to seek, etc., to open each file; in fact it seems to be CPU-bound, so I didn't bother with a RAM disk test as a control. Running it a second time immediately afterwards shows the same time, but won't the files already be cached from the previous pass?

Maybe. Maybe not. Don't assume that just because you have 1 GB of small files and 16 GB of RAM they will all be cached.

The "small files" test, now that I review my own chart, is similar for DAS and NAS so that is not adding too much through CIFS.

I'm not sure what the "small files" test is, but benchmarks aren't necessarily the best way to see what is or isn't going on; actual data and an actual workload are best, while observing pool latency, throughput, etc.

Small files have always been (and will always be) a performance killer for file servers. You are potentially asking for lots of seeks for just a few KB at a time. At just a few KB per request, it doesn't take much for 100 MB worth of files to suddenly take what appears to be an absurd time to load. The best fix is SSD, since there is almost no seek time at all: those millisecond seek times, multiplied across the disks and vdevs in your pool, become microseconds. A factor of 1000 improvement (or better) in latency is significant.
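Rough numbers for the seek-time difference (assumed typical figures, not measurements from any particular pool):

# Same small-file workload, spinning disk vs SSD (illustrative figures).
files = 10_000
hdd_seek = 0.005        # ~5 ms average seek/rotational latency on a HDD
ssd_seek = 0.000005     # ~5 us access time on an SSD, roughly 1000x lower
print(f"HDD: ~{files * hdd_seek:.0f} s of seeking")
print(f"SSD: ~{files * ssd_seek:.2f} s of seeking")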
 