Network File Server
Last updated 11 February 2003

Status

NFS is working great -- spello's ve and vh are automounted on sigillo, vh is automounted on cyberspace, and cyberspace's vc is automounted on sigillo. gubbio is currently down, so vm is offline. All machines are running nfs v3 in the kernel (avoid the userspace version).

Note that after installing NFS v3, I'm no longer able to mount cyberspace's root directory on gubbio. When I do exportfs -ra, I get an "invalid" error; when I change /etc/exports to export /home instead, it works fine. Note also that you may need to export precisely the directory you want to mount, so for instance exporting /mnt fails if you then try to mount /mnt/ve.

There does not appear to be a logfile, so maybe you should get one started?

There are some speed tests below, not very seriously pursued; see also hard drives and file systems.

To do

  • Use sshfs for occasional access, see instructions (local)
  • Optimize performance -- see notes below
  • Add 'proto=tcp' as a mount option (see NFS FAQ entry B10)
  • Verify that permission for file copying works as it should
  • Security should be pretty good, since you use hosts.allow and hosts.deny with IP numbers. Are those files themselves read-protected?
  • Try NIC bonding! See Documentation/networking/bonding.txt in the kernel tarball. It lets you use two ethernet cards as a single interface, assuming the switch they're plugged into supports it. Cheap way to get 200Mbps throughput, especially with a switch that only does gigabit on the uplink port. I wonder if I could get some more speed out of our system this way -- see the sketch below.
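
A minimal, untested sketch of what bonding would involve, going by bonding.txt -- the mode, monitoring interval, and address here are assumptions, not tested values:

    modprobe bonding mode=balance-rr miimon=100     # round-robin with link monitoring
    ifconfig bond0 128.97.184.95 netmask 255.255.255.0 up
    ifenslave bond0 eth0 eth1                       # enslave both ethernet cards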

Guides

Commands

rpcinfo -p              current status (rpc = Remote Procedure Call)
showmount               shows which machines are using the exports
showmount -e            shows which file systems have been exported
showmount -e chi        shows exports on a remote system
/var/lib/nfs/rmtab      records which clients have mounted this machine's exports
/var/log/daemon.log     shows error messages (if logged)
/etc/hosts.allow        enter hosts to export to
/etc/exports            enter resources to export
exportfs -ra            makes nfsd re-read /etc/exports
exportfs                lists exported resources
exportfs -u :/tv1       unexports a particular resource
vserver (?)
/etc/vserver/nfs.conf (?)
/var/lib/nfs/etab       list of exports and parameters

OSX

NFS mount requests from OSX clients come from ports that Linux considers insecure (1024 and above). To allow a connection, do either one of these:

  • add the parameter "insecure" to /etc/exports, as in

    128.97.185.116(rw,no_root_squash,async,insecure)

  • replace the file rpc.lockd in OSX with the one at http://nic.phys.ethz.ch/readme/82
The latter is the preferable solution, but it's not tested.
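
With "insecure" in place, the plain mount from OSX (without the -P flag described under 19 December 2003 below) should in principle be accepted -- untested, and the names are just this setup's:

    sudo mount -t nfs cyberspace:/vc /Users/Steen/vc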

2.6 changes

This note was added to the 2.6.0-test2 kernel:

NFS-utils
---------

In 2.4 and earlier kernels, the nfs server needed to know about any client that expected to be able to access files via NFS.  This information would be given to the kernel by "mountd" when the client mounted the filesystem, or by "exportfs" at system startup.  exportfs would take information about active clients from /var/lib/nfs/rmtab.

This approach is quite fragile as it depends on rmtab being correct which is not always easy, particularly when trying to implement fail-over.  Even when the system is working well, rmtab suffers from getting lots of old entries that never get removed.

With 2.6 we have the option of having the kernel tell mountd when it gets a request from an unknown host, and mountd can give appropriate export information to the kernel.  This removes the dependency on rmtab and means that the kernel only needs to know about currently active clients.

To enable this new functionality, you need to:

mount -t nfsd nfsd /proc/fs/nfs
before running exportfs or mountd.  It is recommended that all NFS services be protected from the internet-at-large by a firewall where that is possible.
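
To make that mount persistent across reboots, an /etc/fstab entry along these lines should work (untested; the mount point is the one the note above gives):

    nfsd  /proc/fs/nfs  nfsd  defaults  0  0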

Updated version -- or just install nfs-utils.


Update 21 February 2006

I often need to unmount a drive that has been exported, and nfs claims it's busy until nfs-kernel-daemon is stopped. What I want to do is unexport a particular drive while leaving the rest unchanged. Solution:

exportfs -u :/tv1

When you remount the drive, it's automatically reexported.
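
Spelled out, with an example client address from this network:

    exportfs -u 128.97.184.97:/tv1    # withdraw the export for that client
    umount /tv1                       # the drive is no longer busy
    # ... swap or check the drive ...
    mount /tv1                        # on remount it is re-exported automatically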

Update 16 April 2005

I'm trying to figure out how to export a file system to a different user than the originator -- a problem I encounter on the Chianti tv-recorder machine.  Here's where you can see all the exports:

cat /var/lib/nfs/etab

After extensive fiddling I discover this is not what NFS makes possible -- that is to say, you cannot map user "steen" on sigillo to user "tna" on chi. What you can do is this: if user steen on chi is uid 500 and gid 500, while user steen on sigillo is uid 1000 and gid 1000, you can map the uid and gid over in /etc/exports:

128.97.221.30(rw,all_squash,async,anonuid=1000,anongid=1000)

As far as I can tell, all_squash maps every remote uid and gid to the anonymous ones, which anonuid and anongid here set to 1000/1000 -- so every request from this client arrives as that user.
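
One way to check the mapping (hypothetical file name, and assuming the export is mounted on sigillo at /mnt/tv1):

    sigillo:~ $ touch /mnt/tv1/squashtest
    sigillo:~ $ ls -ln /mnt/tv1/squashtest    # owner should show as 1000 1000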

Update 8 November 2004

I noticed clitunno:/ssa was no longer mounting, giving the error "Connection refused".  Machines that had already mounted /ssa continued undisturbed.  Running "rpcinfo -p trevi" gave

rpcinfo: can't contact portmapper: RPC: Remote system error - Connection refused

The problem was portmap: ps aux | grep portmap showed it was running with the option "-i 127.0.0.1". I found /etc/default/portmap had this line:

# By default, listen only on the loopback interface
OPTIONS="-i 127.0.0.1"

I commented it out. It may be good security, but I can't use the machine that way.
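
For the record, the fix in compact form (the restart goes through the standard init script):

    # in /etc/default/portmap, comment out the restriction:
    # OPTIONS="-i 127.0.0.1"
    # then restart portmap so it listens on all interfaces:
    /etc/init.d/portmap restart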

Update 19 June 2004

Enable NFS over TCP in the kernel. Verify that it's present with

rpcinfo -p
Add tcp to the mount options, e.g.,

noauto,rw,rsize=8192,wsize=8192,hard,intr,tcp
Compare performance.
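
As an fstab line for this setup it might look like this (hypothetical mount point; untested):

    clitunno:/ssa  /mnt/ssa  nfs  noauto,rw,rsize=8192,wsize=8192,hard,intr,tcp  0  0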

Update 19 December 2003

To mount a Linux drive through NFS on OSX, use one of these methods:

  • In console, create /Users/Steen/vc as a mount point and issue
mount_nfs -P cyberspace:/vc vc
or
mount -o -P cyberspace:/vc vc
You can use mount_nfs or mount -o interchangeably; the -o just tells mount to use an option relevant to nfs. Note you need the -P option (use a reserved port) in either case, or the connection gets rejected as "request from insecure port" (see /var/log/syslog).

Note you need to be root to do this.

I don't know how to export an NFS volume in OSX, but it's not very relevant.
  • In the regular interface, follow this recipe.

  • Use NFSManager (shareware, €15). It installs fine (I put it under Applications), but I didn't figure out how to use it.

Installation history

NFS was installed in late May 2002; later I discovered it was in the kernel all along and just needed to be activated (details below).

Setting up the server

I used the instructions in Setting Up an NFS Server at http://nfs.sourceforge.net/nfs-howto/server.html. First step: configuring the server on cyberspace.

In SuSE, various entries in /etc/rc.config need to be set to enable NFS:

# start the file access monitoring daemon called "fam"
# KDE use this daemon to monitor directorys and files.
# It can reduce the network load, if your home directory is mounted via NFS.
START_FAM="no"

# KDE use the fam daemon
# (makes only sense on NFS mounted directorys)
KDE_USE_FAM="no"

# should the NFS server be started on this host? ("yes" or "no")
# (needs activated portmapper)
NFS_SERVER="yes"

# the kernel nfs-server supports multiple server threads
USE_KERNEL_NFSD_NUMBER="4"

# start portmap? ("yes" or "no")
# this is needed, if the NFS server is started or if NIS is used
# Caution! The portmapper will be started with no regard to
# START_PORTMAP if NFS_SERVER is set to "yes"!
START_PORTMAP="yes"

I verified that nfsd is running after rebooting. Note that there is a kernel-space and a user-space version of nfs; I'm still not confident which one I'm using and where exactly the loading needs to be requested. At the moment everything is working, so I'm leaving it alone.

Next, you need to define what will be accessible from other machines, in /etc/exports:

# Gubbio has root access
/ 128.97.184.97(rw,no_root_squash)

# Tim's machines have access to groeling
/home/groeling 128.97.184.150(rw) 128.97.184.151(rw)

# Tim and Blue have access to the student directory (could be differentiated)
/home/student 128.97.184.96(rw) 128.97.184.150(rw) 128.97.184.151(rw)

# We all have rw access to the DVD drive
/media/dvd 128.97.184.97(rw) 128.97.184.96(rw) 128.97.184.150(rw) 128.97.184.151(rw)

I then added the same machines to hosts.allow and blocked all others in hosts.deny. The only ways in are ssh and http -- except vsftp, which may not be very safe, but does provide very limited access.

On gubbio, I only gave cyberspace access to the giant drive:

/mnt/giant 128.97.184.95

Note that the giant drive is now permanently mounted at /mnt/giant rather than under steen, both on gubbio and on cyberspace. It's a resource that should be available to all users of cyberspace -- or is it? You could give away certain directories only. Anyway, for now this is how it is (15 June 2002).

Note some syntax peculiarities; ponder this example of /etc/exports from redhat:

    /mnt/export speedy.redhat.com(rw)
    /mnt/export speedy.redhat.com (rw)

The first line grants users from speedy.redhat.com read-write access and denies all other users. The second line grants users from speedy.redhat.com read-only access (the default) and allows the rest of the world read-write access.

Details on exporting:

Exporting to IP networks, DNS and NIS domains does not enable clients from these groups to access NFS immediately; rather, these sorts of exports are hints to mountd(8) to grant any mount requests from these clients. This is usually not a big problem, because any existing mounts are preserved in rmtab across reboots.

Information on the state of the system:

less /var/lib/nfs/rmtab      records which clients have mounted this machine's exports (on gubbio)
less /etc/rmtab              the same record, kept at /etc/rmtab on cyberspace

After changing /etc/exports, I killed rpc.mountd, rpc.portmap, and rpc.nfsd and restarted them, forcing them to read these new values. There is a more elegant way to achieve this result -- namely

exportfs -ra

This actually works on gubbio, but for some reason not on cyberspace.

I then gave the command

gubbio:/home/steen/mnt # mount cyberspace:/home cyberspace

and it connected! So let's try the icon. It works! Very elegant. It seems more robust than samba and handles security a bit differently, but otherwise it's much the same. You can add it to fstab and log on automatically.

All right, I got it working. It doesn't have lockd, which means you should really get it compiled into the kernel. I'm currently running the userspace NFS, not the kernelspace NFS, on cyberspace. And it seems I don't need to have NFS on gubbio. I killed nfsd and removed it from rc.config -- but note that you do need rpc.mountd and rpc.portmap. You should also get lockd for the userspace NFS.

In early June 2002 I reinstated nfs on gubbio, so that cyberspace can reach the giga drive for video encoding. Cyberspace runs at 1.2GHz and encodes dv files into divx at 3.6 frames per second, against gubbio's 2 or 2.5.

File access in NFS

Note that NFS is user-specific! Giant may be mounted for steen but invisible and inaccessible to root. I've had this happen on cyberspace: /home/steen/mnt/gubbio was mounted for steen, but not for root, and I had to log in as user steen to unmount it.

As of 5 June 2002, I had giant mounted on cyberspace (as it says in fstab) at

gubbio:/home/steen/mnt/giant /home/steen/mnt giant

-- that is to say, NFS allows you to mount an external drive as though it were local. It may be more logical to mount giant under /mnt and just link to it from /home/steen/mnt/giant, but for the moment this works fine.

If you do want to change it, you need to make the change in three places on each machine:

  1. cyberspace:/etc/fstab --> list the new location
  2. gubbio:/etc/exports --> ditto
  3. cyberspace:/home/steen/mnt --> create a symlink to the new location (see example below)
  4. gubbio:/home/steen/mnt --> ditto
  5. cyberspace steen desktop --> change the icon
  6. gubbio steen desktop --> ditto
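
For steps 3 and 4, the symlink would be something like (assuming the new location is /mnt/giant):

    ln -s /mnt/giant /home/steen/mnt/giant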

For the moment, it's OK to leave it under /home/steen, as I'm the only one using it. Note also that at the moment, you don't in fact have access from cyberspace to gubbio's main drive -- just the video drive. I'm amazed that you can mount this drive on both machines with little concern for which system it is physically on.

Tweaking

On 19 June 2002, I read in the NFS Howto at http://www.tldp.org/HOWTO/NFS-HOWTO/client.html#MOUNTOPTIONS that a remote volume should be mounted as rw,hard,intr -- I had previously mounted it as noauto,users,rw.

The mounting procedure tends to be iffy, but this time it went perfectly smoothly: On cyberspace, I unmounted vm, then changed fstab, did a mount -a, and then remounted it. I did the same for vc on gubbio. On gubbio, the cyberspace / and DVD remain on noauto,users,rw.

The program accessing a file on a NFS mounted file system will hang when the server crashes. The process cannot be interrupted or killed (except by a "sure kill") unless you also specify intr. When the NFS server is back online the program will continue undisturbed from where it was. We recommend using hard, intr on all NFS mounted file systems.

Note that I wasn't able to copy, move, or delete files on vc from gubbio until I changed the file permissions on gubbio:/mnt/vm and cyberspace:/mnt/vc to 777 (perhaps excessively generous?).

Copying is now moving at a wonderfully unexpected pace of 10MB/s or just below! Before we got the new link switch, I was clocking in at 850KB/s.

Speed tests:

First create a file 256MB large on the remote machine (16384 blocks of 16k):
time dd if=/dev/zero of=/mnt/vc/testfile bs=16k count=16384

Then read it with a block size of 16k:

gubbio:/home/steen # time dd if=/mnt/vc/testfile of=/dev/null bs=16k
16384+0 records in
16384+0 records out

real 0m28.753s
user 0m0.030s
sys 0m1.060s

Umount and remount both drives, and read it with a larger block size:

gubbio:/home/steen # time dd if=/mnt/vc/testfile of=/dev/null bs=1024k
256+0 records in
256+0 records out

real 0m30.654s
user 0m0.010s
sys 0m2.140s

The results were worse -- 256MB in 30.7s is about 8.4MB/s, against 8.9MB/s with 16k blocks -- and the results with the default values appear to be so good that it's not obviously worth fiddling with this.

Getting NFS v3 on cyberspace

NFS

On 24 June I made changes to /etc/fstab on gubbio and on cyberspace in order to enable NFS v3, which allows files larger than 2GB.

On cyberspace, /etc/fstab had this line:

gubbio:/mnt/vm /mnt/vm nfs rw,hard,intr 0 0

I started by unmounting /vm. Then I changed the options rw,hard,intr (which work perfectly) conservatively at first, adding only nfsvers=3. If this works well, you should also add rsize=8192,wsize=8192 to speed up transmission. Finally, you should experiment with sync and async and see how they affect transmission speeds.
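
So the line becomes, with the later addition sketched as a comment:

    gubbio:/mnt/vm /mnt/vm nfs rw,hard,intr,nfsvers=3 0 0
    # later, if all goes well:
    # gubbio:/mnt/vm /mnt/vm nfs rw,hard,intr,nfsvers=3,rsize=8192,wsize=8192 0 0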

After changing /etc/fstab on cyberspace, I just did a mount -a and rcnfs restart. The latter may not have been necessary; I got this response:

Remove Net File System (NFS) done
Importing Net File System (NFS) done

Then I moved to gubbio and unmounted /vc and /CyberNFS. On /etc/fstab, I have three lines:

cyberspace:/ /mnt/CyberNFS nfs noauto,user,rw 0 0
cyberspace:/mnt/vc /mnt/vc nfs rw,hard,intr 0 0
cyberspace:/media/dvd /dev/dvd nfs noauto,user,rw 0 0

I added nfsvers=3 to the /vc mount first to test things. I got this response:

mount: RPC: Unable to receive; errno = Connection refused

This is perhaps because the /vc mount is implicitly defined as auto -- but why wouldn't it mount automatically?

I then tried rcnfs restart and got this response:

Remove Net File System (NFS) done
Importing Net File System (NFS)mount: RPC: Unable to receive; errno =
Connection refused failed

This suggests there may be a permission problem that has cropped up on cyberspace. I try mount /mnt/vc, and indeed this is where the problem lies. It appears to be in fstab, as mount /mnt/CyberNFS works fine!

Bad news: when I remove nfsvers=3 from the /vc line, it mounts fine. On the other hand, /vm mounts fine on cyberspace with that parameter. It actually looks as if cyberspace has blocked NFS v3! Where is it doing this? Not in /etc/rc.config. I do a locate nfs and find these files:

/var/adm/fillup-templates/rc.config.nfsserv
/var/lib/nfs
/var/lib/nfs/devtab
/dev/nfsd
/etc/init.d/nfs
/etc/init.d/nfsserver
/etc/init.d/rc3.d/K09nfsserver
/etc/init.d/rc3.d/K11nfs
/etc/init.d/rc3.d/S11nfs
/etc/init.d/rc3.d/S13nfsserver
/etc/init.d/rc5.d/K09nfsserver
/etc/init.d/rc5.d/K11nfs
/etc/init.d/rc5.d/S11nfs
/etc/init.d/rc5.d/S13nfsserver

None of these files contain anything useful, as far as I can see. It is of course remotely possible that the cyberspace NFS server is not configured for version 3, though that seems implausible.

I'll now look on gubbio to see if the NFS client could possibly be blocking v3.

On gubbio, you have more files -- for instance, /usr/sbin/nfsstat. It shows some interesting results, separated into Server nfs v2 and Server nfs v3. Now, it's only Server nfs v3 that has values, suggesting that gubbio is already running v3 as a default. On the other hand, the stats also show that Client nfs v2 has values and Client nfs v3 does not!

So what does that mean? gubbio is a v3 server, but a v2 client. This suggests the problem lies squarely with cyberspace: it is a v2 client.

Since I couldn't find anywhere that defined v2 on cyberspace, I'll see if it needs a software update.

NFS software on cyberspace

The first thing I discover is that cyberspace has the userspace NFS server installed, version 2.2beta47-112. Gubbio is running the kernelspace NFS server and doesn't even have the userspace one installed. It's likely that cyberspace is running the userspace NFS server.

This may need to be updated. Here are some options:

knfsd - linux 2.2 nfs server at http://freshmeat.net/projects/knfsd/?topic_id=150

This is a much-improved Linux NFS server with support for NFSv3 as well as
NFSv2. NFSv4 is being worked on. These patches are considered stable and
are indeed shipping with most distributions. The stock Linux 2.2 NFS
server can't be used as a cross-platform file server.

Alan Cox says the NFS daemon was always a weak point and that 2.2 now has
a kernel version -- implying it's better to run NFS from the kernel.

There's also finally this very useful command for finding out which
version of things is running:

rpcinfo -p

This is what I needed all along. On gubbio I get this:

gubbio:/home/steen # rpcinfo -p
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100021    1   udp  32768  nlockmgr
    100021    3   udp  32768  nlockmgr
    100021    4   udp  32768  nlockmgr
    100024    1   udp    916  status
    100024    1   tcp    918  status
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100005    1   udp  32770  mountd
    100005    1   tcp  32768  mountd
    100005    2   udp  32770  mountd
    100005    2   tcp  32768  mountd
    100005    3   udp  32770  mountd
    100005    3   tcp  32768  mountd

As you can see, nfs is running both version 2 and 3, as are several of the other programs. On cyberspace I get this:

cyberspace:~ # rpcinfo -p
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
 545580417    1   udp    830  bwnfsd
 545580417    1   tcp    832  bwnfsd
    391002    2   tcp    853
    100005    1   udp    885  mountd
    100005    2   udp    885  mountd
    100005    1   tcp    888  mountd
    100005    2   tcp    888  mountd
    100003    2   udp   2049  nfs
    100003    2   tcp   2049  nfs
    100021    1   udp   1037  nlockmgr
    100021    3   udp   1037  nlockmgr
    100021    4   udp   1037  nlockmgr

and indeed, you can see nfs is only running version 2, as are portmapper and mountd. So what you want is the kernel-space version, or a software update... Here it's a hassle that cyberspace has the default SuSE installation, which is harder to mess with.

Incidentally, note that gubbio is running nfs udp only (versions 2 and 3), while cyberspace is running nfs udp and tcp (version 2 only).

Note that you can (and should) see the rpcinfo on the remote machine, by issuing rpcinfo -p cyberspace from gubbio. The listing is identical: cyberspace lacks nfs v3. Whew! That took some diagnosing. I found the latest hints on http://nfs.sourceforge.net/nfs-howto/troubleshooting.html

First I try a software update. There's an old security advisory about nfs-server 2.2beta47 dating from 1999, so it's possible that I installed an older version and not the latest from SuSE 7.3. But no: even 8.0 is still distributing 2.2beta47.

I may need to go into the kernel to fix this. In the meantime, I posted some questions on the SuSE newsgroup, so I'll wait for answers there first.

That's the progress for today -- I know pretty much exactly what the problem is -- cyberspace is not providing an NFS v3 daemon -- and the solution is either in a parameter file or I have to rebuild the kernel.

This is a minor matter and it may have a simple solution. However, it is tempting to switch cyberspace to Debian, or at least to begin to build a Debian partition. This should be delayed until the dissertation is done.

Useful information on the automount program:
http://www.redhat.com/docs/manuals/linux/RHL-7.3-Manual/ref-guide/s1-nfs-client-config.html

To show what resources are exported, do showmount -e

Notes from various places:

Use these options in fstab:

vers=3,rsize=8192,wsize=8192,intr

hard is default; vers=3 ensures you use version 3 instead of the apparent default version 2.

If you specify "default" under options, you get the default options:

rw, suid, dev, exec, auto, nouser, async

sync is safer but slower than async; see details at http://nfs.sourceforge.net/

Some other ideas:

rsize=32768,wsize=32768,intr,rw,nolock,nfsvers=3,suid,hard
exec,dev,suid,rw,bg,rsize=8192,wsize=8192,nfsvers=3,tcp,hard,intr

Once you've changed fstab on both machines, try ./nfs restart or rcnfs restart.

RH7.3 defaults to NFS version 3

SuSE's main NFS script appears to be /etc/rc.d/nfs -- it contains this tidbit for the /scripts directory:

# It's sometime usefull to mount NFS devices in
# background with an ampersand (&) and a sleep time of
# two or more seconds, e.g:
#
# mount -at nfs &
# sleep 2
#
# Note: Some people importing the /usr partition.
# Therefore we do _NOT_ use an ampersand!
#
mount -at nfs
rc_status
sleep 1

Check the current status of nfs like this:

gubbio: #rcnfs status
Checking for mounted nfs shares (from /etc/fstab): running

usage: start|stop|status|reload|force-reload|restart

The 2.2.19 kernel integrates complete support for NFS V2 and V3 over UDP.

Build Kernel with NFS support, including NFS Version 3 server support

NFS v3 can handle files >2GB. You can use it via "mount ... -o mountvers=3".

Mount options for nfs (from man mount)

Instead of a textual option string, parsed by the kernel, the nfs file system expects a binary argument of type struct nfs_mount_data. The program mount itself parses the following options of the form `tag=value', and puts them in the structure mentioned: rsize=n, wsize=n, timeo=n, retrans=n, acregmin=n, acregmax=n, acdirmin=n, acdirmax=n, actimeo=n, retry=n, port=n, mountport=n, mounthost=name, mountprog=n, mountvers=n, nfsprog=n, nfsvers=n, namlen=n. The option addr=n is accepted but ignored. Also the following Boolean options, possibly preceded by no, are recognized: bg, fg, soft, hard, intr, posix, cto, ac, tcp, udp, lock. For details, see nfs(5).

Especially useful options include

rsize=8192,wsize=8192

This will make your nfs connection much faster than with the default buffer size of 1024. (NFSv2 does not work with larger values of rsize and wsize.)

hard
    The program accessing a file on a NFS mounted file system will hang when the server crashes. The process cannot be interrupted or killed unless you also specify intr. When the NFS server is back online the program will continue undisturbed from where it was. This is probably what you want.

soft
    This option allows the kernel to time out if the nfs server is not responding for some time. The time can be specified with timeo=time. This option might be useful if your nfs server sometimes doesn't respond or will be rebooted while some process tries to get a file from the server. Usually it just causes lots of trouble.

nolock
    Do not use locking. Do not start lockd.

Here is an example from an /etc/fstab file from an NFS mount.

server:/usr/local/pub /pub nfs rsize=8192,wsize=8192,timeo=14,intr

Options:

rsize=n
    The number of bytes NFS uses when reading files from an NFS server. The default value is dependent on the kernel, currently 1024 bytes. (However, throughput is improved greatly by asking for rsize=8192.)

wsize=n
    The number of bytes NFS uses when writing files to an NFS server. The default value is dependent on the kernel, currently 1024 bytes. (However, throughput is improved greatly by asking for wsize=8192.)

hard
    If an NFS file operation has a major timeout then report "server not responding" on the console and continue retrying indefinitely. This is the default.

intr
    If an NFS file operation has a major timeout and it is hard mounted, then allow signals to interrupt the file operation and cause it to return EINTR to the calling program. The default is to not allow file operations to be interrupted.

Example of nfs v3 slowing down transfer because of no cache use

I have a NetApp NFS server, and did an interesting test with Linux RedHat 6.2 + 2.2.16 kernel + linux-2.2.16-nfsv3-0.22.0.dif patch

$ mount -o mountvers=3 192.168.1.90:/vol/vol0 /mnt
$ time perl -e 'for ( 1 .. 5000 ) { open F, "</mnt/ttt"; close F; }'   # try this timing script!
0.07user 0.25system 0:10.86elapsed 2%CPU (0avgtext+0avgdata
0maxresident)k
0inputs+0outputs (251major+46minor)pagefaults 0swaps

$ mount -o mountvers=2 192.168.1.90:/vol/vol0/s1/etc /mnt
$ time perl -e 'for ( 1 .. 5000 ) { open F, "</mnt/ttt"; close F; }'
0.05user 0.06system 0:00.10elapsed 103%CPU (0avgtext+0avgdata
0maxresident)k
0inputs+0outputs (251major+46minor)pagefaults 0swaps

It shows that the first case (NFSv3) is 100 times slower than the second case (NFSv2).

nfsstat shows 10000 accesses in the first case and just 4 accesses in the second case.

This leads me to the conclusion that (at least in my config) NFSv3 does not use caching.

From http://nfs.sourceforge.net/

1. Q. What are the primary differences between NFS Versions 2 and 3?
A. From the system point of view, the primary differences are these:

Version 2 limits the client to 2 GB file size (32 bit offset) support; Version 3 supports larger files (up to 64 bit offsets), depending on the server

The Version 2 protocol limits the maximum data transfer size to 8K (8192 bytes); the Version 3 protocol will allow support of up to 64K. This maximum value is set to 8K on currently released Linux kernels, and it has been set to 32K in the latest experimental patches

The Version 2 protocol requires that each write request be posted to the server's disk before the server replies to the client; Version 3 requires only that the server returns the status of the data involved in the write request, and also provides a commit request which will not return until all data specified in the request has been posted to the server's disk

NFS resolved

On cyberspace, I found the kernel-source-2.4.16.SuSE-24.i386.rpm, put it in /usr/src/packages/RPMS/i386 and issued rpm -Uvh ke*. This installed the expected files in /usr/src/linux. The most recent files -- including the .config file -- appear to be from 18 December 2001.

I open make xconfig and check out the existing installation. Under File Systems | Network File Systems I find that NFS file system support is yes, Provide NFSv3 client support is yes, Root file system on NFS is no, NFS server support is activated as a module, and Activate NFSv3 server support is checked yes. Incidentally, the SMB file system is also a module.

So in brief this means you have NFSv3 on cyberspace, and you don't need to rebuild the kernel. In fact, as far as I can see you don't need nfs-server 2.2beta47 and I'm tempted to uninstall it. I quit xconfig without saving.

I entered kpackage and uninstalled nfs-server; there were no dependencies.

I then discovered that Yast2 has an nfs server configuration tool, but I don't appear to have it. I found out it is a yast2 module called lan_nfs_server and that it is part of the yast2-config-network rpm, such as the yast2-config-network-2.3.21-0.noarch RPM.

Now, I have 2.4.19-6, but I take it the lan_nfs_server configuration tool is only in the Professional package. Let me try to activate the kernel-space nfs server module on my own.

I then unmounted /vm on cyberspace and /vc on gubbio; nothing is mounted through nfs. I verified with rpcinfo -p that everything was running (the nlockmgr stopped when I had unmounted everything, but nfs, portmapper, bwnfsd, and mountd were still running).

I then issued rcnfs stop and got Remove Net File System (NFS) done. This made no difference to what shows up on rpcinfo -p -- it just removed the mounts I guess (there weren't any).

Verify your kernel supports nfs:

cyberspace:~ # cat /proc/filesystems
nodev rootfs
nodev bdev
nodev proc
nodev sockfs
nodev tmpfs
nodev shm
nodev pipefs
ext2
minix
msdos
vfat
iso9660
nodev nfs
nodev devpts
nodev usbdevfs

which is fine.

I found instructions for shutting down the nfs server here -- http://www.tldp.org/HOWTO/PLIP-Install-HOWTO-11.html. "Note that /etc/rc.d/init.d/ is /sbin/init.d/ on SuSE Linux systems."

However, I did a locate portmap and found /etc/init.d/portmap. I issued

/etc/init.d/portmap stop

and got "Shutting down RPC portmap daemon done" --yeah! When I now do rpcinfo -p I get "rpcinfo : can't contact portmapper: RPC: Remote system error - Connection refused." I also issue

/etc/init.d/nfs stop

but this just gives me the same as rcnfs stop. I take it I could also do rcportmap stop. These two appear to be the ones controlling the whole system.

Now, can I start this again -- after removing the userspace nfs server -- and get the kernel module activated? I issued rcportmap start and got "Starting RPC portmap daemon done", then rcnfs start, and then rpcinfo -p:

   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100021    1   udp   1038  nlockmgr
    100021    3   udp   1038  nlockmgr
    100021    4   udp   1038  nlockmgr

So I have neither mountd nor nfs -- this is just the client. I try lsmod nfs and get this ream:

cyberspace:~ # lsmod nfs
Module Size Used by
af_packet 11856 0 (autoclean)
abi-ibcs 6448 0 (autoclean) (unused)
abi-svr4 82944 0 (autoclean) [abi-ibcs]
lcall7 2240 0 (autoclean) [abi-ibcs]
abi-util 1728 0 (autoclean) [abi-svr4 lcall7]
snd-pcm-oss 19328 0 (autoclean)
snd-pcm-plugin 14640 0 (autoclean) [snd-pcm-oss]
snd-mixer-oss 5344 0 (autoclean) [snd-pcm-oss]
NVdriver 946672 0 (autoclean)
snd-card-cs4236 5200 0
isa-pnp 27408 0 [snd-card-cs4236]
snd-cs4236 19536 0 [snd-card-cs4236]
snd-opl3 4464 0 [snd-card-cs4236]
snd-hwdep 3280 0 [snd-opl3]
snd-mpu401-uart 2288 0 [snd-card-cs4236]
snd-rawmidi 9568 0 [snd-mpu401-uart]
snd-seq-device 3744 0 [snd-rawmidi]
snd-cs4231 17808 0 [snd-card-cs4236 snd-cs4236]
snd-pcm 29984 0 [snd-pcm-oss snd-pcm-plugin snd-cs4231]
snd-timer 8976 0 [snd-opl3 snd-cs4231 snd-pcm]
snd-mixer 26912 0 [snd-mixer-oss snd-cs4236 snd-cs4231]
snd 32592 1 [snd-pcm-oss snd-pcm-plugin
snd-mixer-oss snd-card-cs4236 snd-cs4236 snd-opl3 snd-hwdep
snd-mpu401-uart snd-rawmidi snd-seq-device snd-cs4231 snd-pcm snd-timer
snd-mixer]
soundcore 3248 3 [snd]
ipv6 124768 -1 (autoclean)
ip_vs 51200 0 (autoclean)
evdev 4544 0 (unused)
input 3072 0 [evdev]
uhci 23424 0 (unused)
usbcore 48000 1 [uhci]
e100 51872 1 (autoclean)
iptable_nat 12464 0 (autoclean) (unused)
ip_conntrack 12720 1 (autoclean) [iptable_nat]
iptable_filter 1728 0 (autoclean) (unused)
ip_tables 10336 4 [iptable_nat iptable_filter]
lvm-mod 47776 3 (autoclean)

Lots of stuff I don't want, and no nfs server. So what do I do? I verify that I now have no mountd -- on gubbio I have /usr/sbin/rpc.mountd.

Maybe if I kill the inetd daemon and restart it, with

kill -HUP `cat /var/run/inetd.pid`

Will this load the modules? Interestingly, ps -ax shows that these are running:

3622 ? S 0:00 /usr/sbin/rpc.ugidd
3676 ? S 0:00 /usr/sbin/rpc.mountd
3679 ? S 13:49 /usr/sbin/rpc.nfsd
9223 ? S 0:00 /sbin/portmap

So what's going on? I kill the /usr/sbin daemons; their files have been deleted. I take the machine to init 2 and back to init 3 -- only portmap starts. Failure for "do it yourself" strategy -- which just makes me curse SuSE's complex control scripts.

Still, I submit: I look for lan_nfs_server and find the following:

7 files are missing:
lan_nfs/lan_nfs.en_GB.po
lan_nfs_client/lan_nfs_client.en_GB.po
lan_nfs_exports/lan_nfs_exports.en_GB.po
lan_nfs_fstab/lan_nfs_fstab.en_GB.po
lan_nfs_server/lan_nfs_server.en_GB.po
lan_sendmail/lan_sendmail.en_GB.po
lan_ypclient/lan_ypclient.en_GB.po

in http://rpmfind.net/linux/SuSE-Linux/noarch/i18n/yast2-7.0/status-y2t_lan.en_GB, which confirms my suspicion that the nfs server configuration is left out of SuSE 7.3 personal. This file from SuSE 7.2 has it -- http://www-isia.cma.fr/fr/presentation/materiel/pc_linux/base_logiciels/base_logiciels_moinette/yast2-config-network-2.3.21-0.noarch.html

I do a search for yast2-config-network suse 7.3 and find that in http://www.suse.de/en/products/suse_linux/i386/packages_professional/yast2-config-network.html, it is not present! Under http://garbo.uwasa.fi/pub/linux/SuSE/7.3/suse/yast1/ I find the script in its own rpm, at http://garbo.uwasa.fi/pub/linux/SuSE/7.3/suse/yast1/yast2-config-nfs-server.rpm -- I just get it and install it on cyberspace. I first get "failed dependencies: yast2-trans-nfs-server" and get that from the same location. They both install without a grumble.

I now start yast2 and as I suspected have a new icon in network configuration for nfs server. It tells me I need nfs-utils, which I get from http://garbo.uwasa.fi/pub/linux/SuSE/7.3/suse/n1/ -- it installs fine and says "updating rc.config".

What this makes me suspect is that this is where nfsd and mountd are -- and that I now don't need yast2. I try just starting nfs. It adds "status" but not mountd and nfs. I let yast2 start the service -- and it works! I've now got nfs v3 on cyberspace. Mission accomplished.

I edit gubbio:/etc/fstab, adding vers=3 to the options column. Actually, I removed it again -- this wasn't the problem. The default will be vers=3 and hard, but I added rsize=8192,wsize=8192 on both machines. I should be done now and can go home. I'll just test it.

So I added the 8k block sizes to /etc/fstab on both systems, and left out hard and vers=3. When I run nfsstat, I confirm that v3 is in fact being used. The speed is no faster than it was -- even a bit slower, hovering around 9.6MB/s. However, it may be more secure. You can experiment with async and sync at some point, but on the whole this is great. I confirmed that the file size limit is gone -- I can now move the huge video files between the systems.

The moral of this installation: I solved the problem by looking into every nook and cranny:

  • diagnosing it: nfs is serving v2 from cyberspace, v3 from gubbio
  • checking the kernel: it has nfs server v3 enabled
  • uninstalling the user-space nfs server and losing the nfs and mountd services
  • tracking down Yast's nfs server configuration script
  • that guided me to install nfs-utils, which is what I mainly needed
  • I still had to use yast to start nfs -- don't know why, but it works

So in the end SuSE came through, though it was frankly more trouble than just doing it on your own. You have to find out so much about SuSE that it would have been much better to just spend that time learning about Linux. Next time you get Debian, which is really what I wanted from the start.

For the moment, however, you're happy: I made it work, and I can now move my huge files around with ease. It is 12:45am and I can still get some sleep.

Note that /etc/fstab has to have user or users to allow steen to mount /vc, as below:

cyberspace:/ /mnt/CyberNFS nfs noauto,user,rw 0 0
cyberspace:/mnt/vc /mnt/vc nfs user,rw,rsize=8192,wsize=8192,hard,intr 0 0
cyberspace:/media/dvd /dev/dvd nfs noauto,user,rw 0 0

Full list of Remote Procedure Call utilities on Cyberspace

/sbin/rpc.lockd
/sbin/rpc.statd
/usr/bin/rpcclient
/usr/bin/rpcgen
/usr/sbin/rpc.mountd
/usr/sbin/rpc.nfsd
/usr/sbin/rpcinfo

See also rc calls (you could put this in commands.html)

/sbin/rcdhclient
/sbin/rcgpm
/sbin/rchotplug
/sbin/rcportmap
/sbin/rcsyslog
/usr/bin/rcp
/usr/bin/rcs
/usr/bin/rcs-checkin
/usr/bin/rcs2log
/usr/bin/rcsclean
/usr/bin/rcsdiff
/usr/bin/rcsmerge
/usr/bin/rcvAppleSingle
/usr/lib/YaST2/bin/rc_create_data
/usr/sbin/rcacct
/usr/sbin/rcalsasound
/usr/sbin/rcapache
/usr/sbin/rcapmd
/usr/sbin/rcat
/usr/sbin/rcautofs
/usr/sbin/rcboot.setup
/usr/sbin/rccron
/usr/sbin/rcdummy
/usr/sbin/rcfam
/usr/sbin/rcfbset
/usr/sbin/rcinetd
/usr/sbin/rcipvsadm
/usr/sbin/rckbd
/usr/sbin/rcksysguardd
/usr/sbin/rclisa
/usr/sbin/rclpd
/usr/sbin/rcnetwork
/usr/sbin/rcnfs
/usr/sbin/rcnfsserver
/usr/sbin/rcnscd
/usr/sbin/rcpersonal-firewall
/usr/sbin/rcpowerfail
/usr/sbin/rcrandom
/usr/sbin/rcraw
/usr/sbin/rcroute
/usr/sbin/rcrwhod
/usr/sbin/rcsamba
/usr/sbin/rcsane
/usr/sbin/rcsendmail
/usr/sbin/rcsingle
/usr/sbin/rcsmb
/usr/sbin/rcsmbfs
/usr/sbin/rcsshd
/usr/sbin/rcxdm
/usr/sbin/rcxfs
/usr/sbin/rcxinetd
/usr/sbin/rcxntpd
/usr/sbin/rcypbind

Permissions
28 June 2002

I occasionally get "no permission" on NFS mounts after having changed the parameters. It may be that the source of the problem is that old values linger in the system. For instance, long after I had defined the 160GB drive as /vm, /var/lib/nfs/rmtab showed it exported to cyberspace as giant!

On cyberspace I now get 128.97.184.97:/mnt/vc:0x00000001 and on gubbio similarly I get cyberspace.ucla.edu:/mnt/vm:0x00000001. So it is quite possible that the old value in this file prevented the reconnection.
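
An untested idea for flushing such a stale entry: stop the server, prune the offending line from rmtab, and start it again --

    rcnfsserver stop
    grep -v giant /var/lib/nfs/rmtab > /tmp/rmtab
    mv /tmp/rmtab /var/lib/nfs/rmtab
    rcnfsserver start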
