Network File Server Status

NFS is working great -- spello's ve and vh are automounted on sigillo, vh is automounted on cyberspace, and cyberspace's vc is automounted on sigillo. gubbio is currently down, so vm is offline. All machines are running NFS v3 in the kernel (avoid the userspace version).

Note that after installing NFS v3, I'm no longer able to mount cyberspace's root directory on gubbio. When I do exportfs -ra, I get "invalid"; when I change /etc/exports to /home instead, it works fine. Note also that you may need to export precisely the directory you want to mount: for instance, exporting /mnt fails if you then try to mount /mnt/ve. There does not appear to be a logfile, so maybe you should get one started? There are some speed tests below, not very seriously pursued; see also hard drives and file systems.

To do
Guides
OSX NFS mount requests from OSX clients use ports that Linux
considers insecure (above 1024). To allow a connection, do one of the
following:
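The list of fixes did not survive in these notes; the two standard remedies (a sketch -- the export path and network are assumptions) are to relax the server or to force a reserved port from the client:

```shell
# On the Linux server: add "insecure" to the export in /etc/exports,
# allowing client requests from ports above 1024:
#   /mnt/vc  192.168.1.0/24(rw,insecure)
# then re-export:
exportfs -ra

# Or on the OSX client: force a reserved (privileged) port with -P:
mount_nfs -P cyberspace:/mnt/vc /mnt/vc
```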
2.6 changes This note was added to the 2.6.0-test2 kernel:
I often need to unmount a drive that has been exported, and
nfs claims it's busy until nfs-kernel-daemon is stopped. What I want to
do is unexport a particular drive while leaving the rest unchanged.
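With the kernel-space server's exportfs this should be possible (my assumption, not something the notes confirm for this setup) -- you can unexport a single entry while the rest stay live:

```shell
# Unexport just one directory for one host; other exports are untouched:
exportfs -u cyberspace:/mnt/vm
# Re-export everything in /etc/exports when you're done:
exportfs -ra
```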
Solution: When you remount the drive, it's automatically reexported.

Update 16 April 2005

I'm trying to figure out how to export a file system to a different user than the originator -- a problem I encounter on the Chianti tv-recorder machine. Here's where you can see all the exports: cat /var/lib/nfs/etab. After extensive fiddling I discover this is not what NFS makes possible -- that is to say, you cannot map user "steen" on sigillo to user "tna" on chi. What you can do is this: if user steen on chi is uid 500 and gid 500, while user steen on sigillo is uid 1000 and gid 1000, you can map the uid and gid over in /etc/exports: 128.97.221.30(rw,all_squash,async,anonuid=1000,anongid=1000). I'm still not sure what all_squash does.

Update 8 November 2004

I noticed clitunno:/ssa was no longer mounting, giving the
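For the record (general exports(5) behavior, not from the original notes): all_squash maps every incoming uid and gid -- not just root's -- to the anonymous ones, so combined with anonuid/anongid every access from that client arrives as uid 1000/gid 1000. A sketch of the full /etc/exports line (the exported path is hypothetical):

```shell
# /etc/exports on sigillo -- the path /mnt/video is an assumption
# all_squash: map ALL client uids/gids to anonuid/anongid, so files
# created from chi end up owned by uid 1000 (steen) on sigillo.
#   /mnt/video  128.97.221.30(rw,async,all_squash,anonuid=1000,anongid=1000)
exportfs -ra   # re-export after editing
```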
error "Connection refused". Machines that had already mounted /ssa continued undisturbed. Running "rpcinfo -p trevi" gave: rpcinfo: can't contact portmapper: RPC: Remote system error - Connection refused. The problem was portmap, and ps aux | grep portmap showed it was running with the option "-i 127.0.0.1". I found /etc/default/portmap had this line: # By default, listen only on the loopback interface. I commented it out. It may be good security, but I can't use the machine.

Update 19 June 2004

Enable NFS over TCP in the kernel. Verify that it's present
with rpcinfo -p. Add tcp to the mount options, e.g., noauto,rw,rsize=8k,wsize=8k,hard,intr,tcp. Compare performance.

Update 19 December 2003

To mount a Linux drive through NFS on OSX, use one of these
methods:
mount_nfs -P cyberspace:/vc vc
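The other method did not survive in these notes; the usual Terminal equivalent on OSX (an assumption, using the standard resvport option, which has the same effect as -P) would be:

```shell
# Make a mount point and mount over a reserved port:
mkdir -p /Volumes/vc
mount -t nfs -o resvport cyberspace:/vc /Volumes/vc
```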
Installation history NFS was installed in late May 2002; later I discovered it was
in the kernel all along and just needed to be activated (details
below). Setting up the server I used the instructions in Setting Up an NFS Server at http://nfs.sourceforge.net/nfs-howto/server.html. First step: configuring the server on cyberspace. In SuSE, various entries in /etc/rc.config need to be set to enable NFS:
I verified that nfsd is running after rebooting. Note that there is a kernel-space and a user-space version of nfs; I'm still not confident which one I'm using and where exactly the loading needs to be requested. At the moment everything is working, so I'm leaving it alone. Next, you need to define what will be accessible from other machines, in /etc/exports:
I then added the same machines to hosts.allow and blocked all others from hosts.deny. The only way in is ssh and http -- except vsftp, which may not be very safe, but does provide very limited access. On gubbio, I only gave cyberspace access to the giant drive:
Note that the giant drive is now permanently mounted at /mnt/giant rather than under steen, both on gubbio and on cyberspace. It's a resource that should be available to all users of cyberspace -- or is it? You could give away certain directories only. Anyway, for now this is how it is (15 June 2002). Note some syntax peculiarities; ponder this example of /etc/exports from Red Hat:

/mnt/export speedy.redhat.com(rw)
/mnt/export speedy.redhat.com (rw)

The first line grants users from speedy.redhat.com read-write access and denies all other users. The second line -- note the space before the parenthesis -- grants users from speedy.redhat.com read-only access (the default) and allows the rest of the world read-write access. Details on exporting:
Information on the state of the system:
After changing /etc/exports, I killed rpc.mountd, rpc.portmap, and rpc.nfsd and restarted them, forcing them to read these new values. There is a more elegant way to achieve this result -- namely
This actually works on gubbio, but for some reason not on cyberspace. I then gave the command
and it connected! So let's try the icon. It works! Very elegant. It seems more robust than samba and handles security a bit differently, but otherwise it's much the same. You can add it to fstab and log on automatically.

All right, I got it working. It doesn't have lockd, which means you should really get it compiled into the kernel. I'm currently running the userspace NFS, not the kernelspace NFS, on cyberspace. And it seems I don't need to have NFS on gubbio. I killed nfsd and removed it from rc.config -- but note that you do need rpc.mountd and rpc.portmap. You should also get lockd for the userspace NFS. In early June 2002 I reinstated nfs on gubbio, so that cyberspace can reach the giga drive for video encoding. Cyberspace runs at 1.2GHz and encodes dv files into divx at 3.6 frames per second, against gubbio's 2 or 2.5.

File access in NFS

Note that NFS is user-specific! Giant may be mounted for steen but invisible and inaccessible to root. I've had this happen on cyberspace: /home/steen/mnt/gubbio was mounted for steen, but not for root, and I had to log in as user steen to unmount it. As of 5 June 2002, I had giant mounted on cyberspace (as it says in fstab) at
 -- that is to say, NFS allows you to mount an external drive as though it were local. It may be more logical to mount giant under /mnt and just link to it from /home/steen/mnt/giant, but for the moment this works fine. If you do want to change it, you need to make the change in three places on both machines:
For the moment, it's OK to leave it under /home/steen, as I'm the only one using it. Note also that at the moment, you don't in fact have access from cyberspace to gubbio's main drive -- just the video drive. I'm amazed that you can mount this drive on both machines with little concern for which system it is physically located on.

Tweaking

On 19 June 2002, I read in the NFS Howto at http://www.tldp.org/HOWTO/NFS-HOWTO/client.html#MOUNTOPTIONS that a remote volume should be mounted as rw,hard,intr -- I had previously mounted it as noauto,users,rw. The mounting procedure tends to be iffy, but this time it went perfectly smoothly: on cyberspace, I unmounted vm, then changed fstab, did a mount -a, and then remounted it. I did the same for vc on gubbio. On gubbio, the cyberspace / and DVD remain on noauto,users,rw. From the Howto: "The program accessing a file on an NFS mounted file system will hang when the server crashes. The process cannot be interrupted or killed (except by a 'sure kill') unless you also specify intr. When the NFS server is back online the program will continue undisturbed from where it was. We recommend using hard, intr on all NFS mounted file systems."

Note that I wasn't able to copy, move, or delete files on vc from gubbio until I changed the file permissions on gubbio:/mnt/vm and cyberspace:/mnt/vc to 777 (perhaps excessively generous?). Copying is now moving at a wonderfully unexpected pace of 10MB/s or just below! Before we got the new link switch, I was clocking in at 850KB/s.

Speed tests: First create a file about 256k large on the remote machine. Then read it with a block size of 16k:
Umount and remount both drives, and read it with a larger block size:
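The actual commands did not survive here; a sketch of the kind of test described (the path is an assumption -- in practice you would point it at the NFS mount):

```shell
# Create a ~256k file (16 blocks of 16k):
dd if=/dev/zero of=/tmp/nfstest bs=16k count=16

# Read it back with a 16k block size, timing the read:
time dd if=/tmp/nfstest of=/dev/null bs=16k

# After remounting with a larger rsize, repeat with a bigger block size:
time dd if=/tmp/nfstest of=/dev/null bs=32k
```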
The results were worse, and the results with the default values appear to be so good that it's not obviously worth fiddling with this.

Getting NFS v3 on cyberspace

On 24 June I made changes to /etc/fstab on gubbio and on cyberspace in order to enable NFS v3, which allows files larger than 2GB. On cyberspace, /etc/fstab had this line:
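The line itself was lost from these notes; a reconstruction from the details given elsewhere on this page (mount point and options assumed):

```shell
# /etc/fstab on cyberspace -- reconstruction, not the original line:
#   gubbio:/mnt/vm  /mnt/vm  nfs  rw,hard,intr  0 0
# After adding v3 support the options column becomes:
#   rw,hard,intr,nfsvers=3
```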
I started by unmounting /vm. Then I changed the options rw,hard,intr (which work perfectly) conservatively at first, adding only nfsvers=3. If this works well, you should add rsize=8192,wsize=8192 also, to speed up transmission. Finally, you should experiment with synch and asynch and see how it affects transmission speeds. After changing /etc/fstab on cyberspace, I just did a mount -a and rcnfs restart. The latter may not have been necessary; I got this receipt:
Then I moved to gubbio and unmounted /vc and /CyberNFS. In /etc/fstab, I have three lines:
I added nfsvers=3 to the /vc mount first to test things. I got the receipt,
This is perhaps because the /vc mount is implicitly defined as auto -- but why wouldn't it mount automatically? I then tried rcnfs restart and got this receipt:
This suggests there may be a permission problem that has cropped up on cyberspace. I try mount /mnt/vc and indeed this is the problem. It appears to be in fstab, as mount /mnt/CyberNFS works fine! Bad news: when I remove nfsvers=3 from the /vc line, it mounts fine. On the other hand, /vm mounts fine on cyberspace with that parameter. It actually looks as if cyberspace has blocked NFS v3! Where is it doing this? Not in /etc/rc.config. I do a locate nfs and find these files:
None of these files contain anything useful, as far as I can see. It is of course remotely possible that the cyberspace NFS server is not configured for version 3, though that seems implausible. I'll now look on gubbio to see if the NFS client could possibly be blocking v3.

On gubbio, you have more files -- for instance, /usr/sbin/nfsstat. It shows some interesting results, separated by Server nfs v2 and Server nfs v3. Now, it's only Server nfs v3 that has values, suggesting that gubbio is already running v3 as a default. On the other hand, the stats also show that Client nfs v2 has values and Client nfs v3 does not! So what does that mean? gubbio is a v3 server, but a v2 client. This suggests the problem lies squarely with cyberspace: it is a v2 client. Since I couldn't find anywhere that defined v2 on cyberspace, I'll see if it needs a software update.

NFS software on cyberspace

The first thing I discover is that cyberspace has the userspace NFS server installed, version 2.2beta47-112. Gubbio is running the kernelspace NFS server and doesn't even have the userspace one installed. It's likely that cyberspace is running the userspace NFS server. This may need to be updated. Here are some options: knfsd - linux 2.2 nfs server at http://freshmeat.net/projects/knfsd/?topic_id=150. This is a much-improved Linux NFS server with support for
NFSv3 as well as NFSv2. Alan Cox says the NFS daemon was always a weak point and that 2.2 now has a much improved one. There's also, finally, this very useful command for finding out which RPC services and versions are running: rpcinfo -p. This is what I needed all along. On gubbio I get this:
As you can see, nfs is running both version 2 and 3, as are several of the other programs. On cyberspace I get this:
and indeed, you can see nfs is only running version 2, as are portmapper and mount. So what you want is the kernel-space version, or a software update... Here it's a hassle that cyberspace has the default SuSE installation and is harder to mess with. Incidentally, note that gubbio is running nfs udp only (versions 2 and 3), while cyberspace is running nfs udp and tcp (version 2 only). Note that you can (and should) see the rpcinfo on the remote machine, by issuing rpcinfo -p cyberspace from gubbio. The listing is identical: cyberspace lacks nfs v3. Whew! That took some diagnosing. I found the latest hints on http://nfs.sourceforge.net/nfs-howto/troubleshooting.html

First I try a software update. There's an old security advisory about nfs-server 2.2beta47 dating from 1999, so it's possible that I installed an older version and not the latest from SuSE 7.3. But no: even 8.0 is still distributing 2.2beta47. I may need to go into the kernel to fix this. In the meantime, I posted some questions on the SuSE newsgroup, so I'll wait for answers there first.

That's the progress for today -- I know pretty much exactly what the problem is -- cyberspace is not providing an NFS v3 daemon -- and the solution is either in a parameter file or I have to rebuild the kernel. This is a minor matter and it may have a simple solution. However, it is tempting to switch cyberspace to Debian, or at least to begin to build a Debian partition. This should be delayed until the dissertation is done.

Useful information on the automount program: To show what resources are exported, do showmount -e

Notes from various places: Use these options in fstab:
hard is default; vers=3 ensures you use version 3 instead of the apparent default version 2. If you specify "default" under options, you get the default options:
sync is more secure but slower; see details at http://nfs.sourceforge.net/ Some other ideas:
Once you've changed fstab on both machines, try ./nfs restart or rcnfs restart.

RH7.3 defaults to NFS version 3. SuSE's main NFS script appears to be /etc/rc.d/nfs -- it contains this tidbit for the /scripts directory:
Check the current status of nfs like this:
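The status command itself was lost from these notes; given the rcnfs script just mentioned, it is presumably (my assumption) the status action of the same script:

```shell
rcnfs status          # or equivalently: /etc/rc.d/nfs status
```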
The 2.2.19 kernel integrates complete support for NFS V2 and V3 over UDP. Build the kernel with NFS support, including NFS Version 3 server support. NFS v3 can handle files >2GB. You can use it via "mount ... -o mountvers=3".

Mount options for nfs (from man mount): Instead of a textual option string, parsed by the kernel, the nfs file system expects a binary argument of type struct nfs_mount_data. The program mount itself parses the following options of the form `tag=value', and puts them in the structure mentioned: rsize=n, wsize=n, timeo=n, retrans=n, acregmin=n, acregmax=n, acdirmin=n, acdirmax=n, actimeo=n, retry=n, port=n, mountport=n, mounthost=name, mountprog=n, mountvers=n, nfsprog=n, nfsvers=n, namlen=n. The option addr=n is accepted but ignored. Also the following Boolean options, possibly preceded by no, are recognized: bg, fg, soft, hard, intr, posix, cto, ac, tcp, udp, lock. For details, see nfs(5). Especially useful options include
rsize=8192,wsize=8192: This will make your nfs connection much faster than with the default buffer size of 1024. (NFSv2 does not work with larger values of rsize and wsize.)

hard: The program accessing a file on an NFS mounted file system will hang when the server crashes. The process cannot be interrupted or killed unless you also specify intr. When the NFS server is back online the program will continue undisturbed from where it was. This is probably what you want.

soft: This option allows the kernel to time out if the nfs server is not responding for some time. The time can be specified with timeo=time. This option might be useful if your nfs server sometimes doesn't respond or will be rebooted while some process tries to get a file from the server. Usually it just causes lots of trouble.

nolock: Do not use locking. Do not start lockd.

Here is an example from an /etc/fstab file for an NFS mount:
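The example line itself did not survive; the one in the mount man page of that era was along these lines (quoted from memory, so treat the details as approximate):

```shell
# /etc/fstab example from man mount (approximate):
#   server:/usr/local/pub  /pub  nfs  rsize=8192,wsize=8192,timeo=14,intr
```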
rsize=n: The number of bytes NFS uses when reading files from an NFS server. The default value is dependent on the kernel, currently 1024 bytes. (However, throughput is improved greatly by asking for rsize=8192.)

wsize=n: The number of bytes NFS uses when writing files to an NFS server. The default value is dependent on the kernel, currently 1024 bytes. (However, throughput is improved greatly by asking for wsize=8192.)

hard: If an NFS file operation has a major timeout then report "server not responding" on the console and continue retrying indefinitely. This is the default.

intr: If an NFS file operation has a major timeout and it is hard mounted, then allow signals to interrupt the file operation and cause it to return EINTR to the calling program.

Example of nfs v3 slowing down transfer because of no cache use

I have a NetApp NFS server, and did an interesting test with Linux RedHat 6.2 + 2.2.16 kernel + linux-2.2.16-nfsv3-0.22.0.dif patch.
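The test itself was lost from these notes; given the description that follows, it was presumably something like reading the same file under v3 and then under v2 and comparing timings and nfsstat counters (a sketch -- the mount path and file are assumptions):

```shell
# Mount the NetApp export with NFSv3 and time a read:
time dd if=/mnt/netapp/bigfile of=/dev/null bs=8k
# Remount with vers=2 and repeat; compare the two timings,
# and compare "nfsstat -c" counters before and after each read.
time dd if=/mnt/netapp/bigfile of=/dev/null bs=8k
```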
It shows that the first case (NFS3) is 100 times slower than the second case (NFS2). nfsstat shows 10000 accesses in the first case and just 4 accesses in the second case. This leads me to the conclusion that (at least in my config) NFS 3 does not use caching.

From http://nfs.sourceforge.net/: Q. What are the primary differences between NFS Versions 2
and 3?

- Version 2 limits the client to 2 GB file size (32 bit offset) support; Version 3 supports larger files (up to 64 bit offsets), depending on the server.
- The Version 2 protocol limits the maximum data transfer size to 8K (8192 bytes); the Version 3 protocol will allow support of up to 64K. This maximum value is set to 8K on currently released Linux kernels, and it has been set to 32K in the latest experimental patches.
- The Version 2 protocol requires that each write request be posted to the server's disk before the server replies to the client; Version 3 requires only that the server returns the status of the data involved in the write request, and also provides a commit request which will not return until all data specified in the request has been posted to the server's disk.

NFS resolved

On cyberspace, I found the kernel-source-2.4.16.SuSE-24.i386.rpm, put it in /usr/src/packages/RPMS/i386 and issued rpm -Uvh ke*. This installed the expected files in /usr/src/linux. The most recent files -- including the .config file -- appear to be from 18 December 2001. I open make xconfig and check out the existing installation. Under File Systems | Network File Systems I find that NFS file system support is yes, Provide NFSv3 client support is yes, Root file system on NFS is no, NFS server support is activated as a module, and Activate NFSv3 server support is checked yes. Incidentally, the SMB file system is also a module.

So in brief this means you have NFSv3 on cyberspace, and you don't need to rebuild the kernel. In fact, as far as I can see you don't need nfs-server 2.2beta47 and I'm tempted to uninstall it. I quit xconfig without saving. I entered kpackage and uninstalled nfs-server; there were no dependencies. I then discovered that Yast2 has an nfs server configuration tool, but I don't appear to have it. I found out it is a yast2 module called lan_nfs_server and that it is part of the yast2-config-network rpm, such as the yast2-config-network-2.3.21-0.noarch RPM.
Now, I have 2.4.19-6, but I take it the lan_nfs_server configuration tool is only in the Professional package. Let me try to activate the kernel-space nfs server module on my own. I then unmounted /vm on cyberspace and /vc on gubbio; nothing is mounted through nfs. I verified with rpcinfo -p that everything was running (the nlockmgr stopped when I had unmounted everything, but nfs, portmapper, bwnfsd, and mountd were still running). I then issued rcnfs stop and got "Remove Net File System (NFS) done". This made no difference to what shows up on rpcinfo -p -- it just removed the mounts, I guess (there weren't any). Verify your kernel supports nfs:
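The verification command was lost; the standard check (my assumption) is to look for nfs in /proc/filesystems:

```shell
grep nfs /proc/filesystems
# should print a line like "nodev  nfs" if the kernel has NFS support
# compiled in or the module loaded
```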
which is fine. I found instructions for shutting down the nfs server here -- http://www.tldp.org/HOWTO/PLIP-Install-HOWTO-11.html: "Note that /etc/rc.d/init.d/ is /sbin/init.d/ on SuSE Linux systems." However, I did a locate portmap and found /etc/init.d/portmap. I issued /etc/init.d/portmap stop and got "Shutting down RPC portmap daemon done" -- yeah! When I now do rpcinfo -p I get "rpcinfo: can't contact portmapper: RPC: Remote system error - Connection refused." I also issue
but this just gives me the same as rcnfs stop. I take it I could also do rcportmap stop. These two appear to be the ones controlling the whole system. Now, can I start this -- and get the kernel module activated?
After removing the userspace nfs server, I then issued rcportmap start and got "Starting RPC portmap daemon done". So I have neither mountd nor nfs -- this is just the client. I try lsmod:
Lots of stuff I don't want, and no nfs server. So what do I do? I verify that I now have no mountd -- on gubbio I have /usr/sbin/rpc.mountd. Maybe if I kill the inetd daemon and restart it, with
This will load the modules? Interestingly, ps -ax shows that these are running:
So what's going on? I kill the /usr/sbin daemons; their files have been deleted. I take the machine to init 2 and back to init 3 -- only portmap starts. Failure for "do it yourself" strategy -- which just makes me curse SuSE's complex control scripts. Still, I submit: I look for lan_nfs_server and find the following:
in http://rpmfind.net/linux/SuSE-Linux/noarch/i18n/yast2-7.0/status-y2t_lan.en_GB, which confirms my suspicion that the nfs server configuration is left out of SuSE 7.3 personal. This file from SuSE 7.2 has it -- http://www-isia.cma.fr/fr/presentation/materiel/pc_linux/base_logiciels/base_logiciels_moinette/yast2-config-network-2.3.21-0.noarch.html. I do a search for yast2-config-network suse 7.3 and find that in http://www.suse.de/en/products/suse_linux/i386/packages_professional/yast2-config-network.html, it is not present! Under http://garbo.uwasa.fi/pub/linux/SuSE/7.3/suse/yast1/ I find the script in its own rpm, at http://garbo.uwasa.fi/pub/linux/SuSE/7.3/suse/yast1/yast2-config-nfs-server.rpm -- I just get it and install it on cyberspace. I first get "failed dependencies: yast2-trans-nfs-server" and get that from the same location. They both install without a fuss. I now start yast2 and, as I suspected, have a new icon in network configuration for nfs server. It tells me I need nfs-utils, which I get from http://garbo.uwasa.fi/pub/linux/SuSE/7.3/suse/n1/ -- it installs fine and says "updating rc.config".
mountd are -- and that I now don't need yast2. I try just
starting nfs. It adds "status" but not mount and nfs. I let
yast2 start the service -- and it works! I've now got nfs v3 on
cyberspace. Mission accomplished. I edit gubbio:/etc/fstab, adding vers=3 to the options column. Actually, I removed it again -- this wasn't the problem. The default will be vers=3 and hard, but I added rsize=8192,wsize=8192 on both machines. I should be done now and can go home. I'll just test it. So I added the 8k block sizes to /etc/fstab on both systems, and left out hard and vers=3. When I run nfsstat, I confirm that v3 is in fact being used. The speed is no faster than it was -- even a bit slower, hovering around 9.6MB/s. However, it may be more secure. You can experiment with async and sync at some point, but on the whole this is great. I confirmed that the file size limit is gone -- I can now move the huge video files between the systems.

The moral of this installation: I solved the problem by looking into every nook and cranny:
So in the end SuSE came through, though it was frankly more trouble than just doing it on your own. You have to find out so much about SuSE that it would have been much better to just spend that time learning about Linux. Next time you get Debian, which is really what I wanted from the start. For the moment, however, you're happy: I made it work, and I can now move my huge files around with ease. It is 12:45am and I can still get some sleep.

Note that /etc/fstab has to have user or users to allow steen to mount /vc, as below:
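The fstab line itself was lost; a reconstruction (details assumed) showing the users option that lets a non-root user mount the share:

```shell
# /etc/fstab on gubbio -- reconstruction, not the original line:
#   cyberspace:/mnt/vc  /mnt/vc  nfs  users,rw,rsize=8192,wsize=8192  0 0
mount /mnt/vc   # now works as user steen, not just as root
```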
Full list of Remote Procedure Call utilities on Cyberspace
See also rc calls (you could put this in commands.html)
Permissions

I occasionally get "no permission" on NFS mounts after having changed the parameters. It may be that the source of the problem is that old values linger in the system. For instance, long after I had defined the 160GB drive as /vm, /var/lib/nfs/rmtab showed it exported to cyberspace as giant! On cyberspace I now get 128.97.184.97:/mnt/vc:0x00000001 and on gubbio similarly I get cyberspace.ucla.edu:/mnt/vm:0x00000001. So it is quite possible that the old value in this file prevented the reconnection.
Maintained by Francis F. Steen, Communication Studies, University of California Los Angeles