NIS and NFS - An Overview

by Rick Moen

Revised: Thursday, 2002-05-30

Master version will be at http://linuxmafia.com/faq/Network_Other/nisnfs.html, and I'll try to improve it there.


Background to Both Services - The Portmapper

NFS (Network File System) and NIS (Network Information Service), like many other inventions we use, originated at Sun Microsystems. Both work atop a third key invention, the RPC (Remote Procedure Call) Portmapper. (So do ruptime, rusers, and the obsolete pcnfsd.)

The portmapper (aka portmap or rpc.portmap) is the key to understanding both services. It's a generic server facility (running on ports 111/tcp and 111/udp) that hands out UDP or TCP ports to the other services built upon it. So, for example, when your Linux machine's NFS client attempts to mount a remote NFS share, it first sends a remote procedure request (typically over UDP) to the remote host's portmapper, which responds with the port to contact for that service. The NFS client then sends its request as a regular socket connection to the relevant NFS daemon on that port.

Notice this means that the various NFS daemons don't have fixed port (service) number assignments: They're assigned and tracked dynamically by the server host's portmapper. (Exception: nfsd always runs on 2049/udp or 2049/tcp.)

It also means that RPC-based services tend to have very problematic security characteristics: you cannot effectively restrict access to particular ports when you can't predict what they'll be, and the daemon software in common use has no encryption or strong authentication. Also, both NFS and NIS (as examples of RPC-based services) depend on client and server cooperating on the mapping of UIDs between the two machines, which a hostile or misconfigured client can abuse. However, there are some facilities for attempting access control, which we'll cover.

(Late addition: Port assignments can be controlled by specifying them on the daemon's start-up command line, e.g., "/usr/sbin/rpc.mountd --port 2219".)

Running "rpcinfo -p [host]" against a given IP will query its portmapper about what RPC services it's running.

Example output:

   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100011    1   udp    749  rquotad
    100011    2   udp    749  rquotad
    100005    1   udp    759  mountd
    100005    1   tcp    761  mountd
    100005    2   udp    764  mountd
    100005    2   tcp    766  mountd
    100005    3   udp    769  mountd
    100005    3   tcp    771  mountd
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    300019    1   tcp    830  amd
    300019    1   udp    831  amd
    100024    1   udp    944  status
    100024    1   tcp    946  status
    100021    1   udp   1042  nlockmgr
    100021    3   udp   1042  nlockmgr
    100021    4   udp   1042  nlockmgr
    100021    1   tcp   1629  nlockmgr
    100021    3   tcp   1629  nlockmgr
    100021    4   tcp   1629  nlockmgr
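Since most of these ports vary from boot to boot, a firewall or monitoring script may need to discover them at run time. A minimal sketch, assuming the standard five-column "rpcinfo -p" output shown above (the rpc_ports function name is my own):

```shell
# Extract the port numbers a named RPC service is using, from
# "rpcinfo -p" output read on stdin. Columns are:
#   program vers proto port name
rpc_ports() {
    awk -v svc="$1" '$5 == svc { print $4 }' | sort -un
}

# Typical use (mountd's current ports, e.g. for a firewall rule):
#   rpcinfo -p somehost | rpc_ports mountd
```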

NFS:

Linux has both kernel-based nfsds (knfsd) and userspace nfsds. The kernel-based ones are greatly preferable in performance and general quality.

Sample "rpcinfo -p localhost" output when running the kernel-based nfsd:

# rpcinfo -p localhost
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100021    1   udp  33133  nlockmgr
    100021    3   udp  33133  nlockmgr
    100021    4   udp  33133  nlockmgr
    100005    1   udp    922  mountd
    100005    1   tcp    925  mountd
    100005    2   udp    922  mountd
    100005    2   tcp    925  mountd
    100005    3   udp    922  mountd
    100005    3   tcp    925  mountd

Note the support for NFSv3 mountd operations in the "vers" column; that's how you can spot use of the kernel nfsd.

For contrast, here's "rpcinfo -p localhost" output when running the (inferior) user-space nfsd:

# rpcinfo -p localhost
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100003    2   udp   2049  nfs
    100003    2   tcp   2049  nfs
    100005    1   udp    758  mountd
    100005    2   udp    758  mountd
    100005    1   tcp    761  mountd
    100005    2   tcp    761  mountd

Solving some NFS problems can be as simple as detecting and fixing a server machine's use of the wrong nfsd.


Server-side programs:

rpc.statd: Auto-invoked by nfsd; supports lockd.
rpc.lockd: Manages file locks.
rpc.mountd: Implements the NFS mount protocol.
rpc.rquotad: Enforces user quotas on exported filesystems.
rpc.nfsd: Hands out file handles and such. (Note that you still need this binary even if you are running knfsd.)


Server-side configuration:

/etc/exports syntax:
directory machine1(option11,option12) machine2(option21,option22)

E.g.:
/usr/local 192.168.0.1(ro) 192.168.0.2(ro)
/home 192.168.0.1(rw) 192.168.0.2(rw)

Use the root_squash option for exports, if possible (the server shouldn't trust clients' root accounts): it maps a client's UID 0 (root account) to the server's "nobody" account. Note that a client's root account can still su to any non-root user on the exported NFS share. This also means any security-sensitive files on the server's share should be owned by the root user, not bin, etc. Note also that some backup utilities, such as tob, aren't compatible with root_squash.
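Putting the options together, a sketch of an /etc/exports using root_squash (paths and client addresses are the illustrative ones from above; adjust to taste):

```
# /etc/exports -- illustrative entries; adjust paths and client addresses
/usr/local  192.168.0.1(ro,root_squash)  192.168.0.2(ro,root_squash)
/home       192.168.0.1(rw,root_squash)  192.168.0.2(rw,root_squash)
```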

After editing /etc/exports, run "exportfs -ra" to make the daemons re-read it, or use "kill -HUP" on them if exportfs isn't available.

/usr/sbin/showmount:  Displays state of remote nfsd.
   -e :  Show export list
   -a :  List client hostnames and mounted filesystems
   -d :  List directories mounted


Kernels 2.2.18+ and 2.4.x support the TCP-NFS client. 2.2.x needs H.J. Lu's patches for knfsd over TCP; 2.4.x does not. http://nfs.sourceforge.net/ However, note that Lu's Web page says NFSv3 doesn't work over TCP. I believe this means you can do TCP-NFS only over NFSv2. Documentation is inconsistent, but seems to suggest the TCP-NFS code is more mature on the client end.


NFS v. 4:


Mounting options:
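As a hedged starting point, the common client-side options can be sketched as an /etc/fstab entry ("server" and the mount point are placeholders; option support varies by kernel and mount version):

```
# /etc/fstab entry -- "server" and /mnt/home are placeholders
# hard: retry a dead server indefinitely; intr: allow interrupting a hung mount
# rsize/wsize: transfer block sizes, worth tuning (see debugging notes)
server:/home  /mnt/home  nfs  rw,hard,intr,rsize=8192,wsize=8192  0  0
```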


Restricting access to rpc.portmap to specific originating IPs, via /etc/hosts.deny :
portmap: ALL


/etc/hosts.allow:
portmap: 192.168.0.0/255.255.255.0

Never use DNS-derived hostnames in hosts.allow or hosts.deny. It's safest to use IP addresses only. Also, hostnames can inadvertently cause portmap/name-lookup loops.


Misc. debugging points:

Does "mount" claim the filesystem is mounted, and with what options?

Is there a mount inside/atop your NFS mount?

What's the "mount" error message? "Permission denied" is an /etc/exports problem; double-check /proc/fs/nfs/exports on the server. A message of "RPC: [anything]" means nfsd isn't running, or doesn't support the protocol type you requested. "RPC: No Remote Programs Registered" means hosts.deny/hosts.allow problems.

What does /proc/mounts show?

If you get a permissions error, do you have write permission on the client's mount point?

Is there a UID mismatch? Run "id [username]" on both machines.

Is there whitespace between hostnames and options, in /etc/exports?

Did you run "exportfs -ra"?

Is /etc/exports root-readable? Are the daemon binaries set executable?

Was kernel compiled with NFS server support? (Ours are.)

Did you check /var/log/messages and similar on the server?

Is the client's version of mount at least 2.10m? If not, it can't use NFSv3. In general, if protocol-support differences are a possibility, try specifying NFSv2 over UDP.

What does "rpcinfo -p [host]" report?

Any ipchains/netfilter rules in the way?

Do the filtering rules block fragmented packets?

Does HUPping nfsd help?

Try reducing rsize & wsize to 1024 bytes? If NFS suddenly works, the switch or one of the ethernet interfaces may have a problem with large block sizes.

Turn off autonegotiated speed and duplex in the ethernet switch? (It is often useful to force a sane port mode, in the switch configuration.)

Are you exporting from a journaling filesystem? ext3 is compatible, along with recent ReiserFS.


Alternatives to NFS:

AFS: Heavy-weight code, complex. Used to require proprietary server piece, but that is now available as open source.
http://www.transarc.ibm.com/Product/EFS/AFS/
http://www.openafs.org/

Coda: Never quite finished. http://www.coda.cs.cmu.edu/

Intermezzo: Immature; being written by Coda developer Peter Braam.
http://www.inter-mezzo.org/

Lustre: Scalable network filesystem for clusters.
http://www.lustre.org/


NIS:

Also invented at Sun, as "Yellow Pages". British Telecom PLC complained on trademark grounds, leading Sun to rename the service "NIS" = Network Information Service. But you still see many references to "yp". Later, "NIS+" came out, with improved security and handling of very large installations. (You will also, rarely, see mention of "NYS", which merges some functions from both protocols. The name stands for NIS+, YP, and Switch. It requires recompiling glibc to include third-party NYS code.)

Note that alleged "NIS+" support on Linux isn't really that; at best, it's a partial implementation of the NIS+ extensions. Real NIS+ would require SecureRPC. (It encrypts all authentication and data transfer.) For purposes of this class, we'll ignore the partial NIS+ implementation.


What does it do?

Provides unified login and authentication for a group of machines. Like LDAP, it handles passwords, groups, protocols, networks, services.


Setting up the Client Side:

Client-side support is mostly inside glibc. It uses a daemon called ypbind to find and bind to an NIS server.

Sample /etc/yp.conf for ypbind:
   ypserver 10.10.0.1
   ypserver 10.0.100.8
   ypserver 10.3.1.1

Use only IP addresses, unless the system has some non-NIS way of resolving hostnames.

Add "nis" to the "order" line in /etc/host.conf:
   order hosts,bind,nis
   multi on

Add "nis" to /etc/nsswitch.conf:
   passwd:     compat
   group:      compat
   # For libc5, you must use shadow: files nis
   shadow:     compat

   passwd_compat: nis
   group_compat: nis
   shadow_compat: nis

   hosts:      nis files dns

   services:   nis [NOTFOUND=return] files
   networks:   nis [NOTFOUND=return] files
   protocols:  nis [NOTFOUND=return] files
   rpc:        nis [NOTFOUND=return] files
   ethers:     nis [NOTFOUND=return] files
   netmasks:   nis [NOTFOUND=return] files
   netgroup:   nis
   bootparams: nis [NOTFOUND=return] files
   publickey:  nis [NOTFOUND=return] files
   automount:  files
   aliases:    nis [NOTFOUND=return] files

Test ypbind by setting the domain the client is in, using "/bin/domainname nis.domain". (Ideally, nis.domain should be a non-obvious name.)

Shadow passwords cannot be used on an NIS network (although it's not a bad idea to retain them for the root user). glibc has some support for shadow passwords over NIS, but it's buggy.

Replace /etc/pam.d/* libpwdb entries with ones for pam_unix. E.g., for /etc/pam.d/login:

   auth       required     /lib/security/pam_securetty.so
   auth       required     /lib/security/pam_unix.so
   auth       required     /lib/security/pam_nologin.so
   account    required     /lib/security/pam_unix.so
   password   required     /lib/security/pam_unix.so
   session    required     /lib/security/pam_unix.so


You should now be able to run all the client-side programs:

Lines in /etc/passwd starting with "+" are NIS inclusion markers: from the first such line onward, the client consults NIS rather than attempting purely local resolution.
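For illustration, the tail of an /etc/passwd set up for compat mode might look like this (the local accounts are invented):

```
# Local entries resolve normally...
root:x:0:0:root:/root:/bin/bash
localuser:x:1000:1000::/home/localuser:/bin/bash
# ...then the "+" marker splices in the NIS password map:
+::::::
```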


Setting up the Server Side:

(Configure the server machine as an NIS client, first.)

The authentication database is stored in Berkeley DB files (on Red Hat: gdbm files) called "NIS maps" (compare LDAP). These can be mirrored to "slave" NIS servers using yppush.

Set /etc/sysconfig/network's "NISDOMAIN=" variable.

Initialise the server database:
/usr/lib/yp/ypinit -m                 (on the master)
or
/usr/lib/yp/ypinit -s masterhost      (on a slave)

On the master, all slave servers must be listed in /var/yp/ypservers. Then run make against /var/yp/Makefile to build the server maps; this extracts info from /etc/passwd, /etc/group, etc. (Re-run make after any change, to rebuild the maps.)

Doing "ypcat passwd" should now dump the password database. Doing "ypmatch [userid] passwd" will give [userid]'s password entry.


Server-side programs:


Debugging tips:

/var/yp must exist.

Is the portmapper running?

Is ypbind running? Check using "rpcinfo -u localhost ypbind".

Did ypbind register with the portmapper? Check using "rpcinfo -p localhost".

Test ability to talk to the NIS server by dumping its database: "ypcat passwd.byname".

Are there restrictions in /etc/hosts.allow and hosts.deny?

Is ypserv running? Check using "rpcinfo -u localhost ypserv".


Miscellany:

SecureRPC http://www.cs.vu.nl/~gerco/SecureRPC/

Why no SSH tunneling:
Because portmapper-allocated services by definition don't use fixed ports, they cannot readily be redirected over SSH port-forwarding. NFS and NIS are thus (with FTP) the canonical examples of services incompatible with SSH tunneling.

Caching of NIS -- nscd
Drastically improves performance on clients. Be careful not to use nscd to cache DNS information.
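To illustrate the point, an /etc/nscd.conf excerpt that caches NIS-backed lookups but leaves hosts caching off (verify the syntax against the nscd.conf shipped with your glibc):

```
# Cache passwd and group lookups (largely NIS traffic on an NIS client),
# but do not cache hosts, to avoid serving stale DNS data.
enable-cache    passwd    yes
enable-cache    group     yes
enable-cache    hosts     no
```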


Exercises:

1. Set up an NFS mount between two boxes you have root access on. Compare the resulting permissions for root-owned files with and without "root_squash" in /etc/exports. Observe the output of "rpcinfo -p" and "showmount".

2. After making a safety copy of /etc/, migrate your system to being both an NIS server and client. Change your password using yppasswd. Observe the output of "rpcinfo -p" and "ypcat".


References:

http://www.linuxdoc.org/HOWTO/NFS-HOWTO/
http://www.linuxdoc.org/HOWTO/NIS-HOWTO/
http://www.nfsv4.org/
http://nfs.sourceforge.net/

Greg Banks's lecture "Making NFS Suck Faster": http://www.linux.org.au/conf/2007/talk/41.html