
NFS/Share_Directories


About

You want to share directories and files between computers on your network without using SAMBA or transferring them via FTP. With NFS you can mount remote directories so that they appear as local files and folders, and no special programs are needed to move files back and forth.

Kernel Support

First you need to make sure your server and client machines have support for NFS volumes. The following instructions apply to both 2.4 and 2.6 kernels.

Change to your current kernel source directory and load up menuconfig: cd /usr/src/linux && make menuconfig

Code: make menuconfig
 File Systems --->
  Network File Systems --->
   <M> NFS file system support
   [*]   Provide NFSv3 client support
   <M> NFS server support
   [*]   Provide NFSv3 server support

Only enable the NFS file system and NFS server modules and options listed above. Don't worry about NFS v4 (if it's available - it's experimental in 2.6 as of November 2006) as NFS v3 works perfectly fine. Leave any other modules or options alone.

If you have a running kernel and don't want to rebuild it (or even reboot the machine), simply choose to build NFS as modules. Compile and install the modules with: make modules && make modules_install. The modules can then be loaded on the fly with: modprobe nfs && modprobe nfsd

Add "nfs" and "nfsd" to /etc/modules.autoload.d/kernel-2.6 to have the modules loaded each time you boot.

When building NFS support into a kernel, don't forget to backup, compile and install your new kernel.
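A typical manual build and install might look like the following sketch; the image path and names are only illustrative, so adjust them to your architecture and naming scheme:

Code: building and installing the new kernel
cp /boot/kernel /boot/kernel.backup     # back up the existing image (name is an example)
make && make modules_install
cp arch/i386/boot/bzImage /boot/kernel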

Emerging NFS

Install the nfs-utils package on both the server and the client (while not necessary for basic NFS on the client, it provides some useful features such as locking) with: emerge nfs-utils

Modify EXPORTS

Once NFS is installed on the server you need to tell the daemon which directories to export to NFS. The /etc/exports file controls access to the NFS shares and is in the following format:

directory machineA(option,option) machineB(option,option) ...

Where directory is the path to export, and machineA, machineB, ... are the clients allowed to mount it, identified by IP address, DNS name or IP range; the options control each client's access. For example:

File: /etc/exports

By IP address

/opt/media 192.168.0.100(async,no_subtree_check,rw) 192.168.0.101(async,no_subtree_check,rw)

By DNS name

/opt/media spunkster(async,no_subtree_check,rw) nivvy(async,no_subtree_check,rw)

Or by IP range

/opt/media 192.168.0.0/255.255.255.0(async,no_subtree_check,rw)

(If you get an "Error exporting NFS directories" while using the "IP" example above, switch to the "IP range" style.)

After this is complete, start the NFS daemon (on both the client and the server) and add it to the default runlevel by issuing: /etc/init.d/nfs start && rc-update add nfs default

If you later make changes to your exports file, it is recommended that you re-export the shares and reload the NFS daemon with: exportfs -ra && /etc/init.d/nfs reload

Options

The options tailor the access that connecting machines have to the directory. They are specified per client machine, as a comma-separated list enclosed in parentheses after the machine they modify. A combined example follows the list.

ro 
(default) The client machine will have READ-ONLY access to the directory.
rw 
The client machine will have READ/WRITE access to the directory.
no_root_squash 
By default, any file request made by user root on the client machine is treated as if it is made by user nobody on the server. (Exactly which UID the request is mapped to depends on the UID of user "nobody" on the server, not the client.) If no_root_squash is selected, then root on the client machine will have the same level of access to the files on the system as root on the server. This can have serious security implications, although it may be necessary if you want to perform any administrative work on the client machine that involves the exported directories. You should not specify this option without a good reason.
no_subtree_check 
If only part of a volume is exported, a routine called subtree checking verifies that a file that is requested from the client is in the appropriate part of the volume. If the entire volume is exported, disabling this check will speed up transfers.
sync 
By default, all but the most recent version (version 1.11) of the exportfs command will use async behavior, telling a client machine that a file write is complete - that is, has been written to stable storage - when NFS has finished handing the write over to the filesystem. This behavior may cause data corruption if the server reboots, and the sync option prevents this. See Section 5.9 of the NFS FAQ for a complete discussion of sync and async behavior.
async 
The opposite of sync; if neither is set, the system will default to sync. Using async will also speed up transfers, at the risk described above.

(descriptions taken from the Linux NFS FAQ)

insecure 
Tells the NFS server to accept requests coming from unprivileged ports (ports above 1024). This may be needed to allow mounting the NFS share from Mac OS X or through the nfs:/ kioslave in KDE.
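Putting a few of these options together, a hypothetical /etc/exports might look like this (the paths and addresses are only examples):

File: /etc/exports
/usr/portage 192.168.0.0/255.255.255.0(ro,sync,no_subtree_check)
/opt/media   192.168.0.100(rw,sync,no_subtree_check,insecure)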

Mounting exported directories

After the server is set up and the NFS daemon is running, you can move on to mount the exported directories on your client.

First, make sure the portmap service is running. Start the portmap daemon by issuing: /etc/init.d/portmap start

Adding the portmap daemon to the boot process is not usually needed, because Gentoo's init scripts figure that out by themselves. However, to make sure that the init scripts have indeed added portmap to the default runlevel, type: rc-update show

If 'default' isn't present on the portmap line, then it won't start at the next bootup. Correct this by issuing: rc-update add portmap default

To do a test mount, create a mount directory and mount the remote drive by issuing: mount x.x.x.x:/directory /mount_directory

Where x.x.x.x is the server's IP address (or DNS name), /directory is the directory exported by the server, and /mount_directory is an existing local directory to use as the mount point.

This may take a couple of minutes to mount, so be patient.
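For example, using the /opt/media export from earlier (the server address is only an example):

Code: test mount
mkdir -p /mnt/media
mount 192.168.0.10:/opt/media /mnt/media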

hosts.allow

If you try to mount your NFS partition and get something similar to this:

# mount /mnt/nivvy
NFS Portmap: RPC: Program not registered

then it's being blocked. To unblock it, edit the following:

On the NFS server, add the IP addresses of all clients that should be allowed to access your NFS shares:

File: /etc/hosts.allow
# Portmapper is used for all RPC services; protect your NFS!
# (IP addresses rather than hostnames *MUST* be used here)
portmap: 192.168.0.20
lockd: 192.168.0.20
rquotad: 192.168.0.20
mountd: 192.168.0.20
statd: 192.168.0.20

or by IP Range:

# Portmapper is used for all RPC services; protect your NFS!
# (IP addresses rather than hostnames *MUST* be used here)
portmap: 192.168.0.0/255.255.255.0
lockd: 192.168.0.0/255.255.255.0
rquotad: 192.168.0.0/255.255.255.0
mountd: 192.168.0.0/255.255.255.0
statd: 192.168.0.0/255.255.255.0

Followed by the command: /etc/init.d/portmap restart

Automatic mounting via FSTAB

To make the mounting occur on startup, add the following line to your FSTAB:

x.x.x.x:/directory  /mount_directory    nfs          rw            0    0

Where the variables are defined as above.

nfsvers=3 is a useful mount option to include. It's required for large-file (>2GB) support:

x.x.x.x:/directory  /mount_directory    nfs          rw,nfsvers=3            0    0
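Continuing the /opt/media example, a concrete line might look like this (server address and mount point are placeholders):

File: /etc/fstab
192.168.0.10:/opt/media  /mnt/media          nfs          rw,nfsvers=3            0    0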

If the NFS resource is being used as a Portage directory, please also read Shared Portage via NFS.

To automatically mount the NFS resource at boot time, add the /etc/init.d/nfsmount script to the startup routine:

rc-update add nfsmount default

Security Implications

If you specify 'no_root_squash' in the server's /etc/exports file, anyone who gains root permissions on the client automatically has root permissions on the server within that exported directory - good for sharing Portage directories, bad if nasty people want to compile and/or run evil software on your boxes.

IP addresses are not always static, so when using numeric addresses (as opposed to DNS names), anyone who gains that IP has access to what you've exported. Keep this in mind with confidential information.

Note: The paragraph below no longer seems to be true. The latest versions of NFS support the sec=krb5 export option, which authenticates via Kerberos 5 instead of UIDs and GIDs.

NFS also uses numeric user and group IDs, so, even if you somehow keep the passwd files identical on your systems, someone else with the right IP address can create a user on their own system that can access anybody's files. Unless you are certain it is impossible to forge an IP address on your network, you cannot depend on the normal user/group access control. For this reason, NFS is not recommended for sharing user-private data (home directories, for example).

Setting Up Firewall (Server Side)

Setting up a firewall to cover NFS ports is quite tricky, because some of the ports are assigned randomly each time the NFS daemons are restarted. To see what ports you need to open, type in:

rpcinfo -p

Try restarting the NFS daemon:

/etc/init.d/nfs restart

Then type rpcinfo -p again. You'll see that some of the ports have changed. Note that some of them are static: port 111 (tcp and udp) is used by portmap, and port 2049 (tcp and udp) by nfs. The rest, which are equally important, are random. To pin them down, edit /etc/conf.d/nfs so that it looks something like this:

# Number of servers to be started up by default
RPCNFSDCOUNT=8
# Options to pass to rpc.mountd
# ex. RPCMOUNTDOPTS="-p 32767"
RPCMOUNTDOPTS="-p 32767"
# Options to pass to rpc.statd
# ex. RPCSTATDOPTS="-p 32765 -o 32766"
RPCSTATDOPTS="-p 32765 -o 32766"
# OPTIONS to pass to rpc.rquotad
# ex. RPCRQUOTADOPTS="-p 32764"
RPCRQUOTADOPTS="-p 32764" 

Note: The above has not worked for everyone. The following variant has been reported to work instead (if you run >=nfs-utils-1.1.0, use the commented-out OPTS_RPC_* variable names):

# Number of servers to be started up by default
RPCNFSDCOUNT=8
# Options to pass to rpc.mountd
# ex. RPCMOUNTDOPTS="-p 32767"
RPCMOUNTDOPTS="-p 4002"
# Warning for  >=nfs-utils-1.1.0
#OPTS_RPC_MOUNTD="-p 4002" 

# Options to pass to rpc.statd
# ex. RPCSTATDOPTS="-p 32765 -o 32766"
RPCSTATDOPTS="-p 4000"
# Warning for  >=nfs-utils-1.1.0
#OPTS_RPC_STATD="-p 4000"

This pins the statd, mountd and rquotad ports to fixed values (32764-32767 in the first example). The only task left is to fix the lock manager (nlockmgr) ports.

Fixing the nlockmgr ports depends on the version of your kernel and whether or not you build NFS into the kernel or as a module.

Determine whether you have NFS built into the kernel (Y), built as a module (M), or not at all (N):

zgrep CONFIG_NFSD /proc/config.gz

If Y:

mount /boot -o remount,rw
If GRUB:

edit the file /boot/grub/grub.conf in your favorite editor.

If LILO:

edit the file /etc/lilo.conf in your favorite editor, and then run

lilo

Within the editor, append one of these lines to your kernel options, depending on your kernel version:

lockd.nlm_udpport=4001 lockd.nlm_tcpport=4001 # for 2.6.x kernels
lockd.udpport=4001 lockd.tcpport=4001 # for 2.4.x kernels
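For example, a GRUB entry might end up looking like this (the kernel image name and root device are placeholders; keep your existing parameters and simply append the lockd options):

File: /boot/grub/grub.conf
title Gentoo Linux (NFS server)
root (hd0,0)
kernel /boot/vmlinuz root=/dev/hda3 lockd.nlm_udpport=4001 lockd.nlm_tcpport=4001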

And reboot your machine.

If M: Open /etc/modprobe.d/nfs in your favorite editor. Append this line:

options lockd nlm_udpport=4001 nlm_tcpport=4001

Run

update-modules

This pins the nlockmgr ports to 4001 tcp/udp.

Warning for genkernel Users: If you have compiled NFS as a module, the above won't work, because genkernel does not put the module options into the initrd, and it is the initrd that loads the nfs module. You have two options:

Adding the Firewall Rules

It's probably best to reboot your computer to ensure that all of the appropriate daemons and modules are reloaded, then double-check that the ports in use are what you expect by running rpcinfo -p. If that's all set, add the firewall rules.


1. Save your current firewall rules: iptables-save > /etc/iptables.bak
2. Open /etc/iptables.bak in your favorite text editor.
3. Add the following rule(s) in the appropriate order (according to your existing rules):

Firewall Rule: nfs
-A INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 2049 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 2049 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 4001 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 4001 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 32764:32767 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 32764:32767 -j ACCEPT

4. Restore the rules so they become part of your current configuration: iptables-restore < /etc/iptables.bak

To cover those ports with fewer rules (if you want your iptables list to look smaller), you can use -m multiport instead, as follows:

-A INPUT -p tcp -m state --state NEW -m multiport --dports 111,2049,4001,32764:32767 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m multiport --dports 111,2049,4001,32764:32767 -j ACCEPT

Setting Up Firewall (Client Side)

Setting up the firewall on the client side is much, much simpler. The only relevant port is 111 tcp/udp; this is the port for portmap, the only service the client needs to run.
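In the same style as the server rules above, that amounts to (a minimal sketch):

Firewall Rule: portmap (client)
-A INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 111 -j ACCEPT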


Troubleshooting

If you get the message mount.nfs: No such device, check that your kernel has NFS support and that the nfs module is loaded.
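Two quick checks, assuming your kernel exposes /proc/config.gz as used earlier:

Code: checking for NFS support
zgrep CONFIG_NFS_FS /proc/config.gz
lsmod | grep nfs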


Retrieved from "http://www.gentoo-wiki.info/NFS/Share_Directories"

Last modified: Mon, 22 Sep 2008 13:30:00 +1000
