- It takes a LONG time to 'emerge --sync' all the servers on a LAN.
- Gentoo etiquette says you may not sync more than once a day.
The solution: a shared NFS portage tree. One box synchronises via a cron job; the other boxes mount the /usr/portage tree via NFS.
- Get NFS working on the server system and the client systems.
- Configure your NFS server system to export its portage directory (/usr/portage).
- Mount the NFS share on your client system (issue mount command, or modify fstab for a more permanent mount).
- Make portage aware of the "new" portage directory (modifying /etc/make.conf on the client and set some variables).
- Only one server has to worry about sync'ing (nightly with a cron job).
- NO local rsync server needed.
- A shared /usr/portage/distfiles means that you don't have to download the same files over and over again.
- NFS rocks ;)
NFS is a very weak protocol as far as security is concerned. Make sure to enable it only on a LAN where you have absolute trust in your users. You have been warned!
NFS support in Kernel
You MUST have NFS support loaded in your kernel, either built-in (that will require a reboot) or as modules.
|Linux Kernel Configuration: NFS Support built in|
File systems --->
    Network File Systems --->
        <*> NFS file system support
        [*]   Provide NFSv3 client support
        [ ]   Provide NFSv4 client support (EXPERIMENTAL)
        [ ]   Allow direct I/O on NFS files (EXPERIMENTAL)
        <*> NFS server support
        [*]   Provide NFSv3 server support
        [ ]   Provide NFSv4 server support (EXPERIMENTAL)
        [ ]   Provide NFS server over TCP support (EXPERIMENTAL)
        ...
|Linux Kernel Configuration: NFS Support as modules|
File systems --->
    Network File Systems --->
        <M> NFS file system support
        [*]   Provide NFSv3 client support
        [ ]   Provide NFSv4 client support (EXPERIMENTAL)
        [ ]   Allow direct I/O on NFS files (EXPERIMENTAL)
        <M> NFS server support
        [*]   Provide NFSv3 server support
        [ ]   Provide NFSv4 server support (EXPERIMENTAL)
        [ ]   Provide NFS server over TCP support (EXPERIMENTAL)
        ...
I added NFS server support since this is the box that will host the shared portage tree. I added client support as well, since a box that can serve NFS should also be able to mount it. IMHO, if you are on a multi-platform (Windows and Linux) network, this would also be a good time to add SMB support, but that's a different HOW-TO.
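Before rebuilding, you can check whether the running kernel already has NFS support. This relies on /proc/config.gz being available, which requires CONFIG_IKCONFIG_PROC and is not enabled in every kernel:

```shell
# If the kernel exposes its config, look for the NFS client/server
# options; otherwise fall back to checking the kernel .config by hand.
zcat /proc/config.gz 2>/dev/null | grep -E 'CONFIG_NFS(_FS|D)=' \
    || echo "no /proc/config.gz here; grep CONFIG_NFS in your kernel .config instead"
```

Lines ending in `=y` mean built-in, `=m` means module.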
Save your kernel config and re-compile the kernel.
Gentoo provides the Genkernel tool for this. If you use it, you can skip to the next section. If you do not wish to use it, the kernel needs to be rebuilt manually. For documentation on that, see Kernel.
Reboot if you built NFS into the kernel. If you built it as a module, simply load it without rebooting:
bash # modprobe nfs
If you're using a module, edit your /etc/modules.autoload.d/kernel-version and add the following line so the module is loaded at boot:
nfs
Emerge the NFS utilities on the server and clients.
bash_server # emerge nfs-utils
bash_client # emerge nfs-utils
Add the NFS services to your default runlevel (on both client and server systems). Portmap doesn't always seem to be started on the client (though it should be a dependency), so it's a good idea to add it explicitly to the default runlevel.
bash_server # rc-update add portmap default
bash_server # rc-update add nfs default
bash_client # rc-update add portmap default
bash_client # rc-update add nfsmount default
bash_client # /etc/init.d/nfsmount start
Start NFS on the server.
bash_server # /etc/init.d/nfs start
NFS Server Setup
Export the NFS portage server's portage directory. Edit /etc/exports on the NFS server and add the following line (change ip_range/subnet to the appropriate values, or "*" for a trusted LAN; the options are explained below):
/usr/portage ip_range/subnet(sync,no_root_squash,rw)
Since you changed /etc/exports, re-export the shares and reload the NFS daemon:
bash_server # exportfs -ra
bash_server # /etc/init.d/nfs reload
NFS Client Setup
Mount the NFS portage server's share on the client system. First make a new directory to mount to:
bash_client # mkdir /mnt/nfs_portage
Mount the NFS share:
bash_client # mount -t nfs 192.168.0.11:/usr/portage /mnt/nfs_portage/
Or edit your /etc/fstab on the clients and add in the following code. If you are sharing distfiles, and wish to write to the distfiles share from the client, you'll need to mount rw instead of ro. If you only want to mount portage ro (read-only), which is safer, you will want to add a local DISTDIR directory (below):
192.168.0.11:/usr/portage /mnt/nfs_portage nfs rw,nfsvers=3,hard 0 0
If you decide to use a DNS name (i.e., server.on.my.network.com) instead of the IP address, then add the name to your /etc/hosts. Otherwise, you may get occasional "permission denied" errors during emerge operations.
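For example, with the hypothetical name used above, the client's /etc/hosts entry would look like this:

```
# /etc/hosts on the client: map the portage server's name to its address
192.168.0.11    server.on.my.network.com
```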
Note that the portage cache is not shared over NFS. If the cache rebuild on the first emerge you run after a sync bothers you, try adding this to your clients' crontabs. It updates the cache on the client machines 10 minutes after the server starts syncing; adjust the offset for your connection speed.
10 0 * * * emerge --metadata
Modify your /etc/make.conf on the client systems to use the new portage tree. Add the following entries to /etc/make.conf:
PORTDIR=/mnt/nfs_portage
PKGDIR=/mnt/nfs_portage/packages
DISTDIR=/mnt/nfs_portage/distfiles
#DISTDIR=/usr/portage/distfiles # retain usage of a local distfiles directory if portage is mounted read-only
RPMDIR=/mnt/nfs_portage/rpm
PORTAGE_TMPDIR=/var/tmp
#FEATURES="-distlocks" # possibly required, depending on type and quality of network share; see below
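To verify that the client actually picked up the new paths, you can ask portage itself (a quick sanity check, assuming the share is mounted):

```shell
# Print the portage path variables now in effect on the client.
# You should see the /mnt/nfs_portage paths from make.conf.
emerge --info 2>/dev/null | grep -E '^(PORTDIR|PKGDIR|DISTDIR)=' \
    || echo "emerge not available on this machine"
```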
To test, sync the server:
bash_server # emerge --sync
Check the client system with the emerge command:
bash_client # rm -rf /etc/make.profile && ln -sf /mnt/nfs_portage/profiles/default-linux/x86/2006.1/server /etc/make.profile
bash_client # emerge -pu world
You should see network traffic, and if so, the client system should pick up updates from the newer synced portage tree on the server.
Explanation of Code
"0 0 * * *" means to sync once a day at midnight
"emerge --sync --nospinner > /dev/null 2>&1 || true" does a sync without the spinner, piping output to the trash
"&& emerge world -vup" does an emerge world -vup and emails the root user what is to be updated (assuming mail is set up correctly)
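The crontab line itself did not survive in this page; pieced together from the explanation above, the server's entry would look something like this (a reconstruction, not the original):

```
# min hour dom mon dow  command
0 0 * * *  emerge --sync --nospinner > /dev/null 2>&1 && emerge -vup world
```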
"/usr/portage" is the directory we are exporting
"ip_range/subnet" limits who on the network can mount this export
"(sync,no_root_squash,rw)" common options that I have always used
'no_root_squash' means that root has permission to perform actions on the mounted drive. Without it, root is effectively demoted to a nobody-level user.
"server_ip:/usr/portage" what you are going to mount
"/usr/portage" where you are going to mount it locally
"nfs" the filesystem type
"bg,hard 0 0" common options I have always used
"nfs" tells what modules to load on boot for that kernel-version
from make.conf man page:
"Portage uses lockfiles to ensure competing instances don't clobber each other's files. This feature is enabled by default but may cause heartache on less intelligent remote filesystems like NFSv2 and some strangely configured Samba server (oplocks off, NFS re-export). A tool /usr/lib/portage/bin/clean_locks exists to help handle lock issues when a problem arises (normally due to a crash or disconnect)."
I found it practical to directly mount the NFS share at /usr/portage on my laptop. That way a fallback portage tree exists, which is hidden whenever the up-to-date portage tree is mounted over it.
192.168.0.11:/portage/ /usr/portage nfs4 noauto,rw 0 0
Another script then decided whether to mount the NFSv4 share or not, depending on the location of my laptop.
- no change of make.conf
- no change of symlinks
- when all else fails, some portage tree still remains usable (i.e. a portage tree providing the important ebuilds and distfiles necessary to establish a network link to 192.168.0.11).
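The location-aware mount script is not included here; a minimal sketch of the idea follows (the server address and mount point match the fstab example above, everything else is hypothetical):

```shell
#!/bin/sh
# Mount the NFS portage share only when the home server is reachable;
# otherwise silently fall back to the local tree under the mount point.
SERVER=192.168.0.11
MOUNTPOINT=/usr/portage

# Succeeds when the portage server answers a single ping within 2s.
server_reachable() {
    ping -c 1 -W 2 "$SERVER" > /dev/null 2>&1
}

if server_reachable; then
    # At home: mount the shared tree (uses the noauto fstab entry)
    mountpoint -q "$MOUNTPOINT" || mount "$MOUNTPOINT"
else
    # On the road: the local fallback tree stays visible
    echo "NFS server unreachable; using local portage tree"
fi
```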
Synchronizing portage ends with a (re)build of the portage cache. This cache speeds up the query process, but it is located outside the portage tree (/var/cache/edb/dep).
After a server-side sync, the client's cache becomes stale and you will see a slowdown in the first emerge you run: the client has to rebuild its cache before exploring the package data.
To avoid this slow down, you can rebuild the cache in the client (nightly in crontab, if you like):
bash_client # emerge --metadata
This will populate the local cache.
$ man emerge
for more details.
To have eix update its database correctly, write this file on the clients:
(For versions of eix before 0.11.1, use the value 'none' instead of 'parse'.)
Note that the update now takes a while. Add update-eix to your crontab to run a while after each emerge --sync.
Complaints? Compliments? Praise? Let us know.
Xushi - One thing to note is how easy/hard it is when a person takes the machine (or laptop) off the nfs share, and wants to sync without editing any/all files again. Can i just emerge --sync and expect it to update my local portage? Do i have to edit make.conf every time? etc..
Answer: whenever the NFS share is not mounted, you can just emerge --sync and it will update your local portage tree. That's why /usr/portage was used as the mount point: when the NFS share is mounted, /usr/portage is whatever is on the server; when it's not mounted, it's just a directory. While the NFS share is mounted, whatever you had in that directory is hidden and you only see the share.
I had a problem working in an NFS-mounted directory: an "ls -l /nfsdrive" would hang until it timed out. The cause was a default read/write block size that was too high. Mounting with smaller rsize/wsize values fixed it:
mount -t nfs -o rw,rsize=1024,wsize=1024 192.168.0.1:/usr/portage /usr/portage/
If you mount your portage tree somewhere other than /usr/portage and you get an error message:
!!! ARCH is not set... Are you missing the /etc/make.profile symlink?
!!! Is the symlink correct? Is your portage tree complete?
It's possible that your /etc/make.profile symlink is pointing to a location that no longer exists. In my case I simply recreated the symlink using the command below (obviously the profile and platform depend on your setup, so don't just copy and paste).
ln -sf /mynfsserver/mypath/portage/profiles/default-linux/amd64/2006.0/ /etc/make.profile
If you get an error like "Error starting NFS daemon", try:
- mount -t nfsd nfsd /proc/fs/nfsd
This is what worked for me.
If you have a problem with blocking emerge processes on the client or very bad performance, your network setup might have a problem. In my case, the client was connected at 10 Mbit/s, the server at 100 Mbit/s. Both should be connected at the same rate; also check the duplex mode! I could not change this and got lots of packet fragmentation and failed re-assembly of packets (see the output of netstat -s and nfsstat -o net). Solution: I set the NFS block size to the largest multiple of 1024 below the MTU size. On Ethernet the MTU is typically 1500, so NFS then uses only packets of 1024 bytes (parameters rsize and wsize):
192.168.x.x:/usr/portage /mnt/nfs_portage nfs rw,noatime,rsize=1024,wsize=1024 0 0
When mounting manually, use parameter "-o rsize=1024,wsize=1024" for mount.
NOTE: According to the man page, this might be related to using UDP (the default for NFS versions before 4). TCP handles fragmentation better (less data to retransmit in case of a transfer error). You could try to either just use NFSv4, which defaults to TCP, and/or force TCP with the mount option "proto=tcp". Also, you don't have to specify a multiple of 1024 as the parameter, because the value gets rounded down automatically (which helps if you have to use a variable). -- snv
- NFS/Share Directories
- Additional information: http://nfs.sourceforge.net/nfs-howto/ar01s05.html#packet_size