NFS - Automate attaching / detaching of shares

From Blue-IT.org Wiki

Revision as of 01:48, 19 August 2007 by Apos (talk | contribs) (Mount/umount on the client)

Prerequisites and what this article is NOT about

The prerequisites for understanding and applying this article are:

  • Administrative privileges to the system
  • Profound knowledge of
    • user administration,
    • bash scripting,
    • nfs administration

This article does not cover the nfs administration of a linux/unix system.

The solution shown here is in principle scalable to any system. However, the way the nfs clients are handled is only a quick hack: it is not threaded and is therefore only useful for small networks with a couple of PCs. Further development is possible.

This solution requires a new unprivileged user that is allowed in the sudoers list to run the mount and umount commands. Additionally, it requires setting up passwordless ssh connections between the machines for this user.

For some systems, this might be a security hole.

The problem: adding or removing shares

The main motivation for this HowTo was the problem that removing an exported - and mounted - nfs file system from the network has severe side effects:

  • All applications depending on the file system will fail or hang
  • The desktop manager will be left in an unusable state for minutes
  • Drives will not be synced, so data loss can occur

The solution: Publisher/Subscriber Pattern

If an nfs exported filesystem is removed from the nfs server, it must also be removed on all computers connected to that server. The server therefore has to keep a list of these computers. In terms of the publisher/subscriber pattern, it is the publisher.

The nfs clients are the subscribers: they provide the mechanisms to mount or unmount the file systems exported by the server.

Therefore they must connect to the server - and that is the problem with most current nfs client implementations: usually they connect to the nfs servers in the network at fixed time intervals. The publisher/subscriber pattern, however, wants the subscribers (clients) to be notified immediately when the system state changes, e.g. when a drive is unmounted.

All information in /etc/exports

The one and only file on the nfs server that keeps track of

  • nfs shares
  • possibly connected clients

is /etc/exports.

Shares

It is easy to extract all exported shares on the nfs server with a little bash and awk charm and store them in a shell variable called EXPORTS:

EXPORTS=$(grep -v '^#' /etc/exports | grep ' ' \
  | cut -f 1 | tr '\n' ' ')

It filters out all comment lines and all empty lines and writes the share paths, separated by spaces, into the string.
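To see what the pipeline produces, you can run it against a small sample file. The sketch below is illustrative only: the file path and its contents are made up, the fields are tab-separated, and `grep .` is used to drop empty lines instead of the `grep " "` from above, since the sample lines may not contain spaces.

```shell
# Build a hypothetical sample exports file (tab between share and clients)
printf '# a comment\n/media/MySpecialHarddisk\tclient1(rw,sync) client2(rw,sync)\n\n/srv/data\tclient1(ro)\n' \
  > /tmp/exports.sample

# Same idea as above: drop comments and empty lines, keep the first
# tab-separated field (the share path), join the result with spaces
EXPORTS=$(grep -v '^#' /tmp/exports.sample | grep . | cut -f 1 | tr '\n' ' ')
echo "EXPORTS: $EXPORTS"
```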

Clients

It should be possible to extract this information for each share from this file with a little bash magic. Feel free to do so. It might also be possible to write all of this in perl ;) Drop me a mail if you make something new.
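As a sketch of that bash magic: the fragment below prints, for each share, the clients named on its line in the exports file. It is an assumption-laden sketch, not part of the original solution: it assumes one share per line, no continuation backslashes, and no spaces in share paths.

```shell
# Print "share: client client ..." for each export line,
# stripping the (options) suffix from every client entry.
clients_per_share() {
  grep -v '^#' "$1" | grep . \
  | awk '{
      printf "%s:", $1
      for (i = 2; i <= NF; i++) {
        sub(/\(.*\)/, "", $i)   # drop the export options
        printf " %s", $i
      }
      print ""
    }'
}

if [ -f /etc/exports ]; then
  clients_per_share /etc/exports
fi
```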

In this solution I simply keep the nfs server's variables in a configuration file called nfs_clientlist.sh and source it. You can use the $HOSTNAME line if you have just one network card and have correctly configured your /etc/hosts:

############################################################
### PLEASE EDIT ############################################
############################################################

# All clients that use the nfs server on this machine
CLIENTLIST=" client1 client2 client3 "

# The hostname of this machine.
# Might be different with more than one network card.
#THISHOST=$HOSTNAME
THISHOST=myServerName

############################################################

Trigger mount/umount of shares remotely

For the nfs server machine to be able to trigger the mount/umount on connected clients, there are two possible ways:

  • Writing a client/server program that communicates on a special port.
  • Issuing a bash mount/umount via ssh on the remote machine

The first approach might be feasible in e.g. perl/python, but for my purposes connecting and reconnecting via ssh fits my needs.

Prepare every PC in the network

To be able to trigger the mount/umount on each client, we need a user with the following abilities:

  • Part of the sudoers list for mount/umount
  • Reachable via ssh without a password

Create an unprivileged user with ssh

Log in as root and create an unprivileged user with a unique user id (in my case 130) and give it the password udevmount. Because we enable passwordless login via ssh, a secure password does no harm. Afterwards we create the ssh keys. Please just press ENTER until the key generation is finished. We do not use a passphrase!

useradd -g 0 -m -u 130 -s /bin/bash udevmount
passwd udevmount
su udevmount -c /bin/bash
ssh-keygen -t dsa
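If you prefer to skip the interactive prompts entirely, ssh-keygen can also be driven non-interactively. Note that this sketch deviates from the article: it uses -t ed25519 because current OpenSSH releases have dropped DSA key generation, and the key path is just an example.

```shell
# Non-interactive key generation: -N '' sets an empty passphrase,
# -f picks the key file, -q suppresses output. Remove any stale
# test key first so ssh-keygen does not prompt about overwriting.
rm -f /tmp/udevmount_key /tmp/udevmount_key.pub
ssh-keygen -q -t ed25519 -N '' -f /tmp/udevmount_key
ls -l /tmp/udevmount_key.pub
```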

As root, add the user to the sudoers file:

visudo

Add the two lines

udevmount ALL=(ALL) NOPASSWD: /bin/mount
udevmount ALL=(ALL) NOPASSWD: /bin/umount

Export the ssh keys from the nfs server to each client

From the nfs server you have to append the generated public key to the authorized_keys2 file of each client you want to connect to.

Login as user udevmount:

su udevmount -c /bin/bash

Execute the following script on your nfs server as user udevmount

#!/bin/sh
#
# /root/bin/ssh_helper.sh

. /root/bin/nfs_clientlist.sh

chmod 600 ~/.ssh/id_dsa.pub

for client in $CLIENTLIST
do
 ssh $client "mkdir -p ~/.ssh; chmod 700 ~/.ssh"
 scp ~/.ssh/id_dsa.pub $client:~/.ssh/id_dsa.pub_${HOSTNAME}
 ssh $client "cat ~/.ssh/id_dsa.pub_${HOSTNAME} \
   >> ~/.ssh/authorized_keys2; \
   rm ~/.ssh/id_dsa.pub_${HOSTNAME}"
done

Here is the version for a single client as a one-liner ;)

CLIENT=mySingleClient; ssh $CLIENT "mkdir -p ~/.ssh; chmod 700 ~/.ssh"; scp ~/.ssh/id_dsa.pub $CLIENT:~/.ssh/id_dsa.pub_${HOSTNAME}; ssh $CLIENT "cat ~/.ssh/id_dsa.pub_${HOSTNAME} >> ~/.ssh/authorized_keys2; rm ~/.ssh/id_dsa.pub_${HOSTNAME}"

If you now connect from the server to a client via ssh, you should not be prompted for a password.

Mount/umount on the client

A script triggers the mount/umount on each client. It is called with the parameter [mount|umount]:

#!/bin/sh
#
# /root/bin/mount_nfs_exports_on_attached_clients

# Mount/umount all exported nfs shares on attached clients.
# Make an appropriate fstab entry on these clients.
#
# The mount directories AND client names MUST exactly match
# the udev and fstab entries on these machines - which
# should be the same.

export HOME=/root
export USER=root
export LANG=de_DE.UTF-8
export PATH=/usr/local/bin:/usr/bin:/usr/X11R6/bin:/bin:/opt/gnome/bin
. /root/bin/nfs_clientlist.sh

# Add some extra space to end and begin
CLIENTLIST=" ${CLIENTLIST} "

# Next line grabs all nfs exports
EXPORTS=$(grep -v '^#' /etc/exports | grep ' ' | cut -f 1 \
   | tr '\n' ' ')
EXPORTS=" $EXPORTS "
MOUNT=$1
ME=$0

echo $0 on $THISHOST.
echo CLIENTLIST: $CLIENTLIST 
echo exports: $EXPORTS

# force remount on network CLIENTLIST
for client in $CLIENTLIST
do
 if ping -c2 $client
 then
  logger -t $ME "Trying to do $MOUNT on $client - `date`"
  for export in $EXPORTS
  do
   su udevmount -c "/usr/bin/ssh udevmount@${client} 'sudo /bin/$MOUNT ${THISHOST}:${export}'" \
   | logger -t $ME 
  done
 fi
done
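Before wiring the script into the boot and shutdown sequence, it can help to dry-run the loop with the remote call replaced by an echo. Everything below is illustrative: the client and export names are placeholders, and the function only prints the commands the real script would hand to ssh instead of executing them.

```shell
# Hypothetical sample values standing in for nfs_clientlist.sh
# and the /etc/exports extraction
CLIENTLIST=" client1 client2 "
EXPORTS=" /media/MySpecialHarddisk "
THISHOST=myServerName
MOUNT=umount

# Print, instead of executing, the per-client per-share commands
dry_run() {
  for client in $CLIENTLIST
  do
    for export in $EXPORTS
    do
      echo "ssh udevmount@${client} 'sudo /bin/$MOUNT ${THISHOST}:${export}'"
    done
  done
}

dry_run
```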

Call the script on shutdown/boot

To notify all attached clients, call the script from your system's boot and shutdown scripts. On my ubuntu system this works as follows:

Boot

Just call the script in the file /etc/init.d/rmnologin. It is the last script run before the login prompt appears:

vim /etc/init.d/rmnologin

Add the line

# Mount nfs shares on connected pcs
sh /root/bin/mount_nfs_exports_on_attached_clients mount

Pay attention to the parameter mount when calling the script. This mounts all shares on the clients according to their fstab. The script /etc/rc.local did not work for me, because the network was not yet available when that script was called.

Shutdown

Unfortunately there is no runlevel script for this on ubuntu, so I derived one from rc.local accordingly:

vim /etc/init.d/rc.shutdown
#! /bin/sh

PATH=/sbin:/bin:/usr/sbin:/usr/bin
[ -f /etc/default/rcS ] && . /etc/default/rcS
. /lib/lsb/init-functions

do_stop() {
       if [ -x /etc/rc.shutdown ]; then
               log_begin_msg "Running local shutdown scripts (/etc/rc.shutdown)"
               /etc/rc.shutdown
               log_end_msg $?
       fi
}

case "$1" in
   start)
       # DO nothing
       ;;
   restart|reload|force-reload)
       echo "Error: argument '$1' not supported" >&2
       exit 3
       ;;
   stop)
       do_stop
       ;;
   *)
       echo "Usage: $0 stop" >&2
       exit 3
       ;;
esac

Now edit the init file:

vim /etc/rc.shutdown

#!/bin/sh -e
#
# rc.local
#
# This script is executed when the machine is shutting down or entering a
# runlevel without network support.
#
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.

# Unmount nfs shares on attached clients
sh /root/bin/mount_nfs_exports_on_attached_clients umount

exit 0

And add it to the runlevels

update-rc.d -f rc.shutdown defaults

This works out of the box.

Using udev to mount/umount drives

This chapter has additional prerequisites:

  • Editing special udev rules for the actions add/remove
  • Unique naming conventions for udev-rules and fstab

Udev Rule for an external USB drive

The udev rule we intend to write has to take care of

  • Giving the device to be mounted a unique name
  • Triggering our mount_nfs_shares... script when the device is added or removed
  • Working around a buggy udev behaviour that unpredictably fires on remove actions

The following udev rule satisfies all this:

vim /etc/udev/rules.d/10-local.rules

Unfortunately there is no variable handling in udev rules, so we have to spell out the name of our device two or three times. This name must exactly match the entry in /etc/fstab!

# LOKAL UDEV-RULES
#
# You can get infos about devices with
#
#   e.g. udevinfo  -a -p $(udevinfo -q path -n /dev/sda)

# External USB 3,5 MySpecialHarddisk
ACTION=="add",   SUBSYSTEMS=="usb", KERNEL=="sd?1", ATTRS{serial}=="200501CAF55", NAME="MySpecialHarddisk", SYMLINK:="disk/MySpecialHarddisk", GROUP="users", MODE="660", RUN+="/root/bin/udev_mount.sh $env{ACTION} 'MySpecialHarddisk' &"
ACTION=="remove", ENV{PRODUCT}=="67b/3507/1", RUN+="/root/bin/udev_mount.sh $env{ACTION} 'MySpecialHarddisk' &", OPTIONS+="ignore_remove"

The add action is no problem, but on the remove action the ATTRS{} keys of the device are not available anymore.

But with

udevmonitor --env

you will get the PRODUCT id - it appears as ENV{PRODUCT} in the udev rule - that matches the disconnected device. It took me over a year to figure this trick out!

On the clients, /etc/fstab should look like this:

# on client
vim /etc/fstab
[...]
nfs_server:/media/MySpecialHarddisk    /media/MySpecialHarddisk     nfs     defaults        0       0

The /etc/fstab on the server should accordingly mount the device the same way to /media/MySpecialHarddisk, with the noauto option:

# on nfs server
vim /etc/fstab

/dev/disk/MySpecialHarddisk  /media/MySpecialHarddisk  ext3      rw,noauto,sync,noexec,users  0 0

Triggering the clients

As you can see, this rule triggers a script called udev_mount.sh. It is responsible for:

  • mounting/umounting local devices according to the above udev rule
  • mounting/umounting the corresponding shares on remote clients

The following script does all this. Since udev/hal fires unpredictably when a device is removed, I block further calls of the script for 15 seconds (BLOCK_FOR_SECONDS). This should be enough on any system, but it is not a safe solution to the problem. I hope future versions of udev/hal will fix this.
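The 15-second blocking can be looked at in isolation. The sketch below is a minimal version of the same idea, not the script itself: a lock file stores a "blocked until" timestamp, and a second call inside the window is suppressed. The function and file names are illustrative.

```shell
# Return 0 (proceed) if no block is active, else 1 (suppress the
# event). On proceed, store now + BLOCK_FOR_SECONDS in the lock file.
BLOCK_FOR_SECONDS=15
throttle() {
  lockfile=$1
  now=$(date +%s)
  test -f "$lockfile" || echo 0 > "$lockfile"
  blocked_until=$(cat "$lockfile")
  if [ "$now" -lt "$blocked_until" ]
  then
    return 1            # still inside the blocking window
  fi
  expr "$now" + "$BLOCK_FOR_SECONDS" > "$lockfile"
  return 0
}
```

A first call proceeds and arms the window; an immediate second call is suppressed.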

This script takes two parameters, passed in by the udev rule:

  • the action that occurred: remove -> umount, add -> mount
  • the name of the device: mount/umount according to /etc/fstab

#!/bin/sh
#
# /root/bin/udev_mount.sh

# mount the device according to /etc/udev/rules.d/10-local.rules
# An appropriate fstab entry must exist, so that a plain
# mount /media/<NAME> will do the rest
BLOCK_FOR_SECONDS=15

export HOME=/root
export USER=root
export PATH="$PATH:/usr/local/bin:/sbin/:/usr/bin:/usr/sbin/:/usr/X11R6/bin:/bin:/opt/gnome/bin"
. /root/bin/nfs_clientlist.sh

ACTION=$1
NAME=$2
ME=$0
LOCKFILE=/tmp/udev_mount.lock
myTime=`date +%s`

# Work around unpredictably firing hal/udev events
test -f $LOCKFILE || echo 0 > $LOCKFILE
TIMESTAMP=`cat $LOCKFILE`
if [ $myTime -lt $TIMESTAMP ]
then
  #logger -t udev_mount.sh "exiting ..."
  exit 1
else
  #logger -t udev_mount.sh "setting new timestamp ..."
  expr $myTime + $BLOCK_FOR_SECONDS > $LOCKFILE
fi

###################################################
# NOW LETS go 

logger -t $ME "${NAME} ${ACTION}ed" 

# Check Lockfile
#test -f /tmp/$LOCKFILE && exit 1 || touch /tmp/$LOCKFILE

# --------------------------------------------------
# mount local devices
if [ "$ACTION" = "add" ]
then
 if mount | grep -q " /media/$NAME "
 then
  echo "/media/$NAME already mounted ..."
 else
  mkdir -p /media/$NAME
  mount /media/$NAME

  # force remount on network CLIENTLISTs
  for client in $CLIENTLIST
  do
    if ping -c2 $client
    then
      logger -t $ME "Trying to mount $NAME on $client."
      su udevmount -c "/usr/bin/ssh udevmount@${client} \
                      'sudo /bin/mount  ${THISHOST}:/media/${NAME}'" \
      | logger -t $ME
    fi
  done

 fi
fi

if [ "$ACTION" = "remove" ]
then
  
  # force remount on network CLIENTLISTs
  for client in $CLIENTLIST 
  do
    if ping -c2 $client
    then
     logger -t $ME "Trying to umount ${NAME} on $client"
     su udevmount -c "/usr/bin/ssh udevmount@${client} \
                      'sudo /bin/umount ${THISHOST}:/media/${NAME}'" \
     | logger -t $ME
    fi
  done

  #/etc/init.d/nfs-kernel-server restart
  #sleep 3

  #umount -f /media/${NAME}

fi

logger -t $ME "$ACTION of $NAME finished."

Test it

If you now add or remove a device, or shut down or boot your nfs server, the devices should seamlessly appear or disappear on all your clients. No desktop manager should hang anymore.

Discussion

I know that some aspects of this implementation could cause problems.

One is the fact that ping is used to check whether each PC in the client list is reachable. This might be a problem in secure networks where ping is not enabled or firewalls block it.

Another is that the udevmount user could be a security hole. The only way to decouple this is to write a real wrapper or client/server application that runs as root and tunnels this aspect.

For the system administrator it might be a bit complicated to handle all the files, but with just one nfs server it boils down to:

  • Add a custom udev rule
  • Unify the fstab mountpoints and udev names for the devices
  • Edit nfs_clientlist.sh with the client list and local hostname

The naming convention of using equal names in fstab, exports and udev rules could make mounts less flexible. Working with symlinks could fix that problem.