NFS - Automate attaching / detaching of shares


Prerequisites and what this article is NOT about

Prerequisites:

  • Administrative privileges to the system
  • Profound knowledge of
    • user administration,
    • bash editing,
    • nfs administration

This article does not cover the NFS administration of a Linux/Unix system.

The solution shown here is in principle scalable to any system. However, the way client mounting via NFS is handled is only a quick hack; it is not threaded and therefore only useful for small networks with a couple of PCs. Each NFS client is first pinged and then receives the mount/umount commands. Further development is possible.

This solution requires a new unprivileged user on each client. This user is in the sudoers list and is allowed to run the mount and umount commands.

Additionally, passwordless SSH connections between the machines are required for this user.

For some systems, this might be a security risk.

This solution requires that both the NFS clients and the server are permanently connected to the network.

The problem: adding or removing shares

The motivation for this HowTo was the problem that removing an exported - and mounted - NFS file system from the network has severe side effects on the clients:

  • All applications depending on the file system will fail / hang
  • The desktop manager will be left in an unusable state for minutes
  • Drives will not be synced; data loss can occur

The solution: Publisher/Subscriber Pattern

If an NFS-exported filesystem is removed from the NFS server, it must also be removed on all computers that are connected to the NFS server.

The server therefore has to have a list of these computers. In terms of the publisher/subscriber pattern, it is the publisher.

Network with NFS server and clients

The NFS clients are the subscribers: they subscribe to the server. In our approach the administrator does this by writing them into the server's client list.

To provide mechanisms for adding or removing shares on the clients, the server must be able to connect to the clients somehow - and that is the problem with most current NFS setups: usually the clients just reconnect to the NFS servers in the network at certain intervals and then wait a certain amount of time for a drive to reappear.

The publisher/subscriber pattern, however, wants the subscribers (clients) to be notified immediately when the system state changes, e.g. when a drive is unmounted.

And remember: any PC can be publisher and/or subscriber. So a mixed setup with more than one NFS server is no problem. Each server cares for its own clients, no matter what other servers do.

In my home setup, the server PC exports a USB backup drive and a home directory. The second machine is a laptop and exports its desktop. The third is the workhorse and exports music and video data. Each PC takes care of the others: no matter which one boots or shuts down, it unmounts its exported shares on the attached clients.

First open an editor

Log in as root and open the following files with your favorite editor.

gedit /etc/fstab \
      /etc/exports \
      /root/bin/nfs_clientlist.sh \
      /root/bin/mount_nfs_exports_on_attached_clients \
      /root/bin/udev_mount.sh

These are the files we will alter or create. The last two have to be made executable via chmod 755.
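
For example (using the paths from above):

chmod 755 /root/bin/mount_nfs_exports_on_attached_clients \
          /root/bin/udev_mount.sh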

All information is in /etc/exports

The file /etc/exports provides all the information we need:

  • the NFS shares' mount points
  • the possibly connected clients

Shares

It is easy to extract all exported shares of the NFS server with a little bash and awk charm and store them in a bash variable called EXPORTS:

EXPORTS=`cat /etc/exports | grep -v ^# | grep " " \
| cut -f 1 | awk '{system("echo -n " $1 "----")}' \
| sed s/----/"  "/g`

It filters out all comment lines and all empty lines and writes the extracted directory names - separated by spaces - into the string $EXPORTS.
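
A shorter equivalent - just a sketch, assuming whitespace-separated entries in /etc/exports - would be:

EXPORTS=`awk '!/^#/ && NF {printf "%s ", $1}' /etc/exports`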

Clients

It should be possible to extract this information for each share from this file with a little bash magic. Feel free to do so. It might also be possible to write all of this in Perl ;)

Drop me a mail if you have an idea.

In this solution I keep the variables for the NFS clients in a configuration file called nfs_clientlist.sh. You can use the $HOSTNAME line if you have just one network card and have correctly configured your /etc/hosts:

############################################################
### PLEASE EDIT ############################################
############################################################

# All CLIENTLIST that use the nfs server on this machine
CLIENTLIST=" client1 client2 client3 "

# The hostname of this machine.
# Might be different with more than one network card.
#THISHOST=$HOSTNAME
THISHOST=myServerName

############################################################
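
To quickly check the configuration you can source it in a shell - just a sanity check:

. /root/bin/nfs_clientlist.sh
echo "Server: $THISHOST - clients: $CLIENTLIST"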

Trigger the remote mount/unmount of shares

For the NFS server machine to be able to trigger the mount/umount on connected clients, there are two possible ways:

  • Writing a client/server program that communicates on a special port.
  • Issuing a bash mount/umount via ssh on the remote machine

The first approach might be possible in e.g. Perl/Python, but for my purposes connecting/reconnecting via SSH fits my needs.

Prepare the clients in the network

To be able to trigger the mount/umount on each client we need a user with the following abilities:

  • sudo for mount/umount
  • passwordless login via ssh from server

Create an unprivileged user with ssh

Log in as root, create an unprivileged user with a unique user id (in my case 130) and give it the password udevmount. Because we enable passwordless login via SSH (and will remove the password later), a strong password is not important here. Afterwards we create the SSH keys. Just press ENTER until the key generation is finished - do not use a passphrase!

useradd -g 0 -m -u 130 -s /bin/bash udevmount
passwd udevmount
su udevmount -c /bin/bash
ssh-keygen -t dsa

As root, add the user to the sudoers file:

visudo

Add these two lines:

udevmount ALL=(ALL) NOPASSWD: /bin/mount
udevmount ALL=(ALL) NOPASSWD: /bin/umount

For now we need a password. Later we will remove it.
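
To verify that the sudoers entries work, you can run on the client (as root), for example:

su udevmount -c "sudo /bin/mount --version"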

Prepare the server

Export the ssh keys from the nfs server to each client

On each NFS server you have to append the generated public key to the authorized_keys2 file of every client you want to connect to.

Log in as user udevmount:

su udevmount -c /bin/bash

Use a little helper script to create and distribute the SSH keys. Don't forget to edit your nfs_clientlist.sh in advance.

cd /root/bin
vim nfs_clientlist.sh
vim ssh_helper.sh
#!/bin/sh
#
# /root/bin/ssh_helper.sh

. /root/bin/nfs_clientlist.sh

for client in $CLIENTLIST
do
 ssh $client "mkdir -p ~/.ssh; chmod 700 ~/.ssh"
 scp ~/.ssh/id_dsa.pub $client:~/.ssh/id_dsa.pub_${HOSTNAME}
 ssh $client "cat ~/.ssh/id_dsa.pub_${HOSTNAME} \
   >> ~/.ssh/authorized_keys2; \
   chmod 600 ~/.ssh/authorized_keys2; \
   rm ~/.ssh/id_dsa.pub_${HOSTNAME}"
done

Execute the script:

sh ssh_helper.sh

If you now try to connect to your client via ssh you should not be prompted for a password.
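
A quick combined test from the server - client1 stands for one of your client names:

su udevmount -c "ssh udevmount@client1 'sudo /bin/mount --version'"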

Back on prepared clients

Remove the password of user udevmount

Since we have a passwordless SSH connection from the server to each client, we neither need nor want a password login for the user udevmount on the clients.

So on every client do (as root)

sudo passwd -d udevmount

This ensures that only servers that are authenticated via the SSH key can contact the client.

If you need to recover the SSH authentication, you have to add a password (sudo passwd udevmount) again and proceed as before.

Handle mount/umount on the client

A script will trigger the mount/umount on each client.

It is called with the parameter [mount|umount]. It is invoked e.g. from an init script, from udev, or manually on login/logout actions or whenever needed. We will see this later.

For now, let's write the script:

cd /root/bin
vim mount_nfs_exports_on_attached_clients
#!/bin/sh
#
# /root/bin/mount_nfs_exports_on_attached_clients.

# Unmount/mount all exported NFS shares on attached clients.
# An appropriate fstab entry must exist on these clients.
#
# The used mount directories AND client names MUST
# match the udev and fstab entries
# on these machines - which should be the same - exactly.

export HOME=/root
export USER=root
export PATH=/usr/local/bin:/usr/bin:/usr/X11R6/bin:/bin:/opt/gnome/bin
. /root/bin/nfs_clientlist.sh

# Add some extra space to end and begin
CLIENTLIST=" ${CLIENTLIST} "

# Next line grabs all nfs exports
EXPORTS=`cat /etc/exports | grep -v ^# | grep " " | \
   cut -f 1 | awk '{system("echo -n " $1 "----")}' | \
   sed s/----/"  "/g`
EXPORTS=" $EXPORTS "
MOUNT=$1
ME=$0

echo $0 on $THISHOST.
echo CLIENTLIST: $CLIENTLIST 
echo exports: $EXPORTS

# force remount on network CLIENTLIST
for client in $CLIENTLIST
do
 if ping -c2 $client
 then
  logger -t $ME "Trying to do $MOUNT on $client - `date`"
  for export in $EXPORTS
  do
   if [ "$MOUNT" == "mount" ]
   then
     # For removable devices check, if they are really mounted
     if mount | grep ${export}
     then
       su udevmount -c "/usr/bin/ssh udevmount@${client} \
       'sudo /bin/$MOUNT ${THISHOST}:${export}'" \
       | logger -t $ME 
     fi
   else
     su udevmount -c "/usr/bin/ssh udevmount@${client} \
     'sudo /bin/$MOUNT ${THISHOST}:${export}'" \
     | logger -t $ME
   fi
  done
 fi
done
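
The script can also be invoked by hand, which is useful for testing:

sh /root/bin/mount_nfs_exports_on_attached_clients mount
sh /root/bin/mount_nfs_exports_on_attached_clients umount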


Make the script executable by everyone but just editable by root:

chmod 755 mount_nfs_exports_on_attached_clients
chown root:root mount_nfs_exports_on_attached_clients

Call the script on shutdown/boot

To notify all attached clients, you should call the script in your system's shutdown and boot scripts. On my Ubuntu system this works like this:

Boot

Just call the script in the file /etc/init.d/rmnologin. It is the last script called on the system before the login prompt appears:

vim /etc/init.d/rmnologin

Add the line

# Mount nfs shares on connected pcs
sh /root/bin/mount_nfs_exports_on_attached_clients mount

Pay attention to the parameter mount when calling the script: it mounts this PC's exported shares on the attached clients according to their fstab entries. The script /etc/rc.local didn't work for me, because the network connections were not yet available when the script was called.

Edit a shutdown runlevel script

Unfortunately there is no runlevel script for this on Ubuntu, so I derived one from rc.local:

vim /etc/init.d/rc.shutdown
#! /bin/sh

PATH=/sbin:/bin:/usr/sbin:/usr/bin
[ -f /etc/default/rcS ] && . /etc/default/rcS
. /lib/lsb/init-functions

do_stop() {
       if [ -x /etc/rc.shutdown ]; then
               log_begin_msg "Running local shutdown scripts (/etc/rc.shutdown)"
               /etc/rc.shutdown
               log_end_msg $?
       fi
}

case "$1" in
   start)
       # DO nothing
       ;;
   restart|reload|force-reload)
       echo "Error: argument '$1' not supported" >&2
       exit 3
       ;;
   stop)
       do_stop
       ;;
   *)
       echo "Usage: $0 stop" >&2
       exit 3
       ;;
esac

Now edit the script file, which will be called from the above init script when shutdown occurs:

vim /etc/rc.shutdown
#!/bin/sh -e
#
# /etc/rc.shutdown
#
# This script is executed when the machine is shutting down or entering a runlevel
# without network support.
#
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.

# Unmount nfs shares on attached clients
/bin/sh /root/bin/mount_nfs_exports_on_attached_clients umount

exit 0

Update the runlevels

And add it to the runlevels

update-rc.d -f rc.shutdown defaults

A more restrictive way to do this is

update-rc.d rc.shutdown start 01 2 3 4 5 . stop 01 0 1 6 .

The last approach ensures that the script is started before the others.
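
You can verify the resulting links, for example with:

ls -l /etc/rc?.d/ | grep rc.shutdown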

This should work out of the box. If not, see the troubleshooting section at the end of this article.

Using desktop manager login/logout

Please see the troubleshooting section at the end of this article.

Using udev to mount/umount drives

This chapter has additional prerequisites:

  • Editing special udev rules for the actions add/remove
  • Unique naming conventions for udev-rules and fstab

If you don't use removable devices, like a USB backup drive or hot-swappable drives, you can skip the next paragraphs.

Udev Rule for an external USB drive

The udev rule we intend to write has to take care of

  • Giving the device to be mounted a unique name
  • Triggering our mount_nfs_shares... script when the device is added or removed
  • Working around a buggy udev behaviour that unpredictably fires on remove actions

The following udev rule takes care of all this:

vim /etc/udev/rules.d/10-local.rules

Unfortunately there is no variable handling in udev rules, so we have to specify the name of our device two or three times. This name must exactly match the entry in /etc/fstab!

# LOCAL UDEV RULES
#
# You can get information about a device with
#
#   e.g. udevinfo  -a -p $(udevinfo -q path -n /dev/sda)

# External USB 3,5 MySpecialHarddisk
ACTION=="add",   SUBSYSTEMS=="usb", KERNEL=="sd?1", \
ATTRS{serial}=="200501CAF55", NAME="MySpecialHarddisk", \
SYMLINK:="disk/MySpecialHarddisk", GROUP="users", MODE="660", \
RUN+="/root/bin/udev_mount.sh $env{ACTION} 'MySpecialHarddisk' &"

ACTION=="remove", ENV{PRODUCT}=="67b/3507/1", \
RUN+="/root/bin/udev_mount.sh $env{ACTION} 'MySpecialHarddisk' &", \
OPTIONS=="ignore_remove"

The add action is no problem, but on the remove action the ATTRS{} key for the device is not available anymore.

But with

udevmonitor --env

you will get the PRODUCT value - which becomes ENV{PRODUCT} in the udev rule - of the disconnected device. It took me over a year to figure out this trick!
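
To spot the value more easily while unplugging the device, you can filter the output, for example:

udevmonitor --env | grep PRODUCT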

Reload the rules

 udevcontrol reload_rules

On the clients, /etc/fstab should look like this:

# on client
vim /etc/fstab
[...]
nfs_server:/media/MySpecialHarddisk  /media/MySpecialHarddisk  nfs  defaults        0 0

The /etc/fstab on the server should accordingly mount the device the same way to /media/MySpecialHarddisk, with the noauto option:

# on nfs server
vim /etc/fstab

/dev/disk/MySpecialHarddisk  /media/MySpecialHarddisk  ext3  rw,noauto,sync,noexec,users  0 0
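
With this entry in place, the drive can be mounted on the server by its mount point alone - which is exactly what udev_mount.sh does below:

mount /media/MySpecialHarddisk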

Triggering the clients

As you can see, this rule triggers a script called udev_mount.sh. It is responsible for

  • mounting/unmounting the local device according to the above udev rule
  • mounting/unmounting the share on the remote clients

The following script does all this. Since udev/hal fires unpredictably when removing a device, I block repeated calls of the script for 15 seconds (BLOCK_FOR_SECONDS). This should be enough on any system, but it is not a safe solution for the problem. I hope further versions of udev/hal will fix this.

This script takes two parameters, which are passed by the udev rule:

  • the action that occurred: remove -> umount, add -> mount
  • the name of the device: mount/umount according to /etc/fstab
#!/bin/sh
#
# /root/bin/udev_mount.sh

# Mount the device according to /etc/udev/rules.d/10-local.rules.
# An appropriate fstab entry must exist, so that
# "mount /media/<NAME>" will do the rest.

export HOME=/root
export USER=root
export PATH="$PATH:/usr/local/bin:/sbin/:/usr/bin:/usr/sbin/:/usr/X11R6/bin:/bin:/opt/gnome/bin"
. /root/bin/nfs_clientlist.sh

# Add some extra space to end and begin
CLIENTLIST=" ${CLIENTLIST} "

ACTION=$1
NAME=$2
ME=$0
LOCKFILE=/tmp/udev_mount.lock
myTime=`date +%s`
BLOCK_FOR_SECONDS=15

# Troubleshooting of repeatedly firing hal/udev events:
# create the lock file on first use, then compare timestamps
test -f $LOCKFILE || echo 0 > $LOCKFILE
TIMESTAMP=`cat $LOCKFILE`
if [ $myTime -lt $TIMESTAMP ]
then
  #logger -t udev_mount.sh "exiting ..."
  exit 1
else
  #logger -t udev_mount.sh "setting new timestamp ..."
  expr $myTime + $BLOCK_FOR_SECONDS > $LOCKFILE
fi


###################################################
# NOW LETS go

logger -t $ME "${NAME} ${ACTION}ed" 

# --------------------------------------------------
# mount local devices
if [ "$ACTION" == "add" ]
then
 if mount | grep /media/$NAME 
 then
  echo already mounted ...
 else
  mkdir -p /media/$NAME
  mount /media/$NAME

  # force remount on network clients
  for client in $CLIENTLIST
  do
    if ping -c2 $client
    then
      logger -t $ME "Trying to mount $NAME on ${client}."
      su udevmount -c "/usr/bin/ssh udevmount@${client} \
                      'sudo /bin/mount  ${THISHOST}:/media/${NAME}'" \
      | logger -t $ME
    fi
  done

 fi
fi

if [ "$ACTION" == "remove" ]
then
  
  # force remount on network clients
  for client in $CLIENTLIST 
  do
    if ping -c2 $client
    then
     logger -t $ME "Trying to do unmount ${NAME} on ${client}."
     su udevmount -c "/usr/bin/ssh udevmount@${client} \
                      'sudo /bin/umount ${THISHOST}:/media/${NAME}'" \
     | logger -t $ME
    fi
  done

  # The following is a bad idea, because udev handles this:
  #umount -f /media/${NAME}

fi

logger -t $ME "$ACTION of $NAME finished."
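
You can simulate the udev call by hand to test the script, using the example device name from above:

sh /root/bin/udev_mount.sh add MySpecialHarddisk
sh /root/bin/udev_mount.sh remove MySpecialHarddisk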

For security reasons do

chmod 755 udev_mount.sh
chown root:root udev_mount.sh

Test it

Now add or remove a device on the server and watch what your clients are doing. Open several terminals.

Do your shares appear?

watch -n2 df -h

What does syslog say?

tail -f /var/log/syslog

What does udev say?

udevmonitor --env

Shut down or reboot your NFS server; the exported devices should seamlessly appear/disappear on all your clients.

No desktop manager should hang anymore when shutting down your server ;)

Discussion

I know that some aspects of this implementation could cause problems.

Clientlist

There is just one NFS client list for each server. This issue might be solved if I find the time to extract each client from the exports file.

Time critical

The scripts are a little bit time critical. They assume that all clients/servers are running and that the network connections are OK. There are no further checks.

Each client is pinged twice before the ssh command is executed, and the clients are probed one after another. These processes are NOT forked/threaded (see the sketch below).
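
A simple way to parallelise the probing - just a sketch, not part of the scripts above - is to run each client's block in a background subshell and wait for all of them:

for client in $CLIENTLIST
do
 (
  if ping -c2 $client > /dev/null 2>&1
  then
   # the existing ssh mount/umount commands for $client go here
   logger -t mount_nfs_exports "probed $client"
  fi
 ) &
done
wait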

Ping security risk

One issue is the fact that ping is used to detect running clients. This might be a problem in secure networks where ping is not enabled or firewalls are blocking it.

User udevmount security risk

Another is that the udevmount user could be a security hole. The only way to decouple this is to write a real wrapper or client/server application that runs as root and tunnels this aspect.

But consider

  • udevmount has no password; only SSH-authenticated servers can connect. To install the SSH authentication you need root access on both client and server
  • all scripts are owned by root:root and can only be written by root

Not handy for big networks

For the system administrator it might be a little bit complicated to handle all the files, but with just one NFS server it boils down to:

  • Add a custom udev rule
  • Unify the fstab mountpoints and udev names for the devices
  • edit nfs_clientlist.sh with the client list and the local hostname.

But a good system administrator can handle all this with a little bash magic ;)

Naming convention for fstab, exports, udev

The naming convention of equal names in fstab, exports and udev could lead to less flexible mounts. Using symlinks could fix this problem.

On the other hand this could lead to a network-wide unification, which isn't such a bad thing.
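
For example, a symlink keeps the canonical name expected by the scripts while exposing a friendlier path (the target path is only an example):

ln -s /media/MySpecialHarddisk /srv/backup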

Some might object to the file location /root/bin. Everything could be moved to udevmount's home directory and protected specially, so that only root is able to read and execute the files.

The next script will help install a new drive on the server:

#!/bin/sh
#
# Add a new udev device on the nfs server
# that matches the naming conventions

# Used for devicename, mountpoint, export
NAME_OF_DEVICE="Tevion_1_240MB"

# Udev params
UDEV_SUBSYSTEMS="usb"
UDEV_KERNEL="sd?a"
ATTRS_SERIAL="200501CAF55"
ENV_PRODUCT="67b/3507/1"
FILE_UDEV=/etc/udev/rules.d/10-local.rules

# local fstab
FILE_FSTAB=/etc/fstab
FSTAB_MOUNT_DIR=/media
FSTAB_FILESYSTEM="ext3"
FSTAB_MOUNT_OPTS="rw,noauto,sync,dirsync,noexec,nodev,noatime,users"
FSTAB_BOOT_OPTS="0 0"

# nfs exports
FILE_EXPORTS=/etc/exports
NFS_OPTIONS="rw,sync,no_root_squash,no_subtree_check"
CLIENTLIST="ibmr31" # Overwritten, when /root/bin/nfs_clientlist.sh exists


#################################################
#### Lets go ####################################
#################################################
echo "#-----------------------------------------"
echo "# Writing the udev rule"
echo "#-----------------------------------------"
cat - <<EOF
# added by `pwd`/$0
# External ${UDEV_SUBSYSTEMS} device - ${NAME_OF_DEVICE}
ACTION=="add",  SUBSYSTEMS=="${UDEV_SUBSYSTEMS}", KERNEL=="${UDEV_KERNEL}", ATTRS{serial}=="${ATTRS_SERIAL}", NAME="${NAME_OF_DEVICE}", SYMLINK:="disk/${NAME_OF_DEVICE}", GROUP="users", MODE="660", RUN+="/root/bin/udev_mount.sh $env{ACTION} '${NAME_OF_DEVICE}' &"
ACTION=="remove", ENV{PRODUCT}=="${ENV_PRODUCT}", RUN+="/root/bin/udev_mount.sh $env{ACTION} '${NAME_OF_DEVICE}' &", OPTIONS=="ignore_remove"
EOF

echo "#-----------------------------------------"
echo "# fstab"
echo "#-----------------------------------------"
cat - <<EOF
# added by `pwd`/$0
# External ${UDEV_SUBSYSTEMS} device - ${NAME_OF_DEVICE}
/dev/disk/${NAME_OF_DEVICE}  ${FSTAB_MOUNT_DIR}/${NAME_OF_DEVICE}  ${FSTAB_FILESYSTEM} ${FSTAB_MOUNT_OPTS} ${FSTAB_BOOT_OPTS}
EOF

echo "#-----------------------------------------"
echo "# exports"
echo "#-----------------------------------------"
# source CLIENTLIST if it exists
if test -e /root/bin/nfs_clientlist.sh
then
 . /root/bin/nfs_clientlist.sh
fi

NFS_STRING=${FSTAB_MOUNT_DIR}/${NAME_OF_DEVICE}

for client in $CLIENTLIST
do
       NFS_STRING="${NFS_STRING} ${client}(${NFS_OPTIONS})"
done

# write the file
cat - <<EOF
# added by `pwd`/$0
# External ${UDEV_SUBSYSTEMS} device - ${NAME_OF_DEVICE}
${NFS_STRING}
EOF

What remains now is to add the device to the fstab on the attached NFS clients.
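
For the example device above, the fstab entry on a client would look like this (assuming the server name from nfs_clientlist.sh):

myServerName:/media/Tevion_1_240MB  /media/Tevion_1_240MB  nfs  defaults  0 0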

Troubleshooting

System hangs nevertheless

Probably you disconnected the network. A permanent network connection is one of the prerequisites for an NFS network (see the intro of this article).

First reconnect or restart the NFS server that causes the problem. Then wait a little bit - the lost share should reappear and the hung system should work again.

Then shut down the system normally and your scripts should work.

Path

On some systems, your environment will not match the scripts' assumptions:

  • Is your shell /bin/sh?
  • Is your mount command /bin/mount, and umount /bin/umount?
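
You can check both, for example, with:

ls -l /bin/sh
which mount umount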

General

Open all these files in one editor on your NFS server as root:

gedit /etc/udev/rules.d/10-local.rules \
      /etc/fstab \
      /etc/exports \
      /etc/hosts \
      /root/bin/nfs_clientlist.sh \
      /root/bin/mount_nfs_exports_on_attached_clients \
      /root/bin/udev_mount.sh

On your NFS client open:

gedit /etc/fstab

Then check the following:

  • The mount points under /media and the device names in /etc/udev/rules.d/10-local.rules and /etc/fstab are unique.
  • Are all NFS imports in /etc/fstab named correctly according to the naming conventions on the server? Same unique device name, same mount point.
  • When you are using more than one network card (think of the WLAN card in your laptop), check that you are using the correct IP/name mapping.

PC too fast

Use gdm logout/login

My laptop shuts down too fast, so the network connection is lost before the NFS clients can be contacted. The problem also occurs when using NetworkManager for the network connections (e.g. a wireless card).

You can then try to trigger the mount/umount script via the GNOME login/logout scripts, which you find in the following files:

# Run something ...

# ...before the desktop manager session starts:
/etc/gdm/PreSession/Default

# ... after the login:
/etc/gdm/PostLogin/Default

# ... after the gnome session ends
# e.g. pressing the logout button
/etc/gdm/PostSession/Default

Put the following command at the beginning of these files and use mount or umount as needed:

/bin/sh /root/bin/mount_nfs_exports_on_attached_clients [mount|umount]

Alter the runlevels

Here is another way to initialise the rc.shutdown script:

Remove the old links:

$ update-rc.d -f rc.shutdown remove
Removing any system startup links for /etc/init.d/rc.shutdown ...
  /etc/rc0.d/K20rc.shutdown
  /etc/rc1.d/K20rc.shutdown
  /etc/rc2.d/S20rc.shutdown
  /etc/rc3.d/S20rc.shutdown
  /etc/rc4.d/S20rc.shutdown
  /etc/rc5.d/S20rc.shutdown
  /etc/rc6.d/K20rc.shutdown

Add rc.shutdown so that it is started first:

$ update-rc.d rc.shutdown start 01 2 3 4 5 . stop 01 0 1 6 .
Adding system startup for /etc/init.d/rc.shutdown ...
  /etc/rc0.d/K01rc.shutdown -> ../init.d/rc.shutdown
  /etc/rc1.d/K01rc.shutdown -> ../init.d/rc.shutdown
  /etc/rc6.d/K01rc.shutdown -> ../init.d/rc.shutdown
  /etc/rc2.d/S01rc.shutdown -> ../init.d/rc.shutdown
  /etc/rc3.d/S01rc.shutdown -> ../init.d/rc.shutdown
  /etc/rc4.d/S01rc.shutdown -> ../init.d/rc.shutdown
  /etc/rc5.d/S01rc.shutdown -> ../init.d/rc.shutdown