This is a step-by-step guide to installing OpenAFS and setting up an AFS cell on CentOS 7; it should also apply to Red Hat Enterprise Linux 7 and other distributions in the same family. It is current as of OpenAFS version 1.8.3 on CentOS 7.

This document is based on InstallingOpenAFSonRHEL and includes information from the Unix Quick Start Guide.

Naming conventions

When setting up an AFS cell on the internet, the convention is to use your internet domain name for your Kerberos realm and AFS cell name. The Kerberos realm name should be uppercase and the AFS cell name should be lowercase.
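
For example, if your internet domain is example.com, the names would be:

Kerberos realm:  EXAMPLE.COM
AFS cell name:   example.com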

Note, it is possible to create an AFS cell with a different name than the Kerberos realm (or even to use a single Kerberos realm for multiple cells). See the documentation for the OpenAFS krb.conf server configuration file for details on mapping realms to cell names.

Server setup

A minimal OS install is sufficient.

For a simple installation, you may use a single server to host the Kerberos KDC, OpenAFS database server, and OpenAFS fileserver. For a production environment, it is recommended that the Kerberos KDC be deployed on a dedicated, secure server, the OpenAFS database servers be deployed on three separate machines, and multiple file servers deployed as needed.

Disk Partitions

An important thing to keep in mind is that you'll need at least one partition on the file server to store volumes for AFS. This will be mounted at /vicepa. If you have multiple partitions they can be mounted at /vicepb, /vicepc, etc. The file server uses file-based storage (not block based). ext3, ext4, and xfs are commonly used filesystems on the vicep partitions.

Clients should have a dedicated partition for the file cache. The cache partition is traditionally mounted at /usr/vice/cache.
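
For illustration, here is one way to format and mount a dedicated partition for /vicepa, assuming a spare partition /dev/sdb1 (the device name is only an example; substitute your own disk layout):

# mkfs.xfs /dev/sdb1
# mkdir /vicepa
# echo '/dev/sdb1  /vicepa  xfs  defaults  0 0' >> /etc/fstab
# mount /vicepa

A client cache partition can be prepared the same way and mounted at /usr/vice/cache.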

Networking

DNS should be working correctly for forward and reverse name lookups before you begin the Kerberos and OpenAFS installation. bind can be installed if you need a local DNS server. Use system-config-bind to add a zone and entries.
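
You can check both lookup directions with the host command. For example, for a hypothetical server afs01.example.com with address 192.0.2.10, the forward lookup should return the address and the reverse lookup should return the hostname:

$ host afs01.example.com
$ host 192.0.2.10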

Servers need at least one IPv4 interface that is accessible by the AFS clients. IPv6 interfaces are not yet supported.

Time keeping

Kerberos, and therefore OpenAFS, requires good clock synchronization between clients and servers. As CentOS 7 enables chronyd for time synchronization out of the box, it is unlikely you will need to make a change.
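
You can confirm that chronyd is running and the clock is synchronized with:

# systemctl status chronyd
# chronyc tracking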

Firewall

The default firewall settings on RHEL will block the network ports used by Kerberos and OpenAFS. You will need to adjust the firewall rules on the servers to allow traffic on these ports.

On the Kerberos server, open udp port 88 (Kerberos) and udp port 464 (kpasswd):

# firewall-cmd --zone=public --add-port=88/udp
# firewall-cmd --zone=public --add-port=464/udp
# firewall-cmd --runtime-to-permanent

On the OpenAFS database servers, open udp ports 7002 (ptserver), 7003 (vlserver), and 7007 (bosserver):

# firewall-cmd --zone=public --add-port=7002/udp
# firewall-cmd --zone=public --add-port=7003/udp
# firewall-cmd --zone=public --add-port=7007/udp
# firewall-cmd --runtime-to-permanent

On the OpenAFS file servers, open udp ports 7000 (fileserver), 7005 (volserver), and 7007 (bosserver):

# firewall-cmd --zone=public --add-port=7000/udp
# firewall-cmd --zone=public --add-port=7005/udp
# firewall-cmd --zone=public --add-port=7007/udp
# firewall-cmd --runtime-to-permanent

OpenAFS clients use udp port 7001 (the cache manager callback port). Open udp port 7001 on any system that will have the OpenAFS client installed.

# firewall-cmd --zone=public --add-port=7001/udp
# firewall-cmd --runtime-to-permanent

Installing Kerberos

Install the Kerberos server and client packages with the command:

# yum install -y krb5-server krb5-workstation krb5-libs

Replace every instance of EXAMPLE.COM with your realm name in the following configuration files:

  • /etc/krb5.conf
  • /var/kerberos/krb5kdc/kdc.conf
  • /var/kerberos/krb5kdc/kadm5.acl

Replace every instance of the example hostname kerberos.example.com with the actual hostname of your Kerberos server in the file /etc/krb5.conf.
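
As a sketch, assuming the realm EXAMPLE.COM and the KDC host kerberos.example.com (substitute your own names), the relevant parts of /etc/krb5.conf look something like:

[libdefaults]
    default_realm = EXAMPLE.COM

[realms]
    EXAMPLE.COM = {
        kdc = kerberos.example.com
        admin_server = kerberos.example.com
    }

[domain_realm]
    .example.com = EXAMPLE.COM
    example.com = EXAMPLE.COM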

Create the Kerberos database using the kdb5_util command. You will be prompted for a master principal password. Choose a password, keep it secret, and keep it safe.

# /usr/sbin/kdb5_util create -s

Start the Kerberos servers.

# systemctl start krb5kdc
# systemctl start kadmin
# systemctl enable krb5kdc
# systemctl enable kadmin

Installing OpenAFS servers

Installing servers

The OpenAFS source tarballs are available on the OpenAFS website. You will need to build the source RPM with a script provided in the source tarball, and then build the binary RPMs using the rpmbuild command. There are third-party sources for pre-built packages, in particular the CentOS Storage SIG, but note that at least with the Storage SIG's packages, the configuration files are located in /etc/openafs and the server binaries in /usr/libexec/openafs instead of the traditional paths.

$ sudo yum install rpm-build yum-utils make perl libtool bzip2 wget

$ wget https://www.openafs.org/dl/openafs/<version>/openafs-<version>-src.tar.bz2
$ wget https://www.openafs.org/dl/openafs/<version>/openafs-<version>-doc.tar.bz2
$ wget https://www.openafs.org/dl/openafs/<version>/RELNOTES-<version>
$ wget https://www.openafs.org/dl/openafs/<version>/ChangeLog

$ tar xf openafs-<version>-src.tar.bz2 --strip-components=4 '*/makesrpm.pl'
$ perl makesrpm.pl openafs-<version>-src.tar.bz2 openafs-<version>-doc.tar.bz2 RELNOTES-<version> ChangeLog

$ sudo yum-builddep openafs-<version>-1.src.rpm

$ rpmbuild --rebuild \
    --define "build_userspace 1" \
    --define "build_modules 0" \
    openafs-<version>-1.src.rpm

where <version> is the OpenAFS version you wish to install, e.g. "1.8.5".

Use yum to install the OpenAFS server packages from your rpmbuild RPMS directory:

# yum install -y openafs-<version>-1.el7.x86_64.rpm openafs-server-<version>-1.el7.x86_64.rpm openafs-docs-<version>-1.el7.x86_64.rpm openafs-krb5-<version>-1.el7.x86_64.rpm

Create the Kerberos AFS service key and export it to a keytab file:

# cellname=<cellname>
# kadmin.local -q "addprinc -randkey -e aes256-cts-hmac-sha1-96:normal,aes128-cts-hmac-sha1-96:normal afs/${cellname}"
# kadmin.local -q "ktadd -k /usr/afs/etc/rxkad.keytab -e aes256-cts-hmac-sha1-96:normal,aes128-cts-hmac-sha1-96:normal afs/${cellname}"

where <cellname> is the name of your cell. Make note of the key version number (kvno) reported by ktadd, as it is needed in the next step where <kvno> appears.
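
If you did not note the kvno from the ktadd output, you can look it up afterwards; the key version number is shown in the "Key:" lines of the principal entry:

# kadmin.local -q "getprinc afs/${cellname}"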

# asetkey add rxkad_krb5 <kvno> 18 /usr/afs/etc/rxkad.keytab afs/${cellname}
# asetkey add rxkad_krb5 <kvno> 17 /usr/afs/etc/rxkad.keytab afs/${cellname}

If your Kerberos realm name is different from your cell name, add your uppercase realm name to /usr/afs/etc/krb.conf; otherwise authentication to your cell will fail and the cause can be hard to track down.
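
For example, if your cell name is example.com but your realm is EXAMPLE.ORG (hypothetical names), the krb.conf file simply contains the realm name:

# echo "EXAMPLE.ORG" > /usr/afs/etc/krb.conf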

Start the OpenAFS servers:

# systemctl start openafs-server
# systemctl enable openafs-server

Check the server log /usr/afs/logs/BosLog to verify the OpenAFS bosserver process started. Set the cell name with the command:

# bos setcellname localhost ${cellname} -localauth

Starting the database services

The ptserver process stores the AFS users and group names in your cell. The vlserver process stores the file server locations of the AFS volumes in your cell. Start the OpenAFS database processes with the commands:

# bos create localhost ptserver simple -cmd /usr/afs/bin/ptserver -localauth
# bos create localhost vlserver simple -cmd /usr/afs/bin/vlserver -localauth

Check the log files BosLog, PTLog, VLLog in the /usr/afs/logs directory to verify the ptserver and vlserver started.

Starting the file server

Start the file server. This is a rather long command line.

# bos create localhost \
   dafs dafs -cmd \
   "/usr/afs/bin/dafileserver -L" \
   "/usr/afs/bin/davolserver -p 64 -log" \
   "/usr/afs/bin/salvageserver" \
   "/usr/afs/bin/dasalvager -parallel all32" \
   -localauth

Check the server logs BosLog, FileLog, VolserLog, and SalsrvLog in /usr/afs/logs to verify the file server started. At this point the OpenAFS server processes should be running.
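
You can also ask the bosserver which processes it is supervising and whether they are running:

# bos status localhost -long -localauth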

Creating the admin account

Create a user account for Kerberos and AFS administration.

# myname=<username>
# kadmin.local -q "addprinc ${myname}/admin"
Enter password: <password>
Re-enter password: <password>
# pts createuser ${myname}.admin -localauth
# pts adduser ${myname}.admin system:administrators -localauth
# bos adduser localhost ${myname}.admin -localauth

where <username> is your user name and <password> is your chosen password.

The admin principal can be any name you want. The recommended practice is to create two principals for each admin: one for normal use and a separate admin account. For example, I may have steve and steve/admin. Note that in Kerberos 5 the name is steve/admin@REALM, whereas in AFS it is steve.admin; use steve.admin for all AFS commands. Since this account is to be an administrator, we register it as such with the bos server and give it administrator rights by adding it to the AFS default group system:administrators. The pts membership command lists all the groups a user is a member of; verify that it lists system:administrators.
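
For example, to verify the new admin account using the ${myname} variable set above:

# pts membership ${myname}.admin -localauth
# bos listusers localhost -localauth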

Create the root volumes

At this point we need our /vicepa partition. You should have done this when installing the operating system. If you have not, do it now, then restart the fileserver with systemctl restart openafs-server. (If this is only a test system you may create a pseudo partition without needing to create an actual separate filesystem. To do this, create an empty directory called /vicepa and then create an empty file called /vicepa/AlwaysAttach, then restart the file server with systemctl restart openafs-server.)

Create the root volumes with the commands:

# vos create localhost a root.afs -localauth
# vos create localhost a root.cell -localauth

Check the volume location database to verify the two volumes are listed.

# vos listvldb

Finally, now that the server configuration is done, put the bosserver into the more secure restricted mode, which disables several bos commands that are not strictly needed for normal operation.

# bos setrestricted localhost -mode 1 -localauth

This completes the server side setup. At this point you will need to install the OpenAFS cache manager (client), set up the top level directories, and then start adding files to your new cell. The cache manager may be installed on a separate machine (for example, your laptop). Also, you will no longer be using the root user to run OpenAFS commands; from this point forward you should use your Kerberos credentials.

Installing OpenAFS Client

Kernel Module

If installing the cache manager on an OpenAFS server, first remove the symlinks created by bosserver. These will be in the way if the client is installed.

# test -h /usr/vice/etc/ThisCell && rm /usr/vice/etc/ThisCell
# test -h /usr/vice/etc/CellServDB && rm /usr/vice/etc/CellServDB

The OpenAFS kernel module must match your kernel version. Unless you are maintaining a local yum repository that tracks every kernel release and updates its kmod builds, you will want to use the DKMS mechanism for installing the kernel module. If you are installing on a freshly patched machine, be sure to reboot before installing the OpenAFS kernel module.

$ sudo yum install rpm-build yum-utils make perl libtool bzip2 wget

$ wget https://www.openafs.org/dl/openafs/<version>/openafs-<version>-src.tar.bz2
$ wget https://www.openafs.org/dl/openafs/<version>/openafs-<version>-doc.tar.bz2
$ wget https://www.openafs.org/dl/openafs/<version>/RELNOTES-<version>
$ wget https://www.openafs.org/dl/openafs/<version>/ChangeLog

$ tar xf openafs-<version>-src.tar.bz2 --strip-components=4 '*/makesrpm.pl'
$ perl makesrpm.pl openafs-<version>-src.tar.bz2 openafs-<version>-doc.tar.bz2 RELNOTES-<version> ChangeLog

$ sudo yum-builddep openafs-<version>-1.src.rpm
$ sudo yum install "kernel-devel-uname-r == $(uname -r)"
$ sudo yum install elfutils-devel

$ rpmbuild --rebuild openafs-<version>-1.src.rpm

$ sudo yum install -y dkms gcc kernel-devel kernel-headers
$ cd ~/rpmbuild/RPMS/x86_64
$ sudo yum install -y \
    openafs-<version>-1.el7.x86_64.rpm \
    openafs-client-<version>-1.el7.x86_64.rpm \
    openafs-krb5-<version>-1.el7.x86_64.rpm \
    dkms-openafs-<version>-1.el7.x86_64.rpm

Client side configuration

/usr/afs/etc is the location for the server files. We also need to configure the client. The client files are located in /usr/vice/etc. RPM based OpenAFS packages are set up in such a way that there are two CellServDB client files in /usr/vice/etc: CellServDB.dist and CellServDB.local. We will copy ours to the local list.

# cp /usr/afs/etc/CellServDB /usr/vice/etc/CellServDB.local
# cp /usr/afs/etc/ThisCell /usr/vice/etc/ThisCell

The RPM based openafs-client init script will combine the CellServDB.dist and CellServDB.local files into the CellServDB file, which the cache manager reads on startup.
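
The CellServDB format is one line per cell, starting with > and the cell name, followed by one line per database server giving its IP address and the hostname in a comment. A hypothetical entry (your address and hostname will differ) looks like:

>example.com            #Example cell
192.0.2.10              #afs01.example.com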

Start the cache manager

Start the cache manager with the command:

# systemctl start openafs-client
# systemctl enable openafs-client

Run the mount command to verify the AFS filesystem is mounted at /afs.
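
For example, to confirm the mount and that the cache manager knows which cell it belongs to:

$ mount | grep /afs
$ fs wscell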

Try logging in to AFS. kinit logs you into Kerberos (this is the normal Kerberos utility). aklog gets you an AFS token. The tokens command lists the tokens you have. You should see afs@<cellname>. If you run into problems, you can use klist to list your Kerberos tickets, or aklog with the -d flag.

$ kinit <username>/admin
<password>
$ aklog
$ tokens

Setting up the cell root directory

Now we will set up the root directories. The root directory for the AFS namespace is in the volume called root.afs. The root directory of your cell should be in a volume called root.cell. You will need to set the ACLs for these directories. AFS access rights are rather different from those in UNIX. I suggest reading the IBM documentation for this; it still applies.

The cache manager is started in -dynroot mode on RPM-based installations. This allows the cache manager to mount the AFS filesystem without needing to contact the OpenAFS servers. The side effect of -dynroot is that the root.afs volume cannot be accessed directly. Fortunately, we can use the "magic" .:mount directory to access the root.afs volume.

Set up the top level directories.

$ cellname=$(cat /usr/vice/etc/ThisCell)

$ cd /afs/.:mount/${cellname}:root.afs/
$ fs mkmount ${cellname} root.cell -cell ${cellname}
$ fs mkmount .${cellname} root.cell -cell ${cellname} -rw
$ fs setacl . system:anyuser read

$ cd /afs/.:mount/${cellname}:root.cell/
$ fs setacl . system:anyuser read

Replicate the root volumes so that you have read only copies. Later, if more file servers are added to the cell, additional read-only copies should be made.

$ server=<hostname of the fileserver>
$ vos addsite ${server} a root.afs
$ vos release root.afs
$ vos addsite ${server} a root.cell
$ vos release root.cell

Adding users and volumes

Now that OpenAFS is installed, the site-specific AFS volumes and directory structure can be set up. User accounts should be created, along with their home volumes, and ACLs for the volume directories should be established.

This section provides an example setup. The names of the volumes and directories can be specific to your needs.

You must first authenticate as a Kerberos/AFS admin to run the commands shown in this section.

$ kinit <username>/admin
$ aklog

Creating user accounts

We can create a user by registering it to Kerberos and the ptserver database. If you use integrated login, make sure that the users' UNIX uids and pts ids match.

$ kadmin -q "addprinc <username>"
<enter password for username>
$ pts createuser <username> -id <numeric uid>

If you use integrated login, make sure that you add an entry to /etc/passwd or whatever means you use to distribute user information.
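
For example, a sketch using useradd on a client machine, for a hypothetical user alice whose pts id is 1001 and whose home directory will live in AFS (created in the next section):

# useradd -M -u 1001 -d /afs/example.com/home/alice alice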

Setting up volumes for users

First, we can make a top level volume to contain the mount points to volumes for individuals. The IBM documentation suggests making a directory /afs/<cellname>/user with volume name user for all of your AFS users. Some sites have adopted the directory home instead of user. If you use home, your users may feel more comfortable, as this is the convention in Linux and most UNIXes.

The following commands create the home volume and make a read-only replica:

$ vos create <fileserver> a home
$ cd /afs/.<cellname>
$ fs mkmount home home
$ vos addsite <fileserver> a home
$ vos release root.cell
$ vos release home

Now you can create directories for any of your users. We will not replicate these volumes. By not replicating them, we force the cache manager to access a read/write volume, which means that even if we access the cell through the read-only path we can still reach our read/write user directories (this is what you want). -maxquota 0 means there is no size restriction on the volume. You can set a quota if you like (the default is 5 MB). Run these commands for each user.

$ vos create <fileserver> a home.<uid> -maxquota 0
$ cd /afs/.<cellname>/home
$ fs mkmount <user> home.<uid>
$ vos release home

The home volume is released to make the new directories visible from the read-only mount point.

Setting ACLs

Now that we have volumes, we should set some restrictions on those volumes. If you trust the users not to make directories world writable, you can give the user of the directory full rights.

$ cd /afs/.<cellname>/home
$ fs setacl -dir <user> -acl <user> all

To give the users read and write access, but not rights to change ACLs,

$ cd /afs/.<cellname>/home
$ fs setacl -dir <user> -acl <user> write
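
You can confirm the resulting ACL with fs listacl:

$ fs listacl /afs/.<cellname>/home/<user>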