Configuring Server Machines

This section discusses some issues to consider when configuring server machines, which store AFS data, transfer it to client machines on request, and house the AFS administrative databases. To learn about client machines, see Configuring Client Machines.

If your cell has more than one AFS server machine, you can configure them to perform specialized functions. A machine can assume one or more of four roles; for details about each role, see The Four Roles for File Server Machines.

The OpenAFS Quick Beginnings explains how to configure your cell's first file server machine to assume all four roles. Its chapter on installing additional server machines explains how to configure subsequent machines to perform one or more of the roles.

Replicating the OpenAFS Administrative Databases

The AFS administrative databases are housed on database server machines and store information that is crucial for correct cell functioning. Both server processes and Cache Managers access the information frequently:

  • Every time a Cache Manager fetches a file from a directory that it has not previously accessed, it must look up the file's location in the Volume Location Database (VLDB).

  • Every time a user obtains an AFS token from the Authentication Server, the server looks up the user's password in the Authentication Database.

  • The first time that a user accesses a volume housed on a specific file server machine, the File Server contacts the Protection Server for a list of the user's group memberships as recorded in the Protection Database.

  • Every time you back up a volume using the AFS Backup System, the Backup Server creates records for it in the Backup Database.

Maintaining your cell is simplest if the first server machine you install has the lowest IP address of any machine you plan to use as a database server machine. If you later decide to use a machine with a lower IP address as a database server machine, you must update the CellServDB file on all clients before introducing the new machine.
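
For illustration, here is the general shape of a client-side CellServDB file (conventionally /usr/vice/etc/CellServDB); the cell name, addresses, and host names are hypothetical. The cell's entry begins with the cell name and is followed by one line per database server machine:

    >example.com            #Example Corporation cell
    192.0.2.10              #db1.example.com
    192.0.2.11              #db2.example.com
    192.0.2.12              #db3.example.com

After editing the file, you can update a running client's kernel list of database server machines with the fs newcell command (issued as the local superuser root); otherwise the change takes effect at the next reboot:

    # fs newcell example.com 192.0.2.10 192.0.2.11 192.0.2.12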

If your cell has more than one server machine, it is best to run more than one as a database server machine (but more than three are rarely necessary). Replicating the administrative databases in this way yields the same benefits as replicating volumes: increased availability and reliability. If one database server machine or process stops functioning, the information in the database is still available from others. The load of requests for database information is spread across multiple machines, preventing any one from becoming overloaded.

Unlike replicated volumes, however, replicated databases do change frequently. Consistent system performance demands that all copies of the database always be identical, so it is not acceptable to record changes in only some of them. To synchronize the copies of a database, the database server processes use AFS's distributed database technology, Ubik. See Replicating the OpenAFS Administrative Databases.
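
As a sketch of how you might inspect and extend the list of database server machines (the machine names here are hypothetical; the complete procedure appears in the OpenAFS Quick Beginnings), the bos listhosts command displays the database server machines recorded in a server machine's /usr/afs/etc/CellServDB file, and the bos addhost command adds one:

    % bos listhosts fs1.example.com
    % bos addhost fs1.example.com db3.example.com

Repeat the bos addhost command for every server machine in the cell; the database server processes generally do not consult the revised list until they are restarted.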

If your cell has only one file server machine, it must also serve as a database server machine. If your cell has two file server machines, it is not always advantageous to run both as database server machines. If a server, process, or network failure interrupts communications between the database server processes on the two machines, it can become impossible to update the information in the database because neither of them can alone elect itself as the synchronization site.

AFS Files on the Local Disk

It is generally simplest to store the binaries for all AFS server processes in the /usr/afs/bin directory on every file server machine, even if some processes do not actively run on the machine. This makes it easier to reconfigure a machine to fill a new role.

For security reasons, the /usr/afs directory on a file server machine and all of its subdirectories and files must be owned by the local superuser root and have only the first w (write) mode bit turned on. Some files even have only the first r (read) mode bit turned on (for example, the /usr/afs/etc/KeyFile file, which lists the AFS server encryption keys). Each time the BOS Server starts, it checks that the mode bits on certain files and directories match the expected values. For a list, see the OpenAFS Quick Beginnings section about protecting sensitive AFS directories, or the discussion of the output from the bos status command in To display the status of server processes and their BosConfig entries.
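
As a quick manual check (the machine name is hypothetical), you can list the ownership and mode bits on the sensitive directories yourself and let the BOS Server report any problems it finds; the bos status output includes a warning message if the mode bits do not match the expected values:

    # ls -ld /usr/afs /usr/afs/etc /usr/afs/etc/KeyFile /usr/afs/local
    % bos status fs1.example.com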

For a description of the contents of all AFS directories on a file server machine's local disk, see Administering Server Machines.

Configuring Partitions to Store AFS Data

The partitions that house AFS volumes on a file server machine must be mounted at directories named

/vicepindex

where index is one or two lowercase letters. By convention, the first AFS partition created is mounted at the /vicepa directory, the second at the /vicepb directory, and so on through the /vicepz directory. The names then continue with /vicepaa through /vicepaz, /vicepba through /vicepbz, and so on, up to the maximum supported number of server partitions, which is specified in the OpenAFS Release Notes.

Each /vicepx directory must correspond to an entire partition or logical volume, and must be a subdirectory of the root directory (/). It is not acceptable to configure part of (for example) the /usr partition as an AFS server partition and mount it on a directory called /usr/vicepa.
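
As an illustrative sketch (the device name, file system type, and machine name are hypothetical), dedicating a partition to AFS data involves creating the mount point directly under the root directory, mounting the partition on it, and adding an /etc/fstab entry so that it is mounted at every reboot:

    # mkdir /vicepa
    # mount /dev/sdb1 /vicepa
    # echo '/dev/sdb1  /vicepa  ext4  defaults  0 2' >> /etc/fstab

After the File Server is restarted and detects the new partition, the vos partinfo command reports the space available on it:

    % vos partinfo fs1.example.com /vicepa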

Also, do not store non-AFS files on AFS server partitions. The File Server and Volume Server expect to have available all of the space on the partition. Sharing space also creates competition between AFS and the local UNIX file system for access to the partition, particularly if the UNIX files are frequently used.

Monitoring, Rebooting and Automatic Process Restarts

AFS provides several tools for monitoring the File Server, including the scout and afsmonitor programs. You can configure them to alert you when certain threshold values are exceeded, for example when a server partition is more than 95% full. See Monitoring and Auditing AFS Performance.
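
For example, a scout invocation along the following lines (the machine names are hypothetical; see the scout reference page for the exact -attention syntax) monitors two file server machines and highlights any server partition that is more than 95% full:

    % scout -server fs1.example.com fs2.example.com -attention disk 95%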

Rebooting a file server machine requires shutting down the AFS processes and so inevitably causes a service outage. Reboot file server machines as infrequently as possible. For instructions, see Rebooting a Server Machine.
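
When a reboot is unavoidable, the usual sequence (sketched here with a hypothetical machine name; the reboot command itself varies by operating system) is to stop the AFS server processes cleanly with the bos shutdown command before issuing the operating system's reboot command:

    # bos shutdown fs1.example.com -wait
    # shutdown -r now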

The BOS Server checks each morning at 5:00 a.m. for any newly installed binary files in the /usr/afs/bin directory. It compares the timestamp on each binary file to the time at which the corresponding process last restarted. If the timestamp on the binary file is later, the BOS Server restarts the process so that it begins using the new binary.
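
To see the timestamps that the BOS Server compares, or to restart a process immediately rather than waiting for the daily check, you can use commands like the following (the machine name is hypothetical, and fs is the conventional instance name for the traditional File Server process group; yours may differ):

    % bos getdate fs1.example.com fileserver
    % bos restart fs1.example.com fs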

The BOS Server also supports performing a weekly restart of all AFS server processes, including itself. This functionality is disabled in new installations, but historically the restart was scheduled for 4:00 a.m. each Sunday. Administrators may find that installations predating OpenAFS 1.6.0 have weekly restarts enabled.

The default times fall in the early morning hours, when the outage that results from restarting a process is likely to disturb the fewest users. You can display the restart times for each machine with the bos getrestart command, and set them with the bos setrestart command. The latter command also enables you to disable automatic restarts entirely by setting the time to never. See Setting the BOS Server's Restart Times.
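
For example (the machine name is hypothetical), the following commands display a machine's restart times, disable the general weekly restart, and move the binary-checking restart to 6:00 a.m.:

    % bos getrestart fs1.example.com
    % bos setrestart fs1.example.com -time never -general
    % bos setrestart fs1.example.com -time "6:00 am" -newbinary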