Participating in the AFS global namespace makes your cell's local file tree visible to AFS users in foreign cells and makes other cells' file trees visible to your local users. It makes file sharing across cells just as easy as sharing within a cell. This section outlines the procedures necessary for participating in the global namespace.
Participation in the global namespace is not mandatory. Some cells use AFS primarily to facilitate file sharing within the cell, and are not interested in providing their users with access to foreign cells.
Making your file tree visible does not mean making it vulnerable. You control how foreign users access your cell using the same protection mechanisms that control local users' access. See Granting and Denying Foreign Users Access to Your Cell.
The two aspects of participation are independent. A cell can make its file tree visible without allowing its users to see foreign cells' file trees, or can enable its users to see other file trees without advertising its own.
You make your cell visible to others by advertising your database server machines and allowing users at other sites to access your database server and file server machines. See Making Your Cell Visible to Others.
You control access to foreign cells on a per-client machine basis. In other words, it is possible to make a foreign cell accessible from one client machine in your cell but not another. See Making Other Cells Visible in Your Cell.
The AFS global namespace appears the same to all AFS cells that participate in it, because they all agree to follow a small set of conventions in constructing pathnames.
The first convention is that all AFS pathnames begin with the string /afs to indicate that they belong to the AFS global namespace.
The second convention is that the cell name is the second element in an AFS pathname; it indicates where the file resides (that is, the cell in which a file server machine houses the file). As noted, the presence of a cell name in pathnames makes the global namespace possible, because it guarantees that all AFS pathnames are unique even if cells use the same directory names at lower levels in their AFS filespace.
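As a sketch of this uniqueness guarantee (the cell names example.edu and example.org are hypothetical), two cells can use identical directory names below the cell level and the resulting AFS pathnames still differ:

```shell
# Both cells use the same lower-level directory name (usr/pat), but the
# cell-name element at the second level keeps the full pathnames distinct.
path_a=/afs/example.edu/usr/pat
path_b=/afs/example.org/usr/pat
if [ "$path_a" != "$path_b" ]; then
    echo "pathnames are unique"
fi
```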
What appears at the third and lower levels in an AFS pathname depends on how a cell has chosen to arrange its filespace. There are some suggested conventional directories at the third level; see The Third Level.
You make your cell visible to others by advertising your cell name and database server machines. Just as in the local cell, the Cache Manager on client machines in foreign cells uses this information to reach your cell's Volume Location (VL) Servers when it needs volume and file location information. For authenticated access, foreign clients must also be configured with the necessary Kerberos version 5 domain-to-realm mappings and Key Distribution Center (KDC) location information for both the local and foreign Kerberos version 5 realms.
There are two places you can make this information available:
In the global CellServDB file maintained by the AFS Registrar. This file lists the name and database server machines of every cell that has agreed to make this information available to other cells. The file is available at http://grand.central.org/csdb.html.
To add or change your cell's listing in this file, follow the instructions at http://grand.central.org/csdb.html. It is a good policy to check the file for changes on a regular schedule. An updated copy of this file is included with new releases of OpenAFS.
A file called CellServDB.local in the /afs/cellname/service/etc directory of your cell's filespace. List only your cell's database server machines in this file.
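A minimal CellServDB.local fragment might look like the following; the cell name, machine names, and addresses here are illustrative only. Each cell entry consists of a line beginning with > that names the cell, followed by one line per database server machine:

```
>example.edu            #Example University
192.0.2.10              #db1.example.edu
192.0.2.11              #db2.example.edu
```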
Update the files whenever you change the identity of your cell's database server machines. Also update the copies of the CellServDB files on all of your server machines (in the /usr/afs/etc directory) and client machines (in the /usr/vice/etc directory). For instructions, see Maintaining the Server CellServDB File and Maintaining Knowledge of Database Server Machines.
Once you have advertised your database server machines, it can be difficult to make your cell invisible again. You can remove the CellServDB.local file and ask the AFS Registrar to remove your entry from the global CellServDB file, but other cells probably have an entry for your cell in their local CellServDB files already. To make those entries invalid, you must change the names or IP addresses of your database server machines.
Your cell does not have to be invisible to be inaccessible, however. To make your cell completely inaccessible to foreign users, remove the system:anyuser group from all ACLs at the top three levels of your filespace; see Granting and Denying Foreign Users Access to Your Cell.
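As a sketch (with a hypothetical cell name), the following commands remove the system:anyuser entry from a top-level directory's ACL; granting the rights string none deletes the entry rather than storing an empty one:

```shell
# Hypothetical cell name; run on a machine with AFS client tools and
# administrative tokens for the cell.
fs setacl -dir /afs/example.edu -acl system:anyuser none
# Confirm that system:anyuser no longer appears on the ACL.
fs listacl -path /afs/example.edu
```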
To make a foreign cell's filespace visible on a client machine in your cell that is not configured for Freelance Mode or Dynamic Root mode, perform the following three steps:
Mount the cell's root.cell volume at the second level in your cell's filespace just below the /afs directory. Use the fs mkmount command with the -cell argument as instructed in To create a cellular mount point.
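For example (cell names hypothetical), a command of the following form creates a cellular mount point for a foreign cell just below /afs; the -fast flag skips checking that the volume exists in the local cell:

```shell
# Run with administrative tokens.  If your cell mounts root.afs
# read-write under a path such as /afs/.cellname, create the mount point
# there instead and then release root.afs so read-only clients see it.
fs mkmount -dir /afs/example.org -vol root.cell -cell example.org -fast
vos release root.afs
```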
Mount AFS at the /afs directory on the client machine. The afsd program, which initializes the Cache Manager, performs the mount automatically at the directory named in the first field of the local /usr/vice/etc/cacheinfo file or by the command's -mountdir argument. Mounting AFS at an alternate location makes it impossible to reach the filespace of any cell that mounts its root.afs and root.cell volumes at the conventional locations. See Displaying and Setting the Cache Size and Location.
Create an entry for the cell in the list of database server machines which the Cache Manager maintains in kernel memory.
The /usr/vice/etc/CellServDB file on every client machine's local disk lists the database server machines for the local and foreign cells. The afsd program reads the contents of the CellServDB file into kernel memory as it initializes the Cache Manager. You can also use the fs newcell command to add or alter entries in kernel memory directly between reboots of the machine. See Maintaining Knowledge of Database Server Machines.
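The file's format makes it straightforward to see which cells a client knows about. This sketch builds a sample file (contents illustrative) and extracts the cell names; lines beginning with > name a cell, and the lines that follow each name list that cell's database server machines:

```shell
# Create an illustrative CellServDB fragment in a temporary location.
cat > /tmp/CellServDB.sample <<'EOF'
>example.edu            #Example University
192.0.2.10              #db1.example.edu
192.0.2.11              #db2.example.edu
>example.org            #Example Project
198.51.100.5            #afsdb1.example.org
EOF
# A ">" prefix marks a cell-name line; strip the marker and the comment.
sed -n 's/^>\([^[:space:]]*\).*/\1/p' /tmp/CellServDB.sample
# To load a new entry into kernel memory without rebooting, one would
# run something like (requires a live AFS client):
#   fs newcell example.org 198.51.100.5
```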
Non-Windows client machines may enable Dynamic Root Mode by using the -dynroot option to afsd. When this option is enabled, all cells listed in the CellServDB file appear in the /afs directory, and the contents of the root.afs volume are ignored.
Windows client machines may enable Freelance Mode during client installation or by setting the FreelanceClient setting under Service Parameters in the Windows Registry, as described in the Release Notes. When this option is enabled, the root.afs volume is ignored and a mount point for each cell is automatically created in the \\AFS directory when the folder \\AFS\cellname is accessed and the foreign cell's Volume Location servers can be reached.
Note that making a foreign cell visible to client machines does not guarantee that your users can access its filespace. The ACLs in the foreign cell must also grant them the necessary permissions.
Making your cell visible in the AFS global namespace does not take away your control over the way in which users from foreign cells access your file tree.
By default, foreign users access your cell as the user anonymous, which means they have only the permissions granted to the system:anyuser group on each directory's ACL. Normally these permissions are limited to the l (lookup) and r (read) permissions.
There are three ways to grant wider access to foreign users:
Grant additional permissions to the system:anyuser group on certain ACLs. Keep in mind, however, that all users can then access that directory in the indicated way (not just specific foreign users you have in mind).
Enable automatic registration for users in the foreign cell. This may be done by creating a cross-realm trust in the Kerberos Database. Then add a PTS group named system:authuser and give it a group quota greater than the number of foreign users expected to be registered. After the cross-realm trust and the PTS group are created, the aklog command automatically registers foreign users as needed. Consult the documentation for your Kerberos server for instructions on how to establish a cross-realm trust.
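Once the cross-realm trust exists, the PTS side of the setup might look like the following sketch. The realm-qualified group name and the quota value are assumptions; check the documentation for your OpenAFS version for the exact group-naming convention:

```shell
# Hypothetical foreign realm name; run with AFS administrative tokens.
pts creategroup system:authuser@FOREIGN.EXAMPLE.ORG
# Allow up to 1000 foreign users to be registered on demand by aklog.
pts setfields system:authuser@FOREIGN.EXAMPLE.ORG -groupquota 1000
```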
Create a local authentication account for specific foreign users, by creating entries in the Protection Database, the Kerberos Database, and the local password file.