[Contents] [Prev. Chapter] [Next Section] [Next Chapter] [Index] [Help]

5    Setting Up an NFS Service

A Network File System (NFS) service includes one or more file systems, Advanced File System (AdvFS) filesets, or Logical Storage Manager (LSM) volumes that a member system exports to clients, making the data highly available. NFS services can also include highly available applications.

An NFS service name is assigned its own Internet address. The member system that runs the service responds to this address. This makes the service autonomous and not dependent on the availability of any particular member system. Clients access the service by including the service name and the exported directory path in their /etc/fstab file. If the service stops on a member system, it fails over to another viable member system, and clients experience only a short timeout.
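For example, a client of a hypothetical service named ase3 that exports /ase_dir might use an /etc/fstab entry like the following (the service name, export path, and mount point are illustrative, not from this manual):

```
/ase_dir@ase3   /mnt/ase_dir    nfs rw,hard,intr 0 0
```

Because the entry references the service name rather than a member system's host name, it remains valid no matter which member system is currently running the service.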

The NFS service name also allows you to use the POLYCENTER NetWorker Save and Restore (NetWorker) to back up the service's storage. NetWorker treats the NFS service as an independent client and stores the storage indexes under the name of the service. This enables you to back up and recover the service's storage independently of the member system running the service. See the NetWorker documentation for information about using NetWorker to back up an NFS service's storage.

To set up an NFS service, you should be familiar with setting up NFS in general, the /etc/exports file, and the /etc/fstab file.

Before you set up an NFS service, the client and member systems must be running NFS Version 2 or Version 3 and must use the Address Resolution Protocol (ARP). You must also prepare the shared disks that will be used in the service and install any application used in the service.

To fail over an application in addition to disks, you must, at a minimum, create a user-defined start action script that contains the commands to start the application and a user-defined stop action script that contains the commands to stop it. See Chapter 4 for more information about preparing disks, applications, and action scripts for a service.
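As a sketch of what such scripts contain, the following shell fragment shows hypothetical start and stop actions for an illustrative application. The PID file location and the application command are assumptions, not from this manual; real ASE action scripts are two separate files that you supply through the asemgr utility.

```shell
#!/bin/sh
# Hypothetical ASE-style start and stop actions -- a sketch only.
# A stand-in command is used here; a real script starts your application.

PIDFILE=/tmp/myapp.pid          # assumed location; choose one that suits your site
APP="sleep 300"                 # stand-in for the real application startup command

start_service() {
    $APP &                      # start the application in the background
    echo $! > $PIDFILE          # record its PID so the stop action can find it
}

stop_service() {
    # stop the application if it is running, then clean up the PID file
    if [ -f $PIDFILE ]; then
        kill `cat $PIDFILE` 2>/dev/null
        rm -f $PIDFILE
    fi
}

start_service                   # what a start action script would do
stop_service                    # what a stop action script would do
```

A start action script must leave the application running and exit with success; the matching stop action script must shut the application down cleanly so the service can be relocated to another member system.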



5.1    NFS Service Requirements

Network File System (NFS) services have the following requirements:



5.2    NFS Service Components

When you add a Network File System (NFS) service to an available server environment (ASE), the asemgr utility prompts you for service-specific information, in addition to information that is similar to what you specify with the nfssetup script. See nfssetup(8) for more information.

You can specify the following NFS service information:

If you also want to fail over an application, you must modify the NFS service and specify the action scripts. See Chapter 4 for information about action scripts. See Chapter 10 for information about modifying services.



5.3    Understanding the Service Exports File

When you add a Network File System (NFS) service to an available server environment (ASE), the TruCluster software edits the /etc/exports.ase file on each member system and includes an entry that specifies the service's exports file. For example:

 # more exports.ase
 
 .INCLUDE /etc/exports.ase.aseba1
 .INCLUDE /etc/exports.ase.aseba2
 #

Service exports file names have the following syntax:

/etc/exports.ase.service

The service variable specifies the service name.
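For instance, for a hypothetical service named ase3, the service exports file would be:

```
/etc/exports.ase.ase3
```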

A service exports file contains a list of all the file systems and filesets in the service and their mount points, using a format that is similar to the /etc/exports file. It can include the remote hosts, netgroups, or Internet Protocol (IP) addresses to which access to the service's file systems or filesets is restricted. If none are specified in the file, then all remote hosts can mount the directory. See exports(4) for information about the file format.

Entries in service exports files include a -m option, which specifies the actual mount point for a file system or fileset.

Note

Do not manually edit the /etc/exports.ase.service file to modify services; instead, use the asemgr utility to make modifications.

To delete a file system or fileset from an NFS service, use the asemgr utility to remove its entry from the /etc/exports.ase.service file. When you delete a file system or fileset, the asemgr utility prompts you to invoke an editor, giving you the opportunity to delete the entry at that time. If you choose not to run an editor at that time, you must remember to delete the entry later.

The following example shows an exports file for an NFS service with two file systems:

#
#  ASE exports file for service aseba2 (edit only with asemgr)
#
 
#/dev/rz25c exports (after this line) - DO NOT DELETE THIS LINE
/ase/aseba2 -m=/var/ase/mnt/aseba2/ase/aseba2 -ro=0
 
#/dev/rz26c exports (after this line) - DO NOT DELETE THIS LINE
/ase/aseusr -m=/var/ase/mnt/aseba2/ase/aseusr  testit milan tabby
#



5.4    Adding a Basic NFS Service

To add a Network File System (NFS) service to an available server environment (ASE), choose the "Adding a new service" item from the Service Configuration menu and provide the appropriate information for the service at the prompts. Example 5-1 shows an example of adding a basic NFS service that includes a UNIX file system and a Logical Storage Manager (LSM) volume.

Example 5-1:  Adding a Basic NFS Service

# asemgr
 

.
.
.
Adding a service

Select the type of service:

     1) NFS service
     2) Disk service
     3) User-defined service
     4) DRD service
     5) Tape service

     q) Quit without adding a service
     x) Exit
     ?) Help

Enter your choice [1]: 1

You are now adding a new NFS service to the ASE.

An NFS service consists of an IP host name and disk configuration
that are failed over together.  The disk configuration can include
UFS file systems, AdvFS filesets, and LSM disk groups.

NFS Service Name

The name of an NFS service is a unique IP host name that has been
set up for this service.  This host name must exist in the local
hosts database on all ASE members.

Enter the NFS service name: ase3

Checking to see if ase3 is a valid host...

Specifying Disk Information

Enter one or more UFS device special files, AdvFS filesets, or LSM
volumes to define the disk storage for this service.

For example:
     Device special file:  /dev/rz3c
     AdvFS fileset:        domain1#set1
     LSM volume:           /dev/vol/dg1/vol01

To end the list, press the Return key at the prompt.

Enter a device special file, an AdvFS fileset, or an LSM volume as
storage for this service (press 'Return' to end): /dev/rz25c

Enter the directory pathname(s) to be NFS exported from the storage
area "/dev/rz25c".  Press 'Return' when done.

Enter a directory pathname: /ase_dir
Enter a host name, NIS netgroup, or IP address for the
NFS exports list (press 'Return' for all hosts): [Return]

Enter a directory pathname: [Return]

UFS File System Read-Write Access

Mount /dev/rz25c file system with read-write or read-only access?

     1) Read-write
     2) Read-only

Enter your choice [1]: [Return]

You may enable user and group quotas on this file system by
specifying full path names for the quota files.  If you place the
files within the service's file systems, the quota assignments you
make with edquota will relocate with the service.  Enter "none" to
disable quotas.

User quota file [/var/ase/mnt/ase3/ase_dir/quota.user]: [Return]
Group quota file [/var/ase/mnt/ase3/ase_dir/quota.group]: [Return]

UFS Mount Options Modification

Enter a comma-separated list of any mount options you want to use
for "/dev/rz25c" (in addition to the UFS-specific defaults listed
in the mount.8 reference page).  If none are given, only the
default mount options are used.

Enter options (Return for none): noexec

Enter a device special file, an AdvFS fileset, or an LSM volume as
storage for this service (press 'Return' to end): /dev/vol/dg3/vol04

Enter the directory pathname(s) to be NFS exported from the storage
area "/dev/vol/dg3/vol04".  Press 'Return' when done.

Enter a directory pathname: /ase_data

Enter a host name, NIS netgroup, or IP address for the
NFS exports list (press 'Return' for all hosts): net_staff
Enter a directory pathname: [Return]

The following is a list of device(s) and pubpath(s) for disk
group dg3:

     DEVICE    PUBPATH

     rz20c     /dev/rz20c

Is this correct (y/n) [y]: y

UFS File System Read-Write Access

Mount /dev/vol/dg3/vol04 file system with read-write or read-only
access?

     1) Read-write
     2) Read-only

Enter your choice [1]: 2

UFS Mount Options Modification

Enter a comma-separated list of any mount options you want to use
for "/dev/vol/dg3/vol04" (in addition to the UFS-specific defaults
listed in the mount.8 reference page).  If none are given, only the
default mount options are used.

Enter options (Return for none): nosuid

Enter a device special file, an AdvFS fileset, or an LSM volume as
storage for this service (press 'Return' to end): [Return]

NFS needs a disk area that is writable to keep some state
information for NFS locking during ASE operation.  Choose a disk
area that is writable and will not fill up.

Select the disk area to use for the NFS locking information:

     1) /dev/rz25c (UFS)
     2) /dev/vol/dg3/vol04 (UFS)
     x) Exit

Enter your choice [1]: 1

Selecting an Automatic Service Placement (ASP) Policy

Select the policy you want ASE to use when choosing a member to run
this service:

     b) Balanced Service Distribution
     f) Favor Members
     r) Restrict to Favored Members

     x) Exit to Service Configuration
     ?) Help

Enter your choice [b]: b

Selecting an Automatic Service Placement (ASP) Policy

Do you want ASE to relocate this service to a more highly favored
member if one becomes available while the service is running
(y/n/?): y

Enter 'y' to add Service 'ase3' (y/n): y

Adding service...
Starting service...
Saving the updated database...
Service successfully added...



5.5    Adding an NFS Mail Service

You can use the TruCluster software to set up a mail system and make it highly available. The sendmail program uses the Simple Mail Transfer Protocol (SMTP) to deliver mail messages between users, systems, and networks. You can set up member systems as mail hubs (servers) so that other systems in your mail environment send mail to and through the mail hubs. If a problem occurs in a mail hub, the TruCluster software can fail over the mail to another hub and reroute incoming mail to the new hub.

Before setting up a mail service, you should understand how the sendmail program works. The sendmail program can receive mail from an SMTP connection or directly from a process; that is, from the mail system or some user interaction. In either case, sendmail processes a message as follows:

  1. The sendmail program writes the message to a mail queue area, which is the /var/spool/mqueue directory by default.

  2. After the entire mail message is written to the mail queue area, sendmail tells the sending process that the mail was received, so the sending process knows the message is safely queued. If the machine crashes at this point, a copy of the mail remains in the mail queue area, so the mail is not lost.

  3. After a secure copy of the message is in the mail queue area, sendmail parses the address and delivers the message according to the instructions in the sendmail configuration file, /var/adm/sendmail/sendmail.cf.

  4. The sendmail program passes the mail to another delivery agent, such as DECnet, the UNIX-to-UNIX Copy Program (UUCP), or SMTP. In addition, local mail is passed to the local mailer (/usr/bin/mail) which, by default, delivers it to the system mailbox, /var/spool/mail/username. If the address is not local, sendmail passes the mail to the mail delivery agent on the remote machine. If sendmail cannot pass the mail (for example, if the remote machine is down), the mail remains in the queue area to be processed at a later time.

To set up a highly available mail service with the TruCluster software, the file systems or filesets containing the /var/spool/mail mailbox directory and the /var/spool/mqueue mail queue area must be shared between two or more systems. The /var/spool/mail mailbox directory must be shared so that the mail drop is available for mail delivery and processing on all the member systems that are set up as mail hubs. The /var/spool/mqueue queue area must be shared to ensure that any mail that remains in the queue area can be processed even if the hub that queued the mail is not available. In addition, both areas must have a common floating network connection to which mail can be sent.

You can share the mailbox directory and the queue area by using the TruCluster software to set up an NFS service to share the file systems or filesets and by modifying all the mail hubs' sendmail.cf configuration files.

The following steps describe how to set up two member systems as mail hubs (server1 and server2) using an NFS service named mail_hub:

  1. Set up the disk or disks that will contain the /var/spool/mail and /var/spool/mqueue file systems.

  2. Use the asemgr utility to set up the NFS service mail_hub that will export the /var/spool/mail and /var/spool/mqueue file systems.

    Note

    NFS locking (the lockd daemon) must be set up and running on the member systems that are mail hubs because locking will be done on both exported spool areas.

  3. Modify the mail_hub service's exports file. Use the asemgr utility to make the /var/spool/mail and /var/spool/mqueue file systems accessible by root.

  4. Set up the sendmail.cf configuration files on the member systems that will be mail hubs, server1 and server2. You must set up the sendmail.cf files to ensure that mail addressed to user@server1, user@server2, or user@mail_hub is delivered locally.

  5. NFS-mount the file systems on the mail hub member systems. On both mail hub member systems, server1 and server2, use the mount command to NFS-mount /var/spool/mail and /var/spool/mqueue from the service (Internet host name) mail_hub and then add this mount point to the /etc/fstab file on each member system.

  6. Optionally, define a Berkeley Internet Name Domain (BIND) mail exchanger (MX) record to point to all mail hubs. You can define an MX record so that if server1 is inaccessible, mail sent to server1 is forwarded to server2 and vice versa.

After you complete these steps, the mail service is ready to use. You can send mail to server1, server2, or mail_hub, and your mail will be delivered to the shared local /var/spool/mail area on the server1 and server2 mail hub member systems.

You can log in to server1, server2, or mail_hub and access your mail. You can also mount /var/spool/mail@mail_hub on another system and access your mail from that system. However, if one of the mail hub member systems goes down, mail sent directly to that mail hub member system will not be delivered until it reboots. You can fix this problem by defining the BIND MX records.

The following sections describe the steps in detail.



5.5.1    Preparing Disks for a Mail Service

To prepare the disks that will contain the shared /var/spool/mail and /var/spool/mqueue file systems, follow the guidelines specified in Chapter 4.

If you are using one disk partition for both the /var/spool/mqueue and /var/spool/mail directories, create an mqueue directory and a mail directory. You perform these tasks on only one mail hub member system.

The following example sets up a UNIX file system on an entire RZ10 disk and creates two directories:

# newfs /dev/rrz10c
# mount /dev/rz10c /mnt
# mkdir /mnt/mail
# chmod 1777 /mnt/mail
# mkdir /mnt/mqueue
# chmod 755 /mnt/mqueue
# umount /mnt



5.5.2    Using the asemgr Utility to Add a Mail Service

To add a mail service to your available server environment (ASE), run the asemgr utility on one mail hub member system, choose the "Add a new service" item from the Service Configuration menu, and provide the information appropriate for your configuration at the prompts.

Example 5-2 shows how to add a service named mail_hub, which consists of the /dev/rz10c file system and the /var/spool/mqueue and /var/spool/mail directories. The example also shows how to restrict access to the service to the server1 and server2 mail hub member systems.

Example 5-2:  Adding a Mail Service

# asemgr
 

.
.
.
Adding a service

Select the type of service:

     1) NFS service
     2) Disk service
     3) User-defined service
     4) DRD service
     5) Tape service

     q) Quit without adding a service
     x) Exit
     ?) Help

Enter your choice [1]: 1

You are now adding a new NFS service to the ASE.

An NFS service consists of an IP host name and disk configuration
that are failed over together.  The disk configuration can include
UFS file systems, AdvFS filesets, and LSM disk groups.

NFS Service Name

The name of an NFS service is a unique IP host name that has been
set up for this service.  This host name must exist in the local
hosts database on all ASE members.

Enter the NFS service name: mail_hub

Checking to see if mail_hub is a valid host...

Specifying Disk Information

Enter one or more UFS device special files, AdvFS filesets, or LSM
volumes to define the disk storage for this service.

For example:
     Device special file:  /dev/rz3c
     AdvFS fileset:        domain1#set1
     LSM volume:           /dev/vol/dg1/vol01

To end the list, press the Return key at the prompt.

Enter a device special file, an AdvFS fileset, or an LSM volume as
storage for this service (press 'Return' to end): /dev/rz10c

Enter the directory pathname(s) to be NFS exported from the storage
area /dev/rz10c.  Press 'Return' when done.

Enter a directory pathname: /var/spool/mail
Enter a host name, NIS netgroup, or IP address for the
NFS exports list (press 'Return' for all hosts): server1 server2

Enter a directory pathname: /var/spool/mqueue
Enter a host name, NIS netgroup, or IP address for the
NFS exports list (press 'Return' for all hosts): server1 server2

Enter a directory pathname: [Return]

UFS File System Read-Write Access

Mount /dev/rz10c file system with read-write or read-only access?

     1) Read-write
     2) Read-only

Enter your choice [1]: 1

You may enable user and group quotas on this file system by
specifying full path names for the quota files.  If you place the
files within the service's file systems, the quota assignments you
make with edquota will relocate with the service.  Enter "none" to
disable quotas.

User quota file [/var/ase/mnt/mail_hub/var/mail/quota.user]: none
Group quota file [/var/ase/mnt/mail_hub/var/mail/quota.group]: none

UFS Mount Options Modification

Enter a comma-separated list of any mount options you want to use
for /dev/rz10c (in addition to the UFS-specific defaults listed in
the mount.8 reference page).  If none are given, only the default
mount options are used.

Enter options (Return for none): [Return]

Enter a device special file, an AdvFS fileset, or an LSM volume as
storage for this service (press 'Return' to end): [Return]

Selecting an Automatic Service Placement (ASP) Policy

Select the policy you want ASE to use when choosing a member to run
this service:

     b) Balanced Service Distribution
     f) Favor Members
     r) Restrict to Favored Members

     x) Exit to Service Configuration
     ?) Help

Enter your choice [b]: b

Selecting an Automatic Service Placement (ASP) Policy

Do you want ASE to relocate this service if a more highly favored
member becomes available while the service is running (y/n/?): n

Enter 'y' to add Service 'mail_hub' (y/n): y

Adding service...
Starting service...
Saving the updated database...
Service successfully added...



5.5.3    Modifying the Mail Service's Exports File

On one mail hub member system, use the asemgr utility to edit the mail_hub service's exports file and make the /var/spool/mail and /var/spool/mqueue directories accessible by root on all the mail hub member systems. You must add the -root=0 option to the entries for the /var/spool/mqueue and /var/spool/mail directories in the /etc/exports.ase.mail_hub file.

To edit the mail_hub service's ASE exports file, follow these steps:

  1. Invoke the asemgr utility and choose the "Modify a service" menu item from the Service Configuration menu.

  2. Choose the name of the service you want to modify. In this example, choose mail_hub.

  3. Choose the "General service information" menu item when prompted for what you want to modify.

  4. Choose the disk area that contains the mail areas to modify. In this example, choose /dev/rz10c.

  5. Choose the "Modify the NFS exports list" menu item. The asemgr utility invokes an editor (as defined by the EDITOR environment variable) so you can edit the mail service's ASE exports file.

  6. Edit the exports file to include the -root=0 option. For example, the file should look like the following:

    #
    #  ASE exports file for service mail_hub
    #
     
    #/dev/rz10c exports (after this line) - DO NOT DELETE THIS LINE
    /var/spool/mqueue -root=0 server1 server2
    /var/spool/mail -root=0 server1 server2
    

  7. Exit the asemgr utility. The mail_hub service's exports file, /etc/exports.ase.mail_hub, is updated on all the mail hub member systems.



5.5.4    Setting Up the sendmail.cf Configuration File

On each mail hub member system, you must set up the sendmail.cf configuration file to handle mail sent directly to the mail hub member systems and to the mail service name as local mail. For example, if server1 and server2 are mail hub member systems for the NFS mail service mail_hub, you must set up the sendmail.cf configuration file on both mail hub member systems to ensure that mail sent to server1, server2, and mail_hub is handled as local mail.

Because server1, server2, and mail_hub share the /var/spool/mail area, mail sent to any of the three addresses is delivered to the shared local /var/spool/mail area.

You can use several methods to configure sendmail to do this:

You must configure the sendmail.cf file on all the mail hub member systems. To do this, invoke the mailsetup command and choose the option to perform an advanced mail setup. Add server1, server2, and mail_hub to the NICKNAMES FOR THIS MACHINE section. Example 5-3 shows how to use the mailsetup program.

Example 5-3:  Using mailsetup to Configure the sendmail.cf File

# mailsetup


.
.
.
NICKNAMES FOR THIS MACHINE

Are there any other names that are used to send mail to this
machine?  For instance, if you have changed this host's name (or
plan to in the near future), a nickname allows sendmail to
recognize both names, "pearly" and the nickname, as synonyms for
this machine.

Another good use for nicknames occurs when a host receives mail
from multiple different networks.  A host's name may not be the
same on all of the different networks.  Again, nicknames allows
sendmail to recognize these different names as synonyms for this
host.

Do you wish to enter nicknames for this machine (y/[n])? y

The following have been defined for the nicknames for server1 class:

add to list, delete from list, or continue on (a/d/c)? a

Enter additions to class (space or <cr> separated) - end list with a <cr>

? server1 server2 mail_hub
? [Return]

The following have been defined for the nicknames for server1 class:

     server1 server2 mail_hub

add to list, delete from list, or continue on (a/d/c)? c
.
.
.

If you have already set up mail using the mailsetup program, follow these steps to manually configure the /var/adm/sendmail/sendmail.cf file:

  1. Change your directory to /var/adm/sendmail.

  2. Edit the server1.m4 file and add server1, server2, and mail_hub to the definition of _MyNicknames:

    dnl -- Other names for me - aliases of my machine
    define(_MyNicknames,    {server1 server2 mail_hub})dnl
    

  3. Use the make command to update the server1.cf file:

    # make -f Makefile.cf.server1
     
    # mv sendmail.cf sendmail.cf.sav
    
    # cp server1.cf sendmail.cf
    

  4. Restart the sendmail program:

    # /sbin/init.d/sendmail restart
    

To directly edit the sendmail.cf file, follow these steps:

  1. Change to the /var/adm/sendmail directory.

  2. Edit the sendmail.cf file and add the following line:

    Cw server1 server2 mail_hub
    

  3. Restart the sendmail program:

    # /sbin/init.d/sendmail restart
    



5.5.5    Mounting the Disks

After you complete the preliminary steps, you can use the mail service. The final step is to mount the /var/spool/mqueue and /var/spool/mail directories on the server1 and server2 mail hub member systems. Perform the following steps on both mail hub member systems:

  1. Disable the sendmail program:

    # /sbin/init.d/sendmail stop
    

  2. If your mail hub member systems are active servers, you must save the old /var/spool/mqueue and /var/spool/mail areas so you do not lose any mail or queue files:

    # cd /var/spool
     
    # mv mqueue mqueue.old
     
    # mv mail mail.old
    

    You can move the old mail files to the new /var/spool/mail area after it is set up. You can process any queued mail later by using the following command:

    # sendmail -q -oQ/var/spool/mqueue.old
    

  3. Re-create the directories:

    # mkdir mqueue
     
    # mkdir mail
    

  4. Mount the mail service spool areas:

    # mount mail_hub:/var/spool/mqueue /var/spool/mqueue
    # mount mail_hub:/var/spool/mail /var/spool/mail
    

  5. Start the sendmail program:

    # /sbin/init.d/sendmail start
    

  6. Add the loopback mounts to the /etc/fstab file so they will mount on the next reboot. The lines in the /etc/fstab file should resemble the following:

    /var/spool/mqueue@mail_hub /var/spool/mqueue nfs rw,fg 0 0
    /var/spool/mail@mail_hub /var/spool/mail nfs rw,fg 0 0
    

    You must specify the fg option to ensure that /var/spool/mqueue is NFS-mounted before the sendmail program starts. Do not put the mount command into the background to retry the mount if the original mount fails, because the sendmail program could start before the mqueue area is mounted. This situation causes problems because sendmail tries to use the mount point for the mqueue area instead of the mounted file system.



5.5.6    Defining a BIND Mail Exchange Record

You can define a BIND mail exchanger (MX) record in a database file, such as the /etc/namedb/hosts.db file, on the primary BIND server to point to all your mail hubs. The sendmail program uses the BIND MX record to define a list of mail machines that can receive mail sent to a specific address. See the DIGITAL UNIX Network Administration manual for detailed information about BIND MX records.

The sendmail program delivers the mail to the machine with the lowest specified preference, if possible. If that machine is not available, it tries the machine with the next lowest preference, and so on.

You can specify both mail hub member systems as the mail exchange for each system. The following example shows the mail exchange resource records:

;name                   ttl   class   type   preference   deliver-to

server1.foo.com               IN      MX     1            server1.foo.com
                              IN      MX     100          server2.foo.com
server2.foo.com               IN      MX     1            server2.foo.com
                              IN      MX     100          server1.foo.com
mail_hub.foo.com              IN      MX     100          server1.foo.com
                              IN      MX     100          server2.foo.com

In this example, all mail addressed to server1 goes to server1 if it is available, because that record has a preference of 1. If server1 is unavailable, the mail goes to server2, which delivers it to the shared /var/spool/mail area. With this configuration, mail addressed to either mail hub member system continues to be delivered as long as at least one mail hub member system is available.



5.6    Accessing NFS Services from Client Systems

To access a Network File System (NFS) service from a client system, you must edit two system files:

