10    Optimizing Techniques

Optimizing an application program can involve modifying the build process, modifying the source code, or both.

In many instances, optimizing an application program can result in major improvements in run-time performance. Two preconditions should be met, however, before you begin measuring the run-time performance of an application program and analyzing how to improve the performance:

After you verify that these conditions have been met, you can begin the optimization process.

The process of optimizing an application can be divided into two separate, but complementary, activities:

The following sections provide details that relate to these two aspects of the optimization process.

10.1    Guidelines to Build an Application Program

Opportunities to improve an application's run-time performance exist in all phases of the build process. The following sections identify some of the major opportunities that exist in the areas of compiling, linking and loading, preprocessing and postprocessing, and library selection.

10.1.1    Compilation Considerations

Compile your application with the highest optimization level possible, that is, the level that produces the best performance and the correct results. In general, applications that conform to language-usage standards should tolerate the highest optimization levels; applications that do not conform to such standards may have to be built at lower optimization levels. See cc(1) or Chapter 2 for more information.

If your application will tolerate it, compile all of the source files together in a single compilation. Compiling multiple source files increases the amount of code that the compiler can examine for possible optimizations. This can have the following effects:

To take advantage of these optimizations, use the -ifo and either -O3 or -O4 compilation options.

To determine whether the highest level of optimization benefits your particular program, compare the results of two separate compilations of the program, with one compilation at the highest level of optimization and the other compilation at the next lower level of optimization. Some routines may not tolerate a high level of optimization; such routines will have to be compiled separately.

Other compilation considerations that can have a significant impact on run-time performance include the following:

-ansi_alias

Specifies whether source code observes ANSI C aliasing rules. ANSI C aliasing rules allow for more aggressive optimizations.

-ansi_args

Specifies whether source code observes ANSI C rules about arguments. If ANSI C rules are observed, special argument-cleaning code does not have to be generated.

-fast

Turns on the optimizations for the following options for increased performance:

-ansi_alias
-ansi_args
-assume trusted_short_alignment
-D_FASTMATH
-float
-fp_reorder
-ifo
-D_INLINE_INTRINSICS
-D_INTRINSICS
-intrinsics
-O3
-readonly_strings

-feedback

Specifies that the compiler should use the profile information contained in the specified file when performing optimizations. For more information, see Section 8.4.2.2.

-fp_reorder

Specifies whether certain code transformations that affect floating-point operations are allowed.

-G

Specifies the maximum byte size of data items in the small data sections (sbss or sdata).

-inline

Specifies whether to perform inline expansion of functions.

-ifo

Provides improved optimization (interfile optimization) and code generation across file boundaries that would not be possible if the files were compiled separately.

-O

Specifies the level of optimization that is to be achieved by the compilation.

-om

Performs a variety of code optimizations for programs compiled with the -non_shared option.

-preempt_module

Supports symbol preemption on a module-by-module basis.

-speculate

Enables work (for example, load or computation operations) to be done in running programs on execution paths before the paths are taken.

-tune

Selects processor-specific instruction tuning for specific implementations of the Alpha architecture.

-unroll

Controls loop unrolling done by the optimizer at levels -O2 and above.

Using the preceding options may cause a reduction in accuracy and adherence to standards. For more information on these options, see cc(1).

10.1.2    Linking and Loading Considerations

If your application does not use many large libraries, consider linking it nonshared. This allows the linker to optimize calls into the library, which decreases your application's startup time and improves run-time performance (if calls are made frequently). Nonshared applications, however, can use more system resources than call-shared applications. If you are running a large number of applications simultaneously and the applications have a set of libraries in common (for example, libX11 or libc), you may increase total system performance by linking them as call-shared. See Chapter 4 for details.

For applications that use shared libraries, ensure that those libraries can be quickstarted. Quickstarting is a Tru64 UNIX capability that can greatly reduce an application's load time. For many applications, load time is a significant percentage of the total time that it takes to start and run the application. If an object cannot be quickstarted, it still runs, but startup time is slower. See Section 4.7 for details.

10.1.3    Using the Postlink Optimizer

You perform postlink optimizations by using the -om option on the cc command line. This option must be used with the -non_shared option and must be specified when performing the final link. For example:


% cc -om -non_shared prog.c

The -om option can also benefit from the -feedback option, as discussed in Section 8.4.2.2.

The postlink optimizer performs the following code optimizations:

When you use the -om option, you get the full range of postlink optimizations. To specify a specific postlink optimization, use the -WL compiler option, followed by one of these options:

-om_compress_lita

This option removes unused .lita entries after optimization, and then compresses the .lita section.

-om_dead_code

This option removes dead code (unreachable instructions) generated after optimizations have been applied. The .lita section is not compressed by this option.

-om_feedback

This option directs the compiler to use the pixie-produced information stored in the augmented executable by means of the cc command's -feedback option and the pixie (or prof) command's -update option.

-om_ireorg_feedback,file

This option directs the compiler to use the pixie-produced information in file.Counts and file.Addrs to reorganize the instructions to reduce cache thrashing.

-om_no_inst_sched

This option turns off instruction scheduling.

-om_no_align_labels

This option turns off alignment of labels. Normally, the -om option will align the targets of all branches on quadword boundaries to improve loop performance.

-om_Gcommon,num

This option sets the size threshold of common symbols. Every common symbol whose size is less than or equal to num will be allocated close together.

For more information, see cc(1).

10.1.4    Preprocessing and Postprocessing Considerations

Preprocessing options and postprocessing (run-time) options that can affect performance include the following:

10.1.5    Library Routine Selection

Library routine options that can affect performance include the following:

10.2    Application Coding Guidelines

If you are willing to modify your application, use the profiling tools to determine where your application spends most of its time. Many applications spend most of their time in a few routines. Concentrate your efforts on improving the speed of those heavily used routines.

Tru64 UNIX provides several profiling tools that work for programs written in C and other languages. See Chapter 7, Chapter 8, Chapter 9, prof_intro(1), hiprof(1), pixie(1), prof(1), third(1), uprofile(1), and atom(1) for more information.

After you identify the heavily used portions of your application, consider the algorithms used by that code. Is it possible to replace a slow algorithm with a more efficient one? Replacing a slow algorithm with a faster one often produces a larger performance gain than tweaking an existing algorithm.

When you are satisfied with the efficiency of your algorithms, consider making code changes to help the compiler optimize the object code that it generates for your application. High Performance Computing by Kevin Dowd (O'Reilly & Associates, Inc., ISBN 1-56592-032-5) is a good source of general information on how to write source code that maximizes optimization opportunities for compilers.

The following sections identify performance opportunities involving data types, I/O handling, cache usage and data alignment, and general coding issues.

10.2.1    Data-Type Considerations

Data-type considerations that can affect performance include the following:

10.2.2    Using Direct I/O on AdvFS Files

Direct I/O allows an application to use the file-system features that the Advanced File System (AdvFS) provides, such as file management, online backup, and online recovery, while eliminating the overhead of copying user data into the AdvFS cache. Direct I/O uses Direct Memory Access (DMA) commands to copy the user data directly between an application's buffer and a disk.

Normal file-system I/O maintains file pages in a cache. This allows the I/O to be completed asynchronously; once the data is in the cache and scheduled for I/O, the application does not need to wait for the data to be transferred to disk. In addition, because the data is already in the cache, subsequent accesses to this page do not need to read the data from disk. Most applications use normal file-system I/O.

Normal file-system I/O is not suited for applications that access the data on disk infrequently and manage inter-thread competition themselves. Such applications can take advantage of the reduced overhead of direct I/O. However, because data is not cached, access to a given page must be serialized among competing threads. To do this, direct I/O enforces synchronous I/O as the default. This means that when the read() routine returns to the application, the I/O has completed and the data is on disk. Any subsequent retrieval of that data will also incur an I/O operation to retrieve the data from disk.

An application can take advantage of asynchronous I/O (AIO), but still use the underlying direct I/O mechanism, by using the aio_read() and aio_write() system routines. These routines will return to the application before the data has been transferred to disk, and the aio_error() routine allows the application to poll for the completion of the I/O. (The kernel synchronizes the access to file pages so that two threads cannot concurrently write the same page.)

Threads using direct I/O to access a given file will be able to do so concurrently, provided that they do not access the same range of pages. For example, if thread A is writing pages 10 through 19 and thread B is writing pages 20 through 39, these operations will occur simultaneously. Continuing this example, if thread B attempts to write pages 15 through 39 in a single direct I/O transfer, it will be forced to wait until thread A completes its write because their page ranges overlap.

When using direct I/O, the best performance occurs when the requested transfer is aligned on a disk sector boundary and the transfer size is an even multiple of the underlying sector size. Larger transfers are generally more efficient than smaller ones, although the optimal transfer size depends on the underlying storage hardware.

NOTE

Direct I/O mode and the use of mapped file regions (mmap) are exclusive operations. You cannot set direct I/O mode on a file that uses mapped file regions. Mapping a file will also fail if the file is already open for direct I/O.

Direct I/O and atomic data logging modes are also mutually exclusive. If a file is open in one of these modes, subsequent attempts to open the file in the other mode will fail.

You can activate the direct I/O feature for use on an AdvFS file for both AIO and non-AIO applications. To activate the feature, use the open function in an application, setting the O_DIRECTIO file access flag. For example:

 open ("file", O_DIRECTIO | O_RDWR, 0644)

Direct I/O mode remains in effect until the file is closed by all users.

The fcntl() function with the parameter F_GETCACHEPOLICY can be used to return the caching policy of a file, either FCACHE or FDIRECTIO mode. For example:

int fcntlarg = 0;
ret = fcntl( filedescriptor, F_GETCACHEPOLICY, &fcntlarg );
if ( ret != -1 && fcntlarg == FDIRECTIO ) {
   .
   .
   .
}

For details on the use of direct I/O and AdvFS, see fcntl(2) and open(2).

10.2.3    Cache Usage and Data Alignment Considerations

Cache usage patterns can have a critical impact on performance:

Data alignment can also affect performance. By default, the C compiler aligns each data item on its natural boundary; that is, it positions each data item so that its starting address is an even multiple of the size of the data type used to declare it. Data not aligned on natural boundaries is called misaligned data. Misaligned data can slow performance because it forces the software to make necessary adjustments at run time.

In C programs, misalignment can occur when you type cast a pointer variable from one data type to a larger data type; for example, type casting a char pointer (1-byte alignment) to an int pointer (4-byte alignment) and then dereferencing the new pointer may cause unaligned access. Also in C, creating packed structures using the #pragma pack directive can cause unaligned access. (See Chapter 3 for details on the #pragma pack directive.)

To correct alignment problems in C programs, you can use the -align option or you can make necessary modifications to the source code. If instances of misalignment are required by your program for some reason, use the __unaligned data-type qualifier in any pointer definitions that involve the misaligned data. When data is accessed through the use of a pointer declared __unaligned, the compiler generates the additional code necessary to copy or store the data without generating alignment errors. (Alignment errors have a much more costly impact on performance than the additional code that is generated.)

Warning messages identifying misaligned data are not issued during the compilation of C programs.

During execution of any program, the kernel issues warning messages ("unaligned access") for most instances of misaligned data. The messages include the program counter (PC) value for the address of the instruction that caused the misalignment.

You can use either of the following two methods to locate the code that causes an unaligned access fault:

For more information on data alignment, see Appendix A in the Alpha Architecture Reference Manual. See cc(1) for information on alignment-control options that you can specify on compilation command lines.

10.2.4    General Coding Considerations

General coding considerations specific to C applications include the following:

Also, avoid aliases where possible by introducing local variables to store dereferenced results. (A dereferenced result is the value obtained from a specified address.) Dereferenced values are affected by indirect operations and calls, but local variables are not; local variables can be kept in registers. Example 10-1 shows how the proper placement of pointers and the elimination of aliasing enable the compiler to produce better code.

Example 10-1:  Pointers and Optimization

Source Code:
int len = 10;
char a[10];
 
void
zero()
  {
  char *p;
  for (p = a; p != a + len; ) *p++ = 0;
  }

Consider the use of pointers in Example 10-1. Because the statement *p++ = 0 might modify len, the compiler must load it from memory and add it to the address of a on each pass through the loop, instead of computing a + len in a register once outside the loop.

You can use two different methods to increase the efficiency of the code used in Example 10-1: