Section 21.4. Useful Utilities for C Programmers

Along with languages and compilers, there is a plethora of programming tools out there, including libraries, interface builders, debuggers, and other utilities to aid the programming process. In this section, we talk about some of the most interesting bells and whistles of these tools to let you know what's available.

21.4.1. Debuggers

Several interactive debuggers are available for Linux. The de facto standard debugger is gdb, which we just covered in detail.

In addition to gdb, there are several other debuggers, each with features very similar to gdb's. DDD (Data Display Debugger) is a graphical frontend to gdb with an X Window System interface similar to that of the xdbx debugger on other Unix systems. There are several panes in the DDD debugger's window. One pane looks like the regular gdb text interface, allowing you to input commands manually to interact with the system. Another pane automatically displays the current source file along with a marker showing the current line. You can use the source pane to set and select breakpoints, browse the source, and so on, while typing commands directly to gdb. The DDD window also contains several buttons that provide quick access to frequently used commands, such as step and next. Given the buttons, you can use the mouse in conjunction with the keyboard to debug your program within an easy-to-use X interface. Finally, DDD has a very useful mode that lets you explore the data structures of an unknown program.

KDevelop, the IDE, comes with its own, very convenient gdb frontend; it is also fully integrated into the KDE Desktop. We cover KDevelop at the end of this chapter.

21.4.2. Profiling and Performance Tools

Several utilities exist that allow you to monitor and rate the performance of your program. These tools help you locate bottlenecks in your code: places where performance is lacking. These tools also give you a rundown on the call structure of your program, indicating what functions are called, from where, and how often (in other words, everything you ever wanted to know about your program, but were afraid to ask).

gprof is a profiling utility that gives you a detailed listing of the running statistics for your program, including how often each function was called, from where, the total amount of time that each function required, and so forth.

To use gprof with a program, you must compile the program using the -pg option with gcc. This adds profiling information to the object file and links the executable with standard libraries that have profiling information enabled.

Having compiled the program to profile with -pg, simply run it. If it exits normally, the file gmon.out will be written to the working directory of the program. This file contains profiling information for that run and can be used with gprof to display a table of statistics.
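
The whole cycle can be sketched in a short shell session. This is a minimal, hypothetical example (the program myprog and its contents are placeholders, not from the text); it assumes gcc with glibc and the binutils gprof are installed:

    cat > myprog.c <<'EOF'
    #include <stdio.h>

    /* Something cheap to profile. */
    static long triangle(long n) {
        long i, sum = 0;
        for (i = 1; i <= n; i++)
            sum += i;
        return sum;
    }

    int main(void) {
        printf("%ld\n", triangle(100000));
        return 0;
    }
    EOF
    gcc -pg -g -o myprog myprog.c    # -pg inserts the profiling hooks
    ./myprog                         # normal run; writes gmon.out on clean exit
    gprof -b myprog gmon.out | head  # flat profile first, then the call graph

The -b option suppresses the explanatory notes in gprof's output, which is convenient once you know how to read the tables.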

As an example, let's take a program called getstat, which gathers statistics about an image file. After compiling getstat with -pg, we run it:

papaya$ getstat image11.pgm > stats.dat
papaya$ ls -l gmon.out
-rw-------   1 mdw      mdw         54448 Feb  5 17:00 gmon.out

Indeed, the profiling information was written to gmon.out.

To examine the profiling data, we run gprof and give it the name of the executable and the profiling file gmon.out:

papaya$ gprof getstat gmon.out

If you do not specify the name of the profiling file, gprof assumes the name gmon.out. It also assumes the executable name a.out if you do not specify that, either.

gprof output is rather verbose, so you may want to redirect it to a file or pipe it through a pager. It comes in two parts. The first part is the "flat profile," which gives a one-line entry for each function, listing the percentage of time spent in that function, the time (in seconds) used to execute that function, the number of calls to the function, and other information. For example:

Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
 time   seconds   seconds    calls  ms/call  ms/call  name
 45.11     27.49    27.49       41   670.51   903.13  GetComponent
 16.25     37.40     9.91                             mcount
 10.72     43.93     6.54  1811863     0.00     0.00  Push
 10.33     50.23     6.30  1811863     0.00     0.00  Pop
  5.87     53.81     3.58       40    89.50   247.06  stackstats
  4.92     56.81     3.00  1811863     0.00     0.00  TrimNeighbors

If any of the fields are blank in the output, gprof was unable to determine any further information about that function. This is usually caused by parts of the code that were not compiled with the -pg option; for example, if you call routines in nonstandard libraries that haven't been compiled with -pg, gprof won't be able to gather much information about those routines. In the previous output, the function mcount probably hasn't been compiled with profiling enabled.

As we can see, 45.11% of the total running time was spent in the function GetComponent, which amounts to 27.49 seconds. But is this because GetComponent is horribly inefficient,[*] or because GetComponent itself called many other slow functions? The functions Push and Pop were called many times during execution: could they be the culprits?

[*] Always a possibility where this author's code is concerned!

The second part of the gprof report can help us here. It gives a detailed "call graph" describing which functions called other functions and how many times they were called. For example:

index % time    self  children    called     name
                                                 <spontaneous>
[1]     92.7    0.00   47.30                 start [1]
                0.01   47.29       1/1           main [2]
                0.00    0.00       1/2           on_exit [53]
                0.00    0.00       1/1           exit [172]

The first column of the call graph is the index: a unique number given to every function, allowing you to find other functions in the graph. Here, the first function, start, is called implicitly when the program begins. start required 92.7% of the total running time (47.30 seconds), including its children, but required very little time to run itself. This is because start is the parent of all other functions in the program, including main; it makes sense that start plus its children requires that percentage of time.

The call graph normally displays the children as well as the parents of each function in the graph. Here, we can see that start called the functions main, on_exit, and exit (listed below the line for start). However, there are no parents (normally listed above start); instead, we see the ominous word <spontaneous>. This means that gprof was unable to determine the parent function of start; more than likely because start was not called from within the program itself but kicked off by the operating system.

Skipping down to the entry for GetComponent, the function under suspicion, we see the following:

index % time    self  children    called     name
                0.67    0.23       1/41          GetFirstComponent [12]
               26.82    9.30      40/41          GetNextComponent [5]
[4]     72.6   27.49    9.54      41         GetComponent [4]
                6.54    0.00 1811863/1811863     Push [7]
                3.00    0.00 1811863/1811863     TrimNeighbors [9]
                0.00    0.00       1/1           InitStack [54]

The parent functions of GetComponent were GetFirstComponent and GetNextComponent, and its children were Push, TrimNeighbors, and InitStack. As we can see, GetComponent was called 41 times: once from GetFirstComponent and 40 times from GetNextComponent. The gprof output contains notes that describe the report in more detail.

GetComponent itself requires 27.49 seconds to run; only 9.54 seconds are spent executing the children of GetComponent (including the many calls to Push and TrimNeighbors). So it looks as though GetComponent and possibly its parent GetNextComponent need some tuning; the oft-called Push function is not the sole cause of the problem.

gprof also keeps track of recursive calls and "cycles" of called functions and indicates the amount of time required for each call. Of course, using gprof effectively requires that all code to be profiled is compiled with the -pg option. It also requires a knowledge of the program you're attempting to profile; gprof can tell you only so much about what's going on. It's up to the programmer to optimize inefficient code.

One last note about gprof: running it on a program that calls only a few functions, and runs very quickly, may not give you meaningful results. The units used for timing execution are usually rather coarse, perhaps one one-hundredth of a second, and if many functions in your program run more quickly than that, gprof will be unable to distinguish between their respective running times (rounding them to the nearest hundredth of a second). In order to get good profiling information, you may need to run your program under unusual circumstances: for example, giving it an unusually large data set to churn on, as in the previous example.

If gprof is more than you need, calls is a program that displays a tree of all function calls in your C source code. This can be useful to either generate an index of all called functions or produce a high-level hierarchical report of the structure of a program.

Use of calls is simple: you tell it the names of the source files to map out, and a function-call tree is displayed. For example:

papaya$ calls scan.c
    1   level1 [scan.c]
    2           getid [scan.c]
    3                   getc
    4                   eatwhite [scan.c]
    5                           getc
    6                           ungetc
    7                   strcmp
    8           eatwhite [see line 4]
    9           balance [scan.c]
   10                   eatwhite [see line 4]

By default, calls lists only one instance of each called function at each level of the tree (so that if printf is called five times in a given function, it is listed only once). The -a switch prints all instances. calls has several other options as well; using calls -h gives you a summary.

21.4.3. Using strace

strace is a tool that displays the system calls being executed by a running program. This can be extremely useful for real-time monitoring of a program's activity, although it does take some knowledge of programming at the system-call level. For example, when the library routine printf is used within a program, strace displays information only about the underlying write system call when it is executed. Also, strace can be quite verbose: many system calls are executed within a program that the programmer may not be aware of. However, strace is a good way to quickly determine the cause of a program crash or other strange failure.

Take the "Hello, World!" program given earlier in the chapter. Running strace on the executable hello gives us the following:

papaya$ strace hello
execve("./hello", ["hello"], [/* 49 vars */]) = 0
mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40007000
mprotect(0x40000000, 20881, PROT_READ|PROT_WRITE|PROT_EXEC) = 0
mprotect(0x8048000, 4922, PROT_READ|PROT_WRITE|PROT_EXEC) = 0
stat("/etc/", {st_mode=S_IFREG|0644, st_size=18612,\
 ...}) = 0
open("/etc/", O_RDONLY)      = 3
mmap(0, 18612, PROT_READ, MAP_SHARED, 3, 0) = 0x40008000
close(3)                                = 0
stat("/etc/", 0xbffff52c)  = -1 ENOENT (No such\
 file or directory)
open("/usr/local/KDE/lib/", O_RDONLY) = -1 ENOENT (No\
 such file or directory)
open("/usr/local/qt/lib/", O_RDONLY) = -1 ENOENT (No\
 such file or directory)
open("/lib/", O_RDONLY)        = 3
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3"..., 4096) = 4096
mmap(0, 770048, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x4000d000
mmap(0x4000d000, 538959, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_\
FIXED, 3, 0) = 0x4000d000
mmap(0x40091000, 21564, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_\
FIXED, 3, 0x83000) = 0x40091000
mmap(0x40097000, 204584, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_\
FIXED|MAP_ANONYMOUS, -1, 0) = 0x40097000
close(3)                                = 0
mprotect(0x4000d000, 538959, PROT_READ|PROT_WRITE|PROT_EXEC) = 0
munmap(0x40008000, 18612)               = 0
mprotect(0x8048000, 4922, PROT_READ|PROT_EXEC) = 0
mprotect(0x4000d000, 538959, PROT_READ|PROT_EXEC) = 0
mprotect(0x40000000, 20881, PROT_READ|PROT_EXEC) = 0
personality(PER_LINUX)                  = 0
geteuid()                               = 501
getuid()                                = 501
getgid()                                = 100
getegid()                               = 100
fstat(1, {st_mode=S_IFCHR|0666, st_rdev=makedev(3, 10), ...}) = 0
mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40008000
ioctl(1, TCGETS, {B9600 opost isig icanon echo ....}) = 0
write(1, "Hello World!\n", 13Hello World!
)          = 13
_exit(0)                                = ?

This may be much more than you expected to see from a simple program. Let's walk through it, briefly, to explain what's going on.

The first call, execve, starts the program. All the mmap, mprotect, and munmap calls come from the kernel's memory management and are not really interesting here. In the three consecutive open calls, the loader is looking for the C library and finds it on the third try. The library header is then read and the library mapped into memory. After a few more memory-management operations and the calls to geteuid, getuid, getgid, and getegid, which retrieve the rights of the process, there is a call to ioctl. The ioctl is the result of a tcgetattr library call, which the program uses to retrieve the terminal attributes before attempting to write to the terminal. Finally, the write call prints our friendly message to the terminal and exit ends the program.
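
You can see the library-call-to-system-call mapping for yourself with a tiny program that calls tcgetattr. This is a sketch (the filename term.c is made up for the example), assuming gcc is installed:

    cat > term.c <<'EOF'
    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void) {
        struct termios t;

        /* The C library implements tcgetattr() with an ioctl(fd, TCGETS, ...)
           system call on Linux -- the same call that appears in the strace
           output above. */
        if (tcgetattr(STDOUT_FILENO, &t) == 0)
            printf("stdout is a terminal\n");
        else
            printf("stdout is not a terminal\n");
        return 0;
    }
    EOF
    gcc -o term term.c
    ./term
    # strace ./term 2>&1 | grep TCGETS    # shows the underlying ioctl

Running the last (commented) line under strace shows no tcgetattr anywhere; only the ioctl it is built on appears in the trace.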

strace sends its output to standard error, so you can redirect it to a file separate from the actual output of the program (usually sent to standard output). As you can see, strace tells you not only the names of the system calls, but also their parameters (expressed as well-known constant names, if possible, instead of just numerics) and return values.
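
As a sketch of common invocations (assuming strace is installed; /bin/echo stands in for your own program, and the log filenames are arbitrary):

    # Keep the trace separate from the traced program's own output;
    # -o logfile is equivalent to redirecting standard error:
    strace -o echo.trace /bin/echo "Hello, World!"
    grep '^write' echo.trace          # pick out the write calls afterward

    # Or restrict the trace up front to the calls you care about:
    strace -e trace=write -o writes.trace /bin/echo "Hello, World!"

The -e trace= option is particularly useful on chatty programs, where the calls you care about would otherwise be buried in memory-management noise.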

You may also find the ltrace package useful. It's a library call tracer that tracks all library calls, not just calls to the kernel. Several distributions already include it; users of other distributions can download the latest version of the source from the ltrace home page.

21.4.4. Using Valgrind

Valgrind is a replacement for the various memory-allocation routines, such as malloc, realloc, and free, used by C programs; it also supports C++ programs. It provides smarter memory-allocation procedures and code to detect illegal memory accesses and common faults, such as attempting to free a block of memory more than once. Valgrind displays detailed error messages if your program attempts any kind of hazardous memory access, helping you to catch segmentation faults in your program before they happen. It can also detect memory leaks: for example, places in the code where new memory is malloc'd without being free'd after use.

Valgrind is not just a replacement for malloc and friends. It also inserts code into your program to verify all memory reads and writes. It is very robust and therefore considerably slower than the regular malloc routines. Valgrind is meant to be used during program development and testing; once all potential memory-corrupting bugs have been fixed, you can run your program without it.

For example, take the following program, which allocates some memory and attempts to do various nasty things with it:

#include <stdlib.h>
int main(void) {
  char *thememory, ch;

  thememory = (char *)malloc(10*sizeof(char));

  ch = thememory[1];    /* Attempt to read uninitialized memory */
  thememory[12] = ' ';  /* Attempt to write after the block */
  ch = thememory[-2];   /* Attempt to read before the block */

  return 0;
}

To find these errors, we simply compile the program for debugging and run it by prepending the valgrind command to the command line:

owl$ gcc -g -o nasty nasty.c
owl$ valgrind nasty
==18037== valgrind-20020319, a memory error detector for x86 GNU/Linux.
==18037== Copyright (C) 2000-2002, and GNU GPL'd, by Julian Seward.
==18037== For more details, rerun with: -v
==18037==
==18037== Invalid write of size 1
==18037==    at 0x8048487: main (nasty.c:8)
==18037==    by 0x402D67EE: __libc_start_main (in /lib/
==18037==    by 0x8048381: __libc_start_main@@GLIBC_2.0 (in /home/kalle/tmp/nasty)
==18037==    by <bogus frame pointer> ???
==18037==    Address 0x41B2A030 is 2 bytes after a block of size 10 alloc'd
==18037==    at 0x40065CFB: malloc (vg_clientmalloc.c:618)
==18037==    by 0x8048470: main (nasty.c:5)
==18037==    by 0x402D67EE: __libc_start_main (in /lib/
==18037==    by 0x8048381: __libc_start_main@@GLIBC_2.0 (in /home/kalle/tmp/nasty)
==18037==
==18037== Invalid read of size 1
==18037==    at 0x804848D: main (nasty.c:9)
==18037==    by 0x402D67EE: __libc_start_main (in /lib/
==18037==    by 0x8048381: __libc_start_main@@GLIBC_2.0 (in /home/kalle/tmp/nasty)
==18037==    by <bogus frame pointer> ???
==18037==    Address 0x41B2A022 is 2 bytes before a block of size 10 alloc'd
==18037==    at 0x40065CFB: malloc (vg_clientmalloc.c:618)
==18037==    by 0x8048470: main (nasty.c:5)
==18037==    by 0x402D67EE: __libc_start_main (in /lib/
==18037==    by 0x8048381: __libc_start_main@@GLIBC_2.0 (in /home/kalle/tmp/nasty)
==18037==
==18037== ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 0 from 0)
==18037== malloc/free: in use at exit: 10 bytes in 1 blocks.
==18037== malloc/free: 1 allocs, 0 frees, 10 bytes allocated.
==18037== For a detailed leak analysis, rerun with: --leak-check=yes
==18037== For counts of detected errors, rerun with: -v

The number at the start of each line is the process ID; if your process spawns other processes, even those will run under Valgrind's control.

For each memory violation, Valgrind reports an error and gives us information on what happened. The actual Valgrind error messages include information on where the program is executing as well as where the memory block was allocated. You can coax even more information out of Valgrind if you wish, and, along with a debugger such as gdb, you can pinpoint problems easily.

You may ask why the reading operation in line 7, where an uninitialized piece of memory is read, has not led Valgrind to emit an error message. This is because Valgrind won't complain if you merely copy uninitialized memory around, though it still keeps track of it. As soon as you use the value (e.g., by passing it to an operating system function or by branching on it), you receive the expected error message.
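
A minimal sketch of this behavior (the filename uninit.c is made up; gcc is assumed, and the valgrind invocation is shown as a comment because the program runs, silently buggy, without it):

    cat > uninit.c <<'EOF'
    #include <stdlib.h>
    #include <stdio.h>

    int main(void) {
        char *buf = malloc(10);

        char c = buf[1];   /* copying uninitialized memory: no report yet */
        if (c == 'x')      /* branching on the value: Valgrind reports here */
            printf("found an x\n");

        free(buf);
        return 0;
    }
    EOF
    gcc -g -o uninit uninit.c
    ./uninit     # runs "fine"; the bug is invisible without Valgrind
    # valgrind ./uninit   # reports "Conditional jump or move depends on
    #                     # uninitialised value(s)" at the if statement

Note that the report points at the if statement, not at the assignment where the uninitialized byte was first copied.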

Valgrind also provides a garbage collector and detector you can call from within your program. In brief, the garbage detector informs you of any memory leaks: places where a function malloc'd a block of memory but forgot to free it before returning. The garbage collector routine walks through the heap and cleans up the results of these leaks. Here is an example of the output:

owl$ valgrind --leak-check=yes --show-reachable=yes nasty
==18081== ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 0 from 0)
==18081== malloc/free: in use at exit: 10 bytes in 1 blocks.
==18081== malloc/free: 1 allocs, 0 frees, 10 bytes allocated.
==18081== For counts of detected errors, rerun with: -v
==18081== searching for pointers to 1 not-freed blocks.
==18081== checked 4029376 bytes.
==18081==
==18081== definitely lost: 0 bytes in 0 blocks.
==18081== possibly lost:   0 bytes in 0 blocks.
==18081== still reachable: 10 bytes in 1 blocks.
==18081==
==18081== 10 bytes in 1 blocks are still reachable in loss record 1 of 1
==18081==    at 0x40065CFB: malloc (vg_clientmalloc.c:618)
==18081==    by 0x8048470: main (nasty.c:5)
==18081==    by 0x402D67EE: __libc_start_main (in /lib/
==18081==    by 0x8048381: __libc_start_main@@GLIBC_2.0 (in /home/kalle/tmp/nasty)
==18081==
==18081== LEAK SUMMARY:
==18081==    possibly lost:   0 bytes in 0 blocks.
==18081==    definitely lost: 0 bytes in 0 blocks.
==18081==    still reachable: 10 bytes in 1 blocks.
==18081==

By the way, Valgrind is not just a very useful memory debugger; it also comes with several other so-called skins. One of these is cachegrind, a profiler that, together with its graphical frontend kcachegrind, has become the profiler of choice for many. cachegrind is slowly replacing gprof in many projects.

21.4.5. Interface Building Tools

A number of applications and libraries let you easily generate a user interface for your applications under the X Window System. If you do not want to bother with the complexity of the X programming interface, using one of these simple interface-building tools may be the answer for you. There are also tools for producing a text-based interface for programs that don't require X.

The classic X programming model has attempted to be as general as possible, providing only the bare minimum of interface restrictions and assumptions. This generality allows programmers to build their own interfaces from scratch, as the core X libraries don't make any assumptions about the interface in advance. The X Toolkit Intrinsics (Xt) library provides a rudimentary set of interface widgets (such as simple buttons, scrollbars, and the like), as well as a general interface for writing your own widgets if necessary. Unfortunately, this can require a great deal of work for programmers who would rather use a set of premade interface routines. A number of Xt widget sets and programming libraries are available for Linux, all of which make the user interface easier to program.

Qt, a C++ GUI toolkit written by the Norwegian company Trolltech, is an excellent package for GUI development in C++. It sports an ingenious mechanism for connecting user interaction with program code, a very fast drawing engine, and a comprehensive but easy-to-use API. Qt is considered by many to be the de facto GUI programming standard because it is the foundation of the KDE desktop (see "The K Desktop Environment" in Chapter 3), the most prominent desktop on today's Linux systems.

Qt is a commercial product, but it is also released under the GPL, meaning that you can use it for free if you write software for Unix (and hence Linux) that is licensed under the GPL as well. In addition, (commercial) Windows and Mac OS X versions of Qt are available, which makes it possible to develop for Linux, Windows, and Mac OS X at the same time and create an application for another platform by simply recompiling. Imagine being able to develop on your favorite Linux operating system and still being able to target the larger Windows market! One of the authors, Kalle, uses Qt to write both free software (the KDE just mentioned) and commercial software (often cross-platform products that are developed for Linux, Windows, and Mac OS X). Qt is being very actively developed; for more information, see Programming with Qt (O'Reilly).

Another exciting recent addition to Qt is that it can run on embedded systems, without the need for an X server. And which operating system would it support on embedded systems if not Embedded Linux? Many embedded devices with graphical screens already run Embedded Linux and Qt/Embedded: for example, some cellular telephones, car computers, and medical devices. It won't say "Linux" in large letters on the box, but that's what is inside!

Qt also comes with a GUI builder called Qt Designer that greatly facilitates the creation of GUI applications. It is included in the GPL version of Qt as well, so if you download Qt (or simply install it from your distribution CDs), you have the Designer right away. Finally, Python bindings for Qt let you employ its attractive and flexible interface without having to learn a low-level language.

For those who do not like to program in C++, GTK might be a good choice. GTK was originally written for the image manipulation program GIMP. GTK programs usually offer response times that are just as good as those of Qt programs, but the toolkit is not as complete, and documentation is especially lacking. For C-based projects, though, GTK is a good alternative if you do not need to recompile your code on Windows (although a Windows port has been developed as well). Good C++ bindings have also been created, so more and more GTK developers are choosing to develop their software in C++ instead of in C.

Many programmers are finding that building a user interface, even with a complete set of widgets and routines in C, requires much overhead and can be quite difficult. This is a question of flexibility versus ease of programming: the easier the interface is to build, the less control the programmer has over it. Many programmers are finding that prebuilt widgets are adequate for their needs, so the loss in flexibility is not a problem.

One of the most popular toolkits in the 1980s and 1990s was the commercial Motif library and widget set, available from several vendors for an inexpensive single-user license fee. Many applications are available that utilize Motif. Binaries statically linked with Motif may be distributed freely and used by people who don't own Motif.

Motif seems to be in use mostly for legacy projects; most programmers have moved on to the newer Qt or GTK toolkits. However, Motif is still being actively developed (albeit rather slowly). It also has some problems. First, programming with Motif can be frustrating. It is difficult, error-prone, and cumbersome because the Motif API was not designed according to modern GUI API design principles. Also, Motif programs tend to run very slowly.

One of the problems with interface generation and X programming is that it is difficult to generalize the most widely used elements of a user interface into a simple programming model. For example, many programs use features such as buttons, dialog boxes, pull-down menus, and so forth, but almost every program uses these widgets in a different context. In simplifying the creation of a graphical interface, generators tend to make assumptions about what you'll want. For example, it is simple enough to specify that a button, when pressed, should execute a certain procedure within your program, but what if you want the button to execute some specialized behavior the programming interface does not allow for? For example, what if you want the button to have a different effect when pressed with mouse button 2 instead of mouse button 1? If the interface-building system does not allow for this degree of generality, it is not of much use to programmers who need a powerful, customized interface.

The Tcl/Tk combo, consisting of the scripting language Tcl and the graphical toolkit Tk, has won some popularity, partly because it is so simple to use and provides a good amount of flexibility. Because Tcl and Tk routines can be called from interpreted "scripts" as well as internally from a C program, it is not difficult to tie the interface features provided by this language and toolkit to functionality in the program. Using Tcl and Tk is, on the whole, less demanding than learning to program Xlib and Xt (along with the myriad widget sets) directly. It should be noted, though, that the larger a project gets, the more likely it is that you will want to use a language such as C++ that is better suited for large-scale development. For several reasons, larger projects tend to become very unwieldy with Tcl: the use of an interpreted language slows the execution of the program, the Tcl/Tk design is hard to scale up to large projects, and important reliability features such as compile- and link-time type checking are missing. The scaling problem is improved by the use of namespaces (a way to keep names in different parts of the program from clashing) and an object-oriented extension called [incr Tcl].

Tcl and Tk allow you to generate an X-based interface complete with windows, buttons, menus, scrollbars, and the like, around your existing program. You may access the interface from a Tcl script (as described in "Other Languages," later in this chapter) or from within a C program.

If you require a nice text-based interface for a program, several options are available. The GNU readline library is a set of routines that provide advanced command-line editing, prompting, command history, and other features used by many programs. As an example, both bash and gdb use the readline library to read user input. readline provides the Emacs- and vi-like command-line editing features found in bash and similar programs. (The use of command-line editing within bash is described in "Typing Shortcuts" in Chapter 4.)

Another option is to write a set of Emacs interface routines for your program. An example of this is the gdb Emacs interface, which sets up multiple windows, special key sequences, and so on, within Emacs. The interface is discussed in "Using Emacs with gdb," earlier in this chapter. (No changes were required to gdb code in order to implement this: look at the Emacs library file gdb.el for hints on how this was accomplished.) Emacs allows you to start up a subprogram within a text buffer and provides many routines for parsing and processing text within that buffer. For example, within the Emacs gdb interface, the gdb source listing output is captured by Emacs and turned into a command that displays the current line of code in another window. Routines written in Emacs LISP process the gdb output and take certain actions based on it.

The advantage of using Emacs to interact with text-based programs is that Emacs is a powerful and customizable user interface within itself. The user can easily redefine keys and commands to fit her needs; you don't need to provide these customization features yourself. As long as the text interface of the program is straightforward enough to interact with Emacs, customization is not difficult to accomplish. In addition, many users prefer to do virtually everything within Emacs, from reading electronic mail and news to compiling and debugging programs. Giving your program an Emacs frontend allows it to be used more easily by people with this mind-set. It also allows your program to interact with other programs running under Emacs; for example, you can easily cut and paste between different Emacs text buffers. You can even write entire programs using Emacs LISP, if you wish.

21.4.6. Revision Control Tools: RCS

Revision Control System (RCS) has been ported to Linux. This is a set of programs that allow you to maintain a "library" of files that records a history of revisions, allows source-file locking (in case several people are working on the same project), and automatically keeps track of source-file version numbers. RCS is typically used with program source code files, but is general enough to be applicable to any type of file where multiple revisions must be maintained.

Why bother with revision control? Many large projects require some kind of revision control in order to keep track of the many small, complex changes made to the system. For example, attempting to maintain a program with a thousand source files and a team of several dozen programmers would be nearly impossible without something like RCS. With RCS, you can ensure that only one person may modify a given source file at any one time, and all changes are checked in along with a log message detailing the change.

RCS is based on the concept of an RCS file, a file that acts as a "library" where source files are "checked in" and "checked out." Let's say that you have a source file importrtf.c that you want to maintain with RCS. The RCS filename would be importrtf.c,v by default. The RCS file contains a history of revisions to the file, allowing you to extract any previous checked-in version of the file. Each revision is tagged with a log message that you provide.

When you check in a file with RCS, revisions are added to the RCS file, and the original file is deleted by default. To access the original file, you must check it out from the RCS file. When you're editing a file, you generally don't want someone else to be able to edit it at the same time. Therefore, RCS places a lock on the file when you check it out for editing. Only you, the person who checked out this locked file, can modify it (this is accomplished through file permissions). Once you're done making changes to the source, you check it back in, which allows anyone working on the project to check it back out again for further work. Checking out a file as unlocked does not subject it to these restrictions; generally, files are checked out as locked only when they are to be edited but are checked out as unlocked just for reading (for example, to use the source file in a program build).

RCS automatically keeps track of all previous revisions in the RCS file and assigns incremental version numbers to each new revision that you check in. You can also specify a version number of your own when checking in a file with RCS; this allows you to start a new "revision branch" so that multiple projects can stem from different revisions of the same file. This is a good way to share code between projects but ensure that changes made to one branch won't be reflected in others.

Here's an example. Take the source file importrtf.c, which contains our friendly program:

#include <stdio.h>

int main(void) {
  printf("Hello, world!");
  return 0;
}

The first step is to check it into RCS with the ci command:

papaya$ ci importrtf.c
importrtf.c,v  <--  importrtf.c
enter description, terminated with single '.' or end of file:
NOTE: This is NOT the log message!
>> Hello world source code
>> .
initial revision: 1.1

The RCS file importrtf.c,v is created, and importrtf.c is removed.

To work on the source file again, use the co command to check it out. For example:

papaya$ co -l importrtf.c
importrtf.c,v  -->  importrtf.c
revision 1.1 (locked)

will check out importrtf.c (from importrtf.c,v) and lock it. Locking the file allows you to edit it and to check it back in. If you only need to check the file out in order to read it (for example, to issue a make), you can leave the -l switch off of the co command to check it out unlocked. You can't check in a file unless it is locked first (or if it has never been checked in before, as in the example).

Now you can make some changes to the source and check it back in when done. In many cases, you'll want to keep the file checked out and use ci to merely record your most recent revisions in the RCS file and bump the version number. For this, you can use the -l switch with ci, as so:

papaya$ ci -l importrtf.c
importrtf.c,v  <--  importrtf.c
new revision: 1.2; previous revision: 1.1
enter log message, terminated with single '.' or end of file:
>> Changed printf call
>> .

This automatically checks out the file, locked, after checking it in. This is a useful way to keep track of revisions even if you're the only one working on a project.

If you use RCS often, you may not like all those unsightly importrtf.c,v RCS files cluttering up your directory. If you create the subdirectory RCS within your project directory, ci and co will place the RCS files there, out of the way of the rest of the source.

In addition, RCS keeps track of all previous revisions of your file. For instance, if you make a change to your program that causes it to break in some way and you want to revert to the previous version to "undo" your changes and retrace your steps, you can specify a particular version number to check out with co. For example:

papaya$ co -l1.1 importrtf.c
importrtf.c,v  -->  importrtf.c
revision 1.1 (locked)
writable importrtf.c exists; remove it? [ny](n): y

checks out Version 1.1 of the file importrtf.c. You can use the program rlog to print the revision history of a particular file; this displays your revision log entries (entered with ci) along with other information such as the date, the user who checked in the revision, and so forth.

RCS automatically updates embedded "keyword strings" in your source file at checkout time. For example, if you have the string:

/* $Header$ */

in the source file, co will replace it with an informative line about the revision date, version number, and so forth, as in the following example:

/* $Header: /work/linux/hitch/programming/tools/RCS/rcs.tex 1.2 1994/12/04 15:19:31
 mdw Exp mdw $ */

Other keywords exist as well, such as $Author$ and $Date$.

Many programmers place a static string within each source file to identify the version of the program after it has been compiled. For example, within each source file in your program, you can place the line:

static char rcsid[  ] =
  "@(#)$Header$";

co replaces the keyword $Header$ with a string of the form given here. This static string survives in the executable, and the what command displays these strings in a given binary. For example, after compiling importrtf.c into the executable importrtf, we can use the following command:

papaya$ what importrtf
        $Header: /work/linux/hitch/programming/tools/RCS/rcs.tex
                      1.2 1994/12/04 15:19:31 mdw Exp mdw $

what picks out strings beginning with the characters @(#) in a file and displays them. If you have a program that has been compiled from many source files and libraries, and you don't know how up-to-date each component is, you can use what to display a version string for each source file used to compile the binary.
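Under the hood, what is doing nothing more sophisticated than scanning the file for byte sequences beginning with @(#) and printing what follows. A rough sketch of that scan using only standard tools (the file fake.bin and its contents are invented for the demonstration):

```shell
# Build a fake "binary": an embedded rcsid string amid junk and NUL bytes.
printf 'junk\0@(#)$Header: importrtf.c,v 1.2 $\0more junk' > fake.bin

# what(1), approximately: split the file on NUL bytes, keep the strings
# that begin with "@(#)", and print everything after the marker.
tr '\0' '\n' < fake.bin | grep '^@(#)' | cut -c5-
```

This prints the embedded string, $Header: importrtf.c,v 1.2 $, much as what itself would.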

RCS has several other programs in its suite, including rcs, used for maintaining RCS files. Among other things, rcs can give other users permission to check out sources from an RCS file. See the manual pages for ci(1), co(1), and rcs(1) for more information.

21.4.7. Revision Control Tools: CVS

CVS, the Concurrent Versions System, is more complex than RCS and thus perhaps a little oversized for one-person projects. But whenever more than one or two programmers are working on a project, or the source code is distributed over several directories, CVS is the better choice. CVS uses the RCS file format for saving changes, but employs a management structure of its own.

By default, CVS works with full directory trees. That is, each CVS command you issue affects the current directory and all the subdirectories it contains, including their subdirectories and so on. You can switch off this recursive traversal with a command-line option, or you can specify a single file for the command to operate on.

CVS has formalized the sandbox concept that is used in many software development shops. In this concept, a so-called repository contains the "official" sources that are known to compile and work (at least partly). No developer is ever allowed to directly edit files in this repository. Instead, he checks out a local directory tree, the so-called sandbox. Here, he can edit the sources to his heart's delight, make changes, add or remove files, and do all sorts of things that developers usually do (no, not playing Quake or eating marshmallows). When he has made sure that his changes compile and work, he transmits them to the repository again and thus makes them available for the other developers.

When you as a developer have checked out a local directory tree, all the files are writable. You can make any necessary changes to the files in your personal workspace. When you have finished local testing and feel sure enough of your work to share the changes with the rest of the programming team, you write any changed files back into the central repository by issuing a CVS commit command. CVS then checks whether another developer has checked in changes since you checked out your directory tree. If this is the case, CVS does not let you check in your changes, but asks you first to take the changes of the other developers over to your local tree. During this update operation, CVS uses a sophisticated algorithm to reconcile ("merge") your changes with those of the other developers. In cases in which this is not automatically possible, CVS informs you that there were conflicts and asks you to resolve them. The file in question is marked up with special characters so that you can see where the conflict has occurred and decide which version should be used. Note that CVS makes sure conflicts can occur only in local developers' trees. There is always a consistent version in the repository.

Setting up a CVS repository

If you are working in a larger project, it is likely that someone else has already set up all the necessary machinery to use CVS. But if you are your project's administrator or you just want to tinker around with CVS on your local machine, you will have to set up a repository yourself.

First, set your environment variable CVSROOT to a directory where you want your CVS repository to be. CVS can keep as many projects as you like in a repository and makes sure they do not interfere with each other. Thus, you have to pick a directory only once to store all projects maintained by CVS, and you won't need to change it when you switch projects. Instead of using the variable CVSROOT, you can always use the command-line switch -d with all CVS commands, but since this is cumbersome to type all the time, we will assume that you have set CVSROOT.

Once the directory exists for a repository, you can create the repository with the following command (assuming that CVS is installed on your machine):

tigger$ cvs init

There are several different ways to create a project tree in the CVS repository. If you already have a directory tree but it is not yet managed by RCS, you can simply import it into the repository by calling:

tigger$ cvs import directory manufacturer tag

where directory is the name of the top-level directory of the project, manufacturer is the name of the author of the code (you can use whatever you like here), and tag is a so-called release tag that can be chosen at will. For example:

tigger$ cvs import dataimport acmeinc initial
... lots of output ...

If you want to start a completely new project, you can simply create the directory tree with mkdir calls and then import this empty tree as shown in the previous example.

If you want to import a project that is already managed by RCS, things get a little bit more difficult because you cannot use cvs import. In this case, you have to create the needed directories directly in the repository and then copy all RCS files (all files that end in ,v) into those directories. Do not use RCS subdirectories here!

Every repository contains a file named CVSROOT/modules that lists the names of the projects in the repository. It is a good idea to edit the modules file of the repository to add the new module. You can check out, edit, and check in this file like every other file. Thus, in order to add your module to the list, do the following (we will cover the various commands soon):

tigger$ cvs checkout CVSROOT/modules
tigger$ cd CVSROOT
tigger$ emacs modules
... or any other editor of your choice; see below for what to enter ...
tigger$ cvs commit modules
tigger$ cd ..
tigger$ cvs release -d CVSROOT

If you are not doing anything fancy, the format of the modules file is very easy: each line starts with the name of the module, followed by a space or tab and the path within the repository. If you want to do more with the modules file, check the CVS documentation. There is also a short but very comprehensive book about CVS, the CVS Pocket Reference by Gregor N. Purdy (O'Reilly).

Working with CVS

In this section, we assume that either you or your system administrator has set up a module called dataimport. You can now check out a local tree of this module with the following command:

tigger$ cvs checkout dataimport

If no module is defined for the project you want to work on, you need to know the path within the repository. For example, something like the following could be needed:

tigger$ cvs checkout clients/acmeinc/dataimport

Whichever version of the checkout command you use, CVS will create a directory called dataimport under your current working directory and check out all files and directories from the repository that belong to this module. All files are writable, and you can start editing them right away.

After you have made some changes, you can write the changed files back into the repository with one command:

tigger$ cvs commit

Of course, you can also check in single files:

tigger$ cvs commit importrtf.c

Whatever you do, CVS will ask you, as RCS does, for a comment to include with your changes. But CVS goes a step beyond RCS in convenience. Instead of the rudimentary prompt from RCS, you get a full-screen editor to work in. You can choose this editor by setting the environment variable CVSEDITOR; if this is not set, CVS looks in EDITOR, and if that is not defined either, CVS invokes vi. If you check in a whole project, CVS will use the comment you entered for each directory in which there have been no changes, but will start a new editor for each directory that contains changes so that you can optionally change the comment.
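The lookup order just described is the same fallback chain you would write in the shell. A small sketch (the variable names CVSEDITOR and EDITOR are the real ones; the echo lines are only for illustration):

```shell
# CVS's choice of editor, expressed as a parameter-expansion chain:
unset CVSEDITOR EDITOR
editor=${CVSEDITOR:-${EDITOR:-vi}}
echo "$editor"            # nothing set: falls back to vi

EDITOR=emacs
editor=${CVSEDITOR:-${EDITOR:-vi}}
echo "$editor"            # EDITOR set, CVSEDITOR not: emacs

CVSEDITOR=nano
editor=${CVSEDITOR:-${EDITOR:-vi}}
echo "$editor"            # CVSEDITOR takes precedence: nano
```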

As already mentioned, it is not necessary to set CVSROOT correctly for checking in files, because when checking out the tree, CVS has created a directory CVS in each work directory. This directory contains all the information that CVS needs for its work, including where to find the repository.

While you have been working on your files, a coworker might have checked in some of the files that you are currently working on. In this case, CVS will not let you check in your files but asks you to first update your local tree. Do this with the following command:

tigger$ cvs update
M importrtf.c
A exportrtf.c
? importrtf
U importword.c

(You can specify a single file here as well.) You should carefully examine the output of this command: CVS outputs the names of all the files it handles, each preceded by a single key letter. This letter tells you what has happened during the update operation. The most important letters are shown in Table 21-1.

Table 21-1. Key letters for files under CVS

U or P  The file has been updated. The P is shown if the file has been added to the repository in the meantime or if it has been changed, but you have not made any changes to this file yourself.

M       You have changed this file in the meantime, but nobody else has.

M       You have changed this file in the meantime, and somebody else has checked in a newer version. All the changes have been merged successfully.

C       You have changed this file in the meantime, and somebody else has checked in a newer version. During the merge attempt, conflicts have arisen.

?       CVS has no information about this file; that is, this file is not under CVS's control.

The C is the most important of the letters in Table 21-1. It signifies that CVS was not able to merge all changes and needs your help. Load those files into your editor and look for the string <<<<<<<. After this string, the name of the file is shown again, followed by your version, ending with a line containing =======. Then comes the version of the code from the repository, ending with a line containing >>>>>>>. You now have to find out, probably by communicating with your coworker, which version is better or whether it is possible to merge the two versions by hand. Change the file accordingly and remove the CVS markings <<<<<<<, =======, and >>>>>>>. Save the file and once again commit it.
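Stray markers left behind after a merge will usually break the build, so a quick check before committing pays off. Here is a hypothetical pre-commit check that simply greps for the three marker patterns (the file conflicted.c and its contents are made up for the example):

```shell
# Fabricate a file containing an unresolved CVS conflict.
cat > conflicted.c <<'EOF'
<<<<<<< importrtf.c
  printf("my version");
=======
  printf("their version");
>>>>>>> 1.4
EOF

# Flag any of the three marker lines that remain in the file.
if grep -En '^(<<<<<<<|=======|>>>>>>>)' conflicted.c; then
    echo "conflict markers remain; resolve them before committing"
fi
```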

If you decide that you want to stop working on a project for a time, you should check whether you have really committed all changes. To do this, change to the directory above the root directory of your project and issue the command:

tigger$ cvs release dataimport

CVS then checks whether you have written back all changes into the repository and warns you if necessary. A useful option is -d, which deletes the local tree if all changes have been committed.

CVS over the Internet

CVS is also very useful where distributed development teams are concerned because it provides several possibilities to access a repository on another machine.[*]

[*] The use of CVS has burgeoned along with the number of free software projects developed over the Internet by people on different continents.

Today, both free (like SourceForge) and commercial services are available that run a CVS server for you so that you can start a distributed software development project without having to have a server that is up 24/7.

If you can log into the machine holding the repository with rsh, you can use remote CVS to access the repository. To check out a module, do the following:

cvs -d :ext:user@host:/path/to/repository checkout dataimport

If you cannot or do not want to use rsh for security reasons, you can also use the secure shell ssh. You can tell CVS that you want to use ssh by setting the environment variable CVS_RSH to ssh.

Authentication and access to the repository can also be done via a client/server protocol. Remote access requires a CVS server running on the machine with the repository; see the CVS documentation for how to do this. If the server is set up, you can log in to it with:

cvs -d :pserver:user@host:/path/to/repository login
CVS password:

As shown, the CVS server will ask you for your CVS password, which the administrator of the CVS server has assigned to you. This login procedure is necessary only once for every repository. When you check out a module, you need to specify the machine with the server, your username on that machine, and the remote path to the repository; as with local repositories, this information is saved in your local tree. Since the password is saved with minimal encryption in the file .cvspass in your home directory, there is a potential security risk here. The CVS documentation tells you more about this.

When you use CVS over the Internet and check out or update largish modules, you might also want to use the -z option, which expects an additional integer parameter for the degree of compression, ranging from 1 to 9, and transmits the data in compressed form.

As was mentioned in the introduction to this chapter, another tool, Subversion, is slowly taking over from CVS, even though CVS is still used by the majority of projects, which is why we cover CVS in detail here. But one of the largest open source projects, KDE, has switched to Subversion, and many others are expected to follow. Many commands are very similar: for example, with Subversion, you register a file with svn add instead of cvs add. One major advantage of Subversion over CVS is that it handles commits atomically: either you succeed in committing all files in one go, or you cannot commit any at all (whereas CVS only guarantees that for one directory). Because of that, Subversion does not keep per-file version numbers like CVS, but rather module-global version numbers that make it easier to refer to one set of code. You can find more information about Subversion on the project's website.

21.4.8. Patching Files

Let's say you're trying to maintain a program that is updated periodically, but the program contains many source files, and releasing a complete source distribution with every update is not feasible. The best way to incrementally update source files is with patch, a program by Larry Wall, author of Perl.

patch is a program that makes context-dependent changes in a file in order to update that file from one version to the next. This way, when your program changes, you simply release a patch file against the source, which the user applies with patch to get the newest version. For example, Linus Torvalds usually releases new Linux kernel versions in the form of patch files as well as complete source distributions.

A nice feature of patch is that it applies updates in context; that is, if you have made changes to the source yourself, but still wish to get the changes in the patch file update, patch usually can figure out the right location in your changed file to which to apply the change. This way, your versions of the original source files don't need to correspond exactly to those against which the patch file was made.
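This context matching is easy to see in action: if the target file has drifted a little from the version the patch was made against, patch still finds the right spot and reports the offset. A small experiment (the file names f.old, f.new, and f are invented for the demonstration):

```shell
cd "$(mktemp -d)"                    # work in a throwaway directory
printf 'one\ntwo\nthree\n' > f.old
printf 'one\ntwo\nTHREE\n' > f.new

# diff exits with status 1 when the files differ, so guard the call
# in case the shell is running with -e.
diff -c f.old f.new > f.patch || true

# Recreate the old file, but with an extra line at the top, so the
# patch's line numbers no longer match exactly.
printf 'zero\none\ntwo\nthree\n' > f
patch f < f.patch                    # applies anyway, noting the offset
grep THREE f
```

patch locates the surrounding context lines ("one", "two") one line lower than expected and applies the change there, leaving the added "zero" line untouched.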

To make a patch file, the program diff is used, which produces "context diffs" between two files. For example, take our overused "Hello World" source code, given here:

/* hello.c version 1.0 by Norbert Ebersol */
#include <stdio.h>

int main(  ) {
  printf("Hello, World!");
}

Let's say you were to update this source, as in the following:

/* hello.c version 2.0 */
/* (c)1994 Norbert Ebersol */
#include <stdio.h>

int main(  ) {
  printf("Hello, Mother Earth!\n");
  return 0;
}

If you want to produce a patch file to update the original hello.c to the newest version, use diff with the -c option:

papaya$ diff -c hello.c.old hello.c > hello.patch

This produces the patch file hello.patch that describes how to convert the original hello.c (here, saved in the file hello.c.old) to the new version. You can distribute this patch file to anyone who has the original version of "Hello, World," and they can use patch to update it.
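The whole round trip can be tried out in a scratch directory with nothing but diff and patch. This sketch condenses the example above to a one-line file so the effect is easy to see (the file names match those used in the text):

```shell
cd "$(mktemp -d)"                 # work in a throwaway directory

# The old and new versions of the file.
printf 'Hello, World!\n'        > hello.c.old
printf 'Hello, Mother Earth!\n' > hello.c

# Make the patch. diff exits 1 when the files differ, so guard it
# in case the shell is running with -e.
diff -c hello.c.old hello.c > hello.patch || true

# Pretend we only have the old version, then bring it up to date.
cp hello.c.old hello.c
patch hello.c < hello.patch

cat hello.c                       # now contains the new text
```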

Using patch is quite simple; in most cases, you simply run it with the patch file as input:[*]

[*] The output shown here is from the last version that Larry Wall released, Version 2.1. If you have a newer version of patch, you will need the --verbose flag to get the same output.

papaya$ patch < hello.patch
Hmm...  Looks like a new-style context diff to me...
