4.3 Case Studies

In the following sections, we show the real-world consequences of some of the implementation flaws we've seen over the years. We look at the flaws themselves, analyze their causes, and point out ways in which they might have been avoided.

4.3.1 Case 1: Misuse of White Noise Source

Not all random numbers are created equal. In fact, as we mentioned earlier, choosing the right source for random numbers can be a vital step in implementing secure software. In one such publicized case,[6] the MIT Kerberos 4 authentication protocol was implemented using a bad choice of random numbers. This resulted in an authentication protocol that could be quite easily compromised, simply by predicting subsequent random numbers from the source. In fact, this seemingly simple implementation problem, the result of a developer's poor judgment call, enabled an attacker to completely circumvent the otherwise well-designed cryptographic security of the Kerberos protocol.

[6] Refer to "Misplaced Trust: Kerberos 4 Session Keys," by Bryn Dole, Steve Lodin, and Eugene Spafford. (See Appendix A for details.)

This is a case where the design was sound, but the implementation was not. No doubt the design of the Kerberos session key generator specified the use of a random number in the algorithm that calculated each session key. However, what the design couldn't anticipate was that the team implementing the software used a random number generator that was never intended to be cryptographically sound. Sure enough, it generated statistically random numbers, but unfortunately those numbers were predictable.

Further, by the time this vulnerability was discovered by a team at Purdue University, the Kerberos system had been available in source code for several years. Even though dozens, if not hundreds, of software developers had reviewed the open system's design and source code, no one had noticed this vulnerability, despite the fact that Kerberos was designed to be a secure infrastructure component of MIT's Project Athena network system.

This case study teaches several lessons. The following are especially important:

  • Be very careful when selecting functions that your software depends on, such as random number generators.

  • If you're working with open source software, don't presume that the open source community has exhaustively examined your design and implementation details. The mere fact that no vulnerability has yet been discovered in a program does not make it secure.

  • When implementing software, the programmer must clearly and thoroughly understand the designer's assumptions. (It's also true that the designer must clearly articulate all of his assumptions, especially if a different team of people is going to implement the program.)

4.3.2 Case 2: File Parsing Vulnerability

In another incident we've been privy to, a vulnerability was discovered several years ago in the anonymous FTP implementation of a major Unix vendor. Most Unix vendors follow the common practice of implementing an anonymous FTP sandbox in a chroot environment, effectively compartmentalizing the anonymous FTP process from the rest of the filesystem. Thus, even if an attacker succeeds in compromising the anonymous FTP service, he can only access the files within the chroot "jail" (as it is commonly called). This particular vendor, though, decided to go a different route; rather than using chroot, it implemented a system of screening filenames or pathnames to ensure that the anonymous FTP client could only download files from a set of authorized directories.

Predictably, someone discovered that the filename parser could be tricked into allowing carefully formed requests to pass. It would seem that the implementation of the screening code did not foresee every possibility, such as get /pub/neat-folder/../../../etc/passwd. At this point, the FTP daemon allowed the indirect request to pass, because it was simply prefixed with /pub, without regard for parsing the subsequent ../.. in the request. So, the vendor was forced to enhance its screening code, more than once.

This case study teaches several lessons. The following are especially important:

  • Don't reinvent the wheel. At the time this happened, many Unix vendors had already implemented completely acceptable anonymous FTP environments. Further, any programmer could readily find an ample supply of examples of these other implementations. Look at others' approaches to the same or similar problems before implementing your solution, whenever possible or feasible.

  • Parsing user input is not as trivial as it may seem. In this case, the programmers who implemented the screening software made certain assumptions about user actions that proved to be false. Don't treat something as critical as user input lightly. Users can be a crafty lot; treat their inputs with all the care of a technician handling an explosive device.

4.3.3 Case 3: Privilege Compartmentalization Flaw

Most modern, multiuser operating systems implement some form of user/process privileges. Implementing the subtleties of privilege handling has led to many vulnerabilities in systems. One common means of effectively handling privileged operations is to compartmentalize the use of privileges. Thus, only use administrative privileges in the programs, modules, or processes that absolutely need those privileges, and operate (by default) with the lowest privilege possible. (We discussed this principle in greater detail in Chapter 2.)

Unfortunately, in many cases, privilege compartmentalization is not used adequately. One such problem occurred in Sun's chesstool, a graphic chessboard game that was distributed with early versions of the SunOS operating system. The programmers who implemented chesstool decided to run it with a type of Unix privilege; specifically, it was configured to be setgid bin in its original distributed form. The problems were twofold:

  • chesstool didn't need to run with the group identification ("gid") of bin.

  • chesstool could actually be invoked with a command-line parameter that would allow a user to run an arbitrary program of his choosing. At some point, someone figured out that he could run programs with the gid of bin and exploit that to gain additional privileges on the system.

Although this is a rather egregious example of how not to use system privileges securely, this case study teaches several important lessons:

  • Only use privileges when there is no other way to accomplish what needs to be done.

  • When you must use privileges, keep them compartmentalized to the smallest possible code segment that needs to be privileged.

4.3.4 Case 4: CGI Phonebook Program Flaw

CGI (Common Gateway Interface) programs are used by web servers to provide interactive services, such as the ability to query for particular information. In many cases, the CGI program does not actually service a request directly, but hands off the request to some back-end database and returns any results back to the requestor's browser. As such, the CGI program is the system's front line of defense and must sanity-check all requests for safety.

Consider an example CGI program that provides a phonebook lookup service: the user enters a name, and the CGI program returns the phone number. This program assumes that there is a web page that gives a "name" entry text field, and that the user "posts" this query to the CGI program. Thus the CGI program is expecting a query of the form name=foo to come in on the standard input stream, from the user, via the web server. It then constructs a simple database query (using the Unix grep pattern-matching utility) and returns the result of the query.

The CGI program known as phone demonstrates four major vulnerabilities: a stack buffer overflow, a static buffer overflow, a parsing error, and a C format string vulnerability.[7]

[7] It also invokes a utility program to do pattern-matching instead of using library calls, so it violates our advice against invoking command lines, too.

See if you can find the vulnerabilities in the following code before we describe them:

/* phone - a really bad telephone number lookup CGI program!
           It expects a single "name=foo" value on stdin     */

#include <stdio.h>
#include <stdlib.h>
#include <syslog.h>
#include <unistd.h>

static char cmd[128];
static char format[] = "grep %s phone.list\n";

int main(int argc, char *argv[])
{
      char buf[256];

      gets(buf);                                   /* reads "name=foo" */
      write(1,"Content-Type: text/plain\n\n",27);
      sprintf(cmd, format, buf+5);                 /* skips "name=" */
      syslog(36, cmd);
      system(cmd);
      return 0;
}
Stack overflow

This vulnerability is created by gets(buf); buf has 256 bytes of storage, but a malicious user can simply send more data in the name=foo input. The gets( ) function then writes data past the end of the buffer and overwrites the return address on the stack. If the input is carefully constructed, the attacker can cause the new return address to return control to binary commands he just wrote into the buffer.

Static buffer overflow

This vulnerability is caused by the sprintf( ) function, which is trying to build the database query command in the cmd[] buffer. A long input name will cause sprintf() to write past the end of cmd. The excess information is likely to overwrite the format buffer.

Parsing error

This error occurs in the sprintf( ) function, which builds up the command to be executed by the system( ) function. In the normal case, sprintf() creates a command of the form grep name phone.list, which returns the name and phone number from the phone database. However, suppose that an attacker sends in a query of the form name=.</etc/passwd;. The resulting command will become grep .</etc/passwd; phone.list. This string (which exploits the "redirection" feature of Unix command-line parsing) will cause the system to return the entire contents of the password file to the attacker. Clearly, the CGI program must parse the input data more carefully to prevent such attacks.

C format string

This vulnerability comes from the audit-logging step, syslog(36,cmd). While it is good practice to log requests, this call logs the full command, which contains unchecked user input, as the format string itself. If the attacker embeds a series of "%s" format fields in the data, the syslog function will interpret these as format commands and will try to fetch the corresponding data values from the stack. With enough "%s" fields, syslog() will eventually dereference a null pointer and crash. If the attacker includes "%n" fields, syslog() will write values to memory, which may be exploitable.

This case study teaches several lessons. One that is especially important is that CGI programs, like any other type of software, must expect and be able to handle malicious input data.