Chapter 16. Secure Programming Techniques

The underlying security model of the Unix operating system is brittle. The Unix security model (a privileged kernel, user processes, and a superuser who can perform any system management function) is certainly a workable framework. But it is a framework in which even minor bugs or implementation errors can be subverted by an attacker to provide him with system-wide control.

Most security flaws in Unix arise from bugs and design errors in programs that run as root or with other privileges, from SUID programs or network servers that are incorrectly configured, and from unanticipated interactions among such programs.

It is exceptionally important to use secure programming techniques when writing software that is used in a network server. By definition, servers receive connections and data from unknown and possibly hostile hosts on a network. Attackers are frequently able to use bugs in these programs as a point of entry into otherwise secure systems.

This chapter contains a collection of secure programming techniques that we have developed for use on Unix systems. Much of the emphasis is on writing secure servers using the C programming language. However, most of the concepts apply to other languages as well, including C++ and Java. If you are writing a web-based application, you may wish to review Chapter 16, Securing Web Applications, of our book Web Security, Privacy & Commerce (O'Reilly). That chapter discusses many additional issues that come into play when developing web-based servers and application programs, including issues that arise when using scripting languages. Some other useful references are noted in Appendix C.

The Seven Design Principles of Computer Security

In 1975, Jerome Saltzer and M. D. Schroeder described seven criteria for building secure computing systems.[1] These criteria are still noteworthy today. They are:

Least privilege

Every user and process should have the minimum amount of access rights necessary. Least privilege limits the damage that can be done by malicious attackers and errors alike. Access rights should be explicitly required, rather than given to users by default.

Economy of mechanism

The design of the system should be small and simple so that it can be verified and correctly implemented.

Complete mediation

Every access should be checked for proper authorization.

Open design

Security should not depend upon the ignorance of the attacker. This criterion precludes back doors in the system, which give access to users who know about them.

Separation of privilege

Where possible, access to system resources should depend on more than one condition being satisfied.

Least common mechanism

Users should be isolated from one another by the system. This limits both covert monitoring and cooperative efforts to override system security mechanisms.

Psychological acceptability

The security controls must be easy to use so that they will be used and not bypassed.

Use these principles when you design and implement your own computer software.

[1] Saltzer, J. H. and Schroeder, M. D., "The Protection of Information in Computer Systems," Proceedings of the IEEE, September 1975. As reported in Denning, Dorothy, Cryptography and Data Security (Addison-Wesley).
