In a general sense, an attack on a system is any maliciously intended act against a system or a population of systems. There are two very important concepts in this definition that are worth pointing out. First, we only say that the act is performed with malicious intent, without specifying any goals or objectives. Second, some attacks are directed at a particular system, while others have no particular target or victim. Let's look at these concepts and terms one by one:
 Note also that in this definition we don't limit ourselves to events that take place in an electronic realm. An attack against an application could well involve a physical act, such as carrying a hard drive out of a data center in a briefcase. For the most part, though, we'll concentrate on electronic attacks in this book.
The immediate goal of an attack can vary considerably. Most often, though, the attacker's goal is to damage or otherwise harm the target; that may include stealing money, services, and so on.
Achieving one or more of the goals above may require first reaching a subgoal, such as being granted elevated privileges or authorizations on the system.
The activities that an attacker engages in are the things that he does that could help him achieve one or more of his subgoals. These could include using stolen login credentials (e.g., username and password); masquerading as a different computer, user, or device; flooding a network with malformed packets; and so on.
The activities mentioned above may result in attack events: improper access could be granted, request processing could be suspended, storage space could be exhausted, or a system or program could be halted.
A further concept, often confused with an attack event, is the business consequence. By this term we mean the direct result of the events, such as financial balance sheets being incorrect, or a computer system being unavailable for a business process.
Lastly, the impact of the attack is the business effect. This could include the tarnishing of a company's reputation, lost revenue, and other effects.
The distinction between the attack event and its business consequence is an important one. The business consequence of an attack depends on the business purpose of the affected system, not on the specific attack actions or events. A direct consequence, for example, might be an inability to process a business transaction, resulting in an impact of loss of revenue. An indirect impact might be the tarnishing of the reputation of the business owner, resulting in an erosion of customer confidence. Figure 1-3 illustrates an example attack, showing the goals, subgoals, and activities of the attacker, along with the events, consequences, and impacts from the perspective of the target enterprise.
We've trotted out this terminology because we've found that it's critical to think clearly and precisely about attacks if we are to prevent them. Does it surprise you to hear that the potential business impact of an attack may be relevant to its prevention? It is. Knowing what is at stake is an essential part of making good design decisions about which defense mechanisms you will use.
How do attackers attack systems? Part of the how depends on the why. Some want to probe, but do no harm. Others are out to steal. Some seek to embarrass. A few want only to destroy or win bragging rights with their cronies. While we can't anticipate all possible motivations, we will try to think with you about how someone only moderately clever might approach the problem of compromising the security of your application.
Consider a safecracker. If he is a professional, he probably owns specialized safecracking tools (a stethoscope, we are told, comes in handy). He probably also has a thorough knowledge of each target vault's construction and operation, and access to useful technical documentation. He uses that knowledge to his advantage in manipulating the safe's combination knob, its internal tumblers, and so on, until he manages (or fails) to open the safe.
In an analogous attack on an application system, the miscreant arms himself with knowledge of a system (and tools to automate the application of the knowledge) and attempts to crack the target system.
Ah, but there are so many ways into a safe! A bank robber who doesn't have the finesse of a safecracker can still put on a ski mask and enter the bank during business hours with a gun. If we were planning such an attack, we might masquerade as a pair of armored car security officers and enter the vault with the full (albeit unwitting) assistance of the bank staff. Bribery appeals to us (hypothetically, of course) as well. How about you? Would you blast your way in?
There have been many good studies about the motivations and mind-sets of the kinds of people and personalities who are likely to attack your software. That's a book in itself, and in Appendix A, we point you to a few good ones. In this chapter, we'll simply encourage you to keep in mind the many facets of software and of the minds of your attackers. Once you have begun to ask what can happen, and how (and maybe why), we believe you're on your way to enforcing application security. In the case studies at the end of this chapter, we'll provide examples of constructive worrying we encourage you to do, as well as examples of what could happen if you don't worry enough.
In the following sections, we'll list quite a few types of attacks that your applications may have to withstand. We've divided the discussion into three categories, according to the stage of development to which the vulnerability relates:
While you are thinking about the application
While you are writing the application
After the application is in production
The attacks, of course, will usually (though not always) take place while the program is running. In each case, we'll try to make a clear distinction.
At the end of each description, we'll discuss briefly how an application developer might approach defending against the attack. We'll develop these ideas in greater depth in subsequent chapters, as we make our case that application security is an essential element at every stage of development.
As a general rule, the hardest vulnerabilities to fix are those resulting from architectural or design decisions. You may be surprised at how many of the vulnerabilities you have heard of we ascribe to errors at "pure think" time.
At the end of this section we list two types of attacks, session hijacking and session killing, that are unusually difficult to defend against from within an application. What's the point of mentioning them? As we argue in Chapter 3, the fact that you as a developer may not be able to institute adequate defenses against an attack does not relieve you of responsibility for thinking about, planning for, and considering how to minimize the impact of such an occurrence.
It's worth mentioning that architectural and design-level attacks are not always based on a mistake per se. For example, you may have used the telnet program to move from one computer to another. It does well what it was designed to do. The fact that its design causes your username and password to be sent along the cables between computers, unencrypted, is not a "flaw" in and of itself. Its design does make it an unattractive choice for use in a hostile environment, however. (We use ssh instead.)
The following are the main attacks we've observed at the architecture/design level:
A man-in-the-middle (or eavesdropping) attack occurs when an attacker intercepts a network transmission between two hosts, then masquerades as one of the parties involved in the transaction, perhaps inserting additional directives into the dialogue.
Defense: Make extensive use of encryption, in particular, strong cryptographic authentication. In addition, use session checksums and shared secrets such as cookies. (You might, for example, use ssh instead of telnet, and encrypt your files using utilities such as PGP or Entrust.)
Certain operations common to application software are, from the computer's point of view, composed of discrete steps (though we may think of them as unitary). One example is checking whether a file contains safe shell (or "batch") commands and then (if it does) telling the host system to execute it. Sometimes, the time required to complete these separate steps opens a window during which an attacker can compromise security. In this example, there may be a very brief window of opportunity for an attacker to substitute a new file (containing attack code) for the previously checked file. Such a substitution can trick an application into running software of the attacker's choosing. Even if the resulting window of opportunity is too short for a human to exploit reliably, a program might well be able to test repeatedly and then execute the attacker's program at just the right moment. Because the result often depends on the order in which two or more parallel processes complete, this problem is known as a "race" condition.
 The example would also be described in the technical literature as a "late binding" problem.
Defense: Understand the difference between atomic (indivisible) and non-atomic operations, and avoid the latter unless you are sure there are no security implications. (Sample actions that do have security implications include opening a file, invoking a subprogram, checking a password, and verifying a username.) If you are not sure whether an operation is atomic, assume that it is not; that is, assume that the operating system may execute it in two or more interruptible steps.
If an attacker can capture or obtain a record of an entire transaction between, say, a client program and a server, it might be possible to "replay" part of the conversation for subversive purposes. Impersonating either the client or the server could have serious security implications.
Defense: Same as for the man-in-the-middle attack; in addition, consider introducing into any dialog between software elements some element (e.g., a sequence identifier) that must differ from session to session, so that a byte-for-byte replay will fail.
You might consider, for example, using the (trustworthy) current time as a sequence identifier; Kerberos does something similar, rejecting authenticators whose timestamps differ from the server's clock by more than five minutes.
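As a minimal illustration of the sequence-identifier idea (the names and framing here are hypothetical, not taken from any particular protocol), a receiver can track the highest sequence number it has accepted in a session and refuse any message that does not advance it:

```c
#include <stdint.h>

/* Hypothetical per-session state: the last sequence number accepted. */
struct session {
    uint64_t last_seq;
};

/* Accept a message only if its sequence number is strictly greater
 * than any previously seen.  A byte-for-byte replay carries an old
 * number and is rejected.  (In a real protocol the sequence field
 * must also be covered by the message's cryptographic authentication,
 * or the attacker simply rewrites it.) */
int accept_message(struct session *s, uint64_t seq) {
    if (seq <= s->last_seq)
        return 0;                  /* replayed or stale: reject */
    s->last_seq = seq;
    return 1;
}
```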
A program that silently records all traffic sent over a local area network is called a sniffer. Sniffers are sometimes legitimate diagnostic tools, but they are also useful to attackers who wish to record usernames and passwords transmitted in the clear along a subnet.
Defense: This attack is best addressed at the network level, where its impact can be diminished (but not removed) by careful configuration and the use of "switched" network routers. Still, as an application writer, you can render sniffers fairly harmless by making maximal effective use of encryption.
By exploiting weaknesses in the TCP/IP protocol suite, an attacker inside the network might be able to hijack or take over an already established connection. Many tools have been written and distributed on the Internet to implement this rather sophisticated technical attack.
Defense: This network-level attack is quite difficult to defend against from within application software. Encryption, of course, is a help (although a discussion of its limitations is beyond our scope here). And some operational procedures can help detect a hijacking after the fact, if careful logging provides enough information about the session.
Legitimate TCP/IP sessions can be terminated when either of the communicating parties sends along a TCP reset packet. Unfortunately, an attacker inside the network might be able to forge the originating address on such a packet, prematurely resetting the connection. Such an attack could be used either to disrupt communications or, potentially, to assist in an attempt to take over part of a transaction (see the description of session hijacking above).
Note, too, that such attacks can be successful even if the attacker is not on the local segment, if the network is not doing any ingress filtering; that is, if there's no check to see whether a data packet really comes from the source address listed inside the packet.
Defense: Like session hijacking, session killing is difficult to defend against from within an application; indeed, we believe that preventing it there is not possible. However, your application may be able to compensate after the fact by either reasserting the connection or reinitializing the interrupted transaction.
We suspect that the kinds of errors we list in this section are the ones most folks have in mind when we talk about security vulnerabilities. In general, they're easier to understand and fix than design errors. There are many varieties of implementation-level attacks. Here are three common examples:
Many programming languages (C, for example) allow or encourage programmers to allocate a buffer of fixed length for a character string received from the user, such as a command-line argument. A buffer overflow condition occurs when the application does not perform adequate bounds checking on such a string and accepts more characters than there is room to store in the buffer. In many cases, a sufficiently clever attacker can cause a buffer to be overflowed in such a way that the program will actually execute unauthorized commands or actions.
Defense: Code in a language (such as Java) that rules out buffer overflows by design. Alternatively, avoid reading text strings of indeterminate length into fixed-length buffers unless you can safely read a substring of a specific length that will fit into the buffer.
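To make the bounds-checking point concrete, here is a small C sketch (the function names are our own) contrasting an unchecked copy with a bounded one:

```c
#include <stdio.h>
#include <string.h>

/* Unsafe pattern: strcpy copies until the terminating NUL, however
 * long the source is, so attacker-supplied input longer than 15 bytes
 * overruns buf and tramples adjacent memory. */
void read_name_unsafe(char buf[16], const char *input) {
    strcpy(buf, input);            /* no bounds check! */
}

/* Safer pattern: copy at most 15 bytes and always NUL-terminate,
 * truncating over-long input instead of overflowing. */
void read_name_safe(char buf[16], const char *input) {
    snprintf(buf, 16, "%s", input);
}
```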
Many application systems have been compromised as the result of a kind of attack that might be said to take place while the software is being written! You may have read about cases in which a rogue programmer writes special code directly into an application that allows access control to be bypassed later on?for example, by using a "magic" user name. Such special access points are called back doors.
Defense: Adopt quality assurance procedures that check all code for back doors.
Applications often accept input from remote users without properly checking the input data for malicious content. The parsing, or checking, of the input data for safety is important to block attacks. (Further, industrial-strength parsing of program input with robust error checking can greatly reduce all kinds of security flaws, as well as operational software flaws.)
One famous example of parsing errors involved web servers that did not check for requests with embedded "../" sequences, which could enable the attacker to traverse up out of the allowed directory tree into a part of the filesystem that should have been prohibited.
While parsing input URLs for "../" sequences may sound simple, developers failed repeatedly to catch all possible variants, such as hexadecimal- or Unicode-encoded strings.
Defense: We recommend arranging to employ existing code, written by a specialist, that has been carefully researched, tested, and maintained. If you must write this code yourself, take our advice: it is much safer to check to see if (among other things) every input character appears on a list of "safe" characters than to compare each to a list of "dangerous" characters. (See Chapter 4 for a fuller discussion.)
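Here is a minimal C sketch of the "safe list" approach for a hypothetical filename check (the character set shown is illustrative only; choose one appropriate to your application):

```c
#include <string.h>

/* Allowlist check: accept a string only if every character appears on
 * a list of known-safe characters.  Note that '/' is deliberately
 * absent, so no "../" path sequence can ever be formed, and leading
 * dots are rejected, which also blocks "." and "..".  Checking against
 * a "dangerous" list instead tends to miss encodings the author
 * didn't anticipate. */
int is_safe_filename(const char *s) {
    static const char safe[] =
        "abcdefghijklmnopqrstuvwxyz"
        "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
        "0123456789._-";
    if (*s == '\0' || *s == '.')
        return 0;                  /* empty or dot-leading: reject */
    for (; *s; s++)
        if (strchr(safe, *s) == NULL)
            return 0;              /* anything off the list: reject */
    return 1;
}
```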
Most attacks, as we have said, take place after an application has been released. But there is a class of special problems that can arise as a result of decisions made after development, while the software is in production. We will have much more to say about this subject in later chapters. Here is a preview.
An application system, a host, or even a network can often be rendered unusable to legitimate users via a cascade of service requests, or perhaps a high-frequency onslaught of input. When this happens, the attacker is said to have "denied service" to those legitimate users. In a large-scale denial-of-service attack, the malefactors may make use of previously compromised hosts on the Internet as relay platforms for the assault.
Defense: Plan and allocate resources, and design your software so that your application makes moderate demands on system resources such as disk space or the number of open files. When designing large systems, include a way to monitor resource utilization, and consider giving the system a way to shed excess load. Your software should not just complain and die in the event that resources are exhausted.
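On POSIX systems, one way to hold a process to "moderate demands" by construction is to ask the operating system to enforce caps on it. A sketch, with purely illustrative numbers (the function name is our own):

```c
#include <sys/resource.h>

/* Ask the kernel to cap what this process may consume.  Exhausting a
 * capped resource then produces an error the program can handle,
 * rather than starving the rest of the host. */
int cap_resources(void) {
    struct rlimit lim;

    lim.rlim_cur = lim.rlim_max = 64;       /* at most 64 open files */
    if (setrlimit(RLIMIT_NOFILE, &lim) != 0)
        return -1;

    lim.rlim_cur = lim.rlim_max = 1 << 20;  /* files at most 1 MB */
    if (setrlimit(RLIMIT_FSIZE, &lim) != 0)
        return -1;

    return 0;
}
```

After such a call, a runaway request loop hits EMFILE or SIGXFSZ instead of exhausting the machine; your code should check for those conditions and shed load gracefully rather than, as we said, complain and die.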
Many operating systems and application programs are configured, by default, with "standard" usernames and passwords. These default usernames and passwords, such as "guest/guest" or "field/service", offer easy entry to potential attackers who know or can guess the values.
Defense: Remove all such default accounts (or make sure that system and database administrators remove them). Check again after installing new software or new versions of existing software. Installation scripts sometimes reinstall the default accounts!
Attackers routinely guess poorly chosen passwords by using special cracking programs. The programs use special algorithms and dictionaries of common words and phrases to attempt hundreds or thousands of password guesses. Weak passwords, such as common names, birthdays, or words like "secret" or "system", can be guessed programmatically in a fraction of a second.
Defense: As a user, choose clever passwords. As a programmer, make use of any tools available to require robust passwords. Better yet, try to avoid the whole problem of reusable passwords at design time (if feasible). There are many alternative methods of authentication, including biometric devices and smart cards.
 The best passwords are easy to remember and hard to guess. Good password choices might be, for example, obscure words in uncommon languages you know, or letter combinations comprised of the initial (or final) letters of each word in a phrase you can remember. Consider also including punctuation marks and numbers in your passwords.
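As a sketch of what "require robust passwords" might mean in code (the thresholds and function name here are illustrative, and a real check should also consult a dictionary of common words, which we omit):

```c
#include <ctype.h>
#include <string.h>

/* A minimal robustness check of the kind a registration routine might
 * apply: demand a minimum length plus a mix of character classes. */
int password_is_robust(const char *pw) {
    int lower = 0, upper = 0, digit = 0, other = 0;
    if (strlen(pw) < 10)
        return 0;                  /* too short to resist guessing */
    for (; *pw; pw++) {
        unsigned char c = (unsigned char)*pw;
        if (islower(c))      lower = 1;
        else if (isupper(c)) upper = 1;
        else if (isdigit(c)) digit = 1;
        else                 other = 1;   /* punctuation, etc. */
    }
    /* require at least three of the four character classes */
    return lower + upper + digit + other >= 3;
}
```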