Rather than dive straight into the methods for implementing network security, let's take a high-level look at six principles of security thinking. You won't find these principles in a book such as How to Win Friends and Influence People; they are inevitably based on a philosophy of mistrust.
1. Don't talk to anyone you don't know.
2. Accept nothing without a guarantee.
3. Treat everyone as an enemy until proved otherwise.
4. Don't trust your friends for long.
5. Use well-tried solutions.
6. Watch the ground you are standing on for cracks.
The sixth principle is a bit cryptic. The "ground" in this context refers to the pile of assumptions we all stand on. As you will see shortly, this sixth principle marks the real danger zone in security and one of the most fruitful areas of attack for the enemy.
The first principle, "Don't talk to anyone you don't know," means that in the context of security you must be 100% certain about the identity of a device or person before you communicate. Security gurus point out that it is impossible to be 100% certain of anything, but it is the job of security designers to bring you as close to 100% as you need.
To understand this principle even better, consider this analogy. Imagine you are at a wild party. You are strapped to a chair in the middle of the room and blindfolded. Nobody touches you, and your nose is covered so you can't smell people's perfume. Well, most of us never get invited to these sorts of parties anyway; but if you were, you would know what it feels like to be a Wi-Fi LAN.
In this scenario, you can listen and you can speak, but you have no other means to identify the people in the room. A simpler (albeit more boring) analogy is a telephone conference call. In ordinary phone conversations, during which we can hear but not see the other person on the phone, we constantly prove to ourselves that the other person is who we think he is. In most cases, we do this subconsciously. Initially we assume the caller is who he says he is; we accept his identity as stated. However, before we open our communications channels, we test that identity. If we know the caller, we recognize the voice and we go straight to open mode. If we don't, we cautiously open up as we hear information that is consistent with the person's stated identity. Throughout the call, we continue to monitor and are alert to comments that sound strange or out of context.
Conference calls are difficult because more people are involved and you need to constantly identify who is talking. Imagine that somebody makes a comment that you don't quite hear and you say, "Could you repeat that?" The comment is repeated, but can you be sure the same person repeated it, or that what was repeated is the same as the original comment?
The only reliable solution to this quandary is to require that the identities of all the call participants be proven without a doubt for every sentence they speak.
For a Wi-Fi LAN, it is not enough to verify the identity of the other party. A Wi-Fi LAN must also verify that every message really came from that party. A simple method to authenticate someone is to require that they know a secret password or key. This can be used at the start of communication to establish identity, and then the same secret key can be incorporated into each message to ensure the message's authenticity. The idea is that, even if enemies are impersonating valid network addresses and other information, they cannot substitute rogue messages for authentic ones because they don't know the secret key, which must be incorporated into every message. This approach was the basis of the original IEEE 802.11/Wi-Fi security protocol, WEP (Wired Equivalent Privacy); but, as we will see later, it was too simple to be secure in the long run.
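The per-message use of a shared secret can be sketched with a keyed hash. To be clear, this is not WEP's actual mechanism; it is a minimal Python illustration, using HMAC, of how knowledge of a secret key can vouch for every message. The key and message values here are invented:

```python
import hashlib
import hmac

SECRET_KEY = b"shared-secret"  # hypothetical pre-shared key known only to friends

def tag_message(message: bytes) -> bytes:
    """Append an authentication tag computed over the message with the secret key."""
    tag = hmac.new(SECRET_KEY, message, hashlib.sha256).digest()
    return message + tag

def verify_message(packet: bytes) -> bool:
    """Recompute the tag; a forger without the key cannot produce a valid one."""
    message, tag = packet[:-32], packet[-32:]
    expected = hmac.new(SECRET_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

packet = tag_message(b"open the door")
assert verify_message(packet)                                # authentic: accepted
assert not verify_message(b"open the door" + bytes(32))      # forged tag: rejected
```

Because the tag depends on both the key and the message contents, an enemy who captures valid packets still cannot fabricate new ones, which addresses the conference-call problem of proving who spoke every sentence.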
A variation on this theme arises when you want to be sure that a group of messages all came from the same sender, even though you don't know the identity of that sender.
Like "security," the word "guarantee" means different things to different people (for instance, try taking your used car back to the dealer when things go wrong). In the context of network security, "guarantee" means a guarantee of authenticity. In other words, it is proof that the message has not been changed.
You know the sender must prove his identity before you accept his message, but you also need to be sure that what you receive is the message the sender intended to send and that the message has not been modified, delayed, or even replaced with a new message.
At first this seems like a small point and one that is essentially the same as proving the identity of the sender. After all, if the message has been altered, then surely the enemy must have intercepted and resent it.
Consider the following sequence of events:

1. A friend sends a valid message to you.
2. An enemy intercepts the message before you receive it, modifies some bits, and then sends it on to you.
3. You receive the message and check the sender's identity; but because the enemy sent it last, you can detect the interception, right?
Well, no…there are two flaws in that conclusion, as shown in Figure 2.1. The first is that it assumes it is possible to know who sent you the message. Remember, the onus is on the sender to provide proof for the receiver to check. In a wireless environment, we cannot expect the receiver to have a magic method of knowing who sent the message other than by reading its contents. So if an enemy forwards an identical copy of a message sent by a friend, how can the receiver possibly know that it was handled in transit? You cannot detect that a message has been intercepted and resent simply by looking at it.
The second flaw is one of those hidden assumptions. We have assumed it is necessary for the enemy to receive and then resend the message. However, in a wireless environment, the enemy might discover a way to modify the message while the friend is transmitting it. Today, we don't know any way to do that. But you could imagine that a carefully timed burst of radio transmission from the enemy, colliding with the friendly transmission, might cause the receiver to interpret a bit to have a different value, even though the rest of the transmission came from the friend. In this case the enemy has tampered with a message without retransmitting it at all.
In practice many security protocols use a method that provides both identity proof and tamper-resistant packaging in the same algorithm. However, the rule still applies: Accept nothing without a guarantee.
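To see why an unkeyed "integrity" value is no guarantee, here is a sketch of the weakness class that undermined WEP's CRC-based check. CRC32 is affine over XOR, so for equal-length inputs crc(m ^ d) = crc(m) ^ crc(d) ^ crc(zeros); an attacker can therefore flip chosen message bits and patch the CRC to match, knowing nothing secret. This is a simplified, unencrypted illustration with invented data, not the full WEP attack:

```python
import zlib

def make_frame(msg: bytes) -> bytes:
    """Frame = message plus an unkeyed CRC32 'integrity' value."""
    return msg + zlib.crc32(msg).to_bytes(4, "big")

def check_frame(frame: bytes) -> bool:
    msg, crc = frame[:-4], frame[-4:]
    return zlib.crc32(msg).to_bytes(4, "big") == crc

frame = make_frame(b"PAY ALICE TEN DOLLARS")
assert check_frame(frame)

# The attacker flips chosen bits (here, one bit of the first byte) and
# patches the CRC using its linearity over XOR.
n = len(frame) - 4
delta = bytes([0x01]) + bytes(n - 1)               # the bits to flip
crc_fix = zlib.crc32(delta) ^ zlib.crc32(bytes(n))  # crc(d) ^ crc(zeros)

tampered_msg = bytes(a ^ b for a, b in zip(frame[:n], delta))
old_crc = int.from_bytes(frame[-4:], "big")
tampered = tampered_msg + (old_crc ^ crc_fix).to_bytes(4, "big")

assert tampered_msg != frame[:n]   # the message was changed in transit...
assert check_frame(tampered)       # ...yet the integrity check still passes
```

A keyed tag, by contrast, cannot be patched this way, which is why "tamper-resistant packaging" has to involve the secret key rather than a public checksum.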
A few years ago a story circulated about a scam involving automatic teller machines (ATMs) (Neumann, 2001). We have since heard several versions of the story, so it might be urban myth, but it's interesting nonetheless. Someone obtained an old ATM that had been taken out of service. The ATM was complete and still had its bank logo attached. This person installed the ATM in a small trailer, ran it off a generator, and parked it in a busy downtown area. Shoppers assumed the bank was being proactive by introducing mobile ATMs and went to withdraw cash. The machine displayed an error message saying it was empty of cash, but it recorded the customers' ATM card information and personal identification numbers (PINs). Each day, the criminal made copies of all the ATM cards used and withdrew the maximum allowed amount from the real bank on every card, until the scam was discovered and the cards were disabled. This scam succeeded because the customers assumed only the real bank would set up an ATM; the ATM cards had no capability to check the machine's authenticity either.
Modern smart card devices can check that they are inserted into a valid machine.
This example illustrates the importance of not giving information to anyone until that person has proved identity. Arguably the customers in this example followed this rule, but their standard of proof was too low: they trusted the bank sign on the ATM!
This rule is important in Wi-Fi wireless LAN applications. In a wired LAN, for example, you have a pretty good idea where you are connected because you plug the cable into a hole in the wall that either you or an IT department maintains. Assuming you keep your wiring closet locked, you should be safe. However, by design, Wi-Fi LANs can search the airwaves looking for networks to join. Access points advertise their availability by transmitting beacon frames with their identity. It is, of course, trivial for an enemy to start up an access point from a van and falsely advertise that he is part of your network in the hope of fooling a few WLAN cards into connecting. Later we will see how the new Wi-Fi security protocols work to ensure that you are not caught in this trap.
"Make new friends but keep the old…." What does it mean to "keep" a friend? The word "keep" implies an active process, a process of affirmation. Suppose one day you are walking down the street and you meet up with your best friend from high school. This is a nice surprise because you had lost contact and you hadn't seen this person for 10 years. You grew up with this friend and shared all your secrets. After reminiscing for a while, you learn things are not going well and you hear the dreaded words, "Can you lend me some money? I absolutely promise I'll pay you back." Why do you feel uncomfortable? Ten years ago you might have forked over the money in complete confidence. Why not now? You have not reaffirmed the friendship; you don't really know who this person is anymore. You would have to take time to reestablish trust before you were comfortable again.
Applying this analogy to Wi-Fi security, friends are those devices you can communicate with and enemies are everyone else. "Friends" in a Wi-Fi LAN can be identified because they possess tokens such as a secret key that can be verified. Such tokens, whether they are keys, certificates, or passwords, need to have a limited life. You should keep reaffirming the relationship by renewing the tokens. Failure to take this step can result in unpleasant surprises.
There is a difference between policy and protocol. In simple terms, the security protocol is designed to implement the security policy. You are going to decide for your organization which people are "friends." You are also going to decide when those friends can access the network and, for multisite corporations, where they are allowed access. All these issues are part of security policy. It is then the job of the security protocol, in conjunction with hardware and software, to ensure that no one can breach the policy. For example, enemies should never get access.
In the Wi-Fi LAN context, a friend is usually a person or a computer. If you are talking to some dedicated equipment, such as a server or a network gateway, you need to establish that the equipment is considered a friend in your security policy. However, in the case of laptop or desktop computers, it might not be enough to identify the equipment. The laptop might have been stolen or left unattended. In these cases, you need to be sure the person using the computer is also legitimate. Memorizing a password is the most common way to do this.
Normally, or at least in theory, people who work for your company are friends and it is acceptable to communicate with them. In larger companies the notion of "friend" can be subdivided by department or project. Even when you are certain of the other party's identity, you might have to check whether she has left the company or moved off the project.
Corporations have security databases that are constantly updated with the access rights or credentials of all prospective friends. Later we will look at how Wi-Fi LAN security can be linked to those databases. However, accessing such a database often requires a significant investment in time and resources, and in some cases, the database might be temporarily inaccessible.
To reduce overhead, it is common to verify another person's credentials and then assume these credentials are OK for a limited period of time before checking again. The actual amount of time can be set by the security administrator and might vary from a few minutes to a few days.
A security guru will never say that something is "totally secure." So what's the best you can do? How can you ever develop trust in a security protocol?
Part of security psychology involves developing a high level of mistrust for anything new. To see how this affects people's attitudes, let's take encryption as an example. The object of encryption is to make the encrypted data look like perfectly random noise. Suppose you take an arbitrary message, pass it through the encryption algorithm, and send it over a communications link. Then repeat the process millions of times, sending the same message over and over but encrypting it each time before sending. If the encryption algorithm is good, every transmission will be different and look totally random. If you could do this with no gaps in the transmission, no amount of analysis on the output stream would reveal any pattern: just white noise.
Now comes the hard part. If you really did convert the message to random white noise, it would not be very useful because neither the friend nor the enemy would be able to decode it. The trick is to make it look like noise to the enemy while enabling the friend to extract the original data. Many algorithms are available for achieving this goal, but how can you tell which ones really work? If the message is to be decoded by the friend, it cannot be true noise; somewhere there must be some information that allows the data to be extracted. So how can you be sure an enemy cannot eventually figure out that information and decode the message?
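The trick of looking like noise to the enemy while remaining decodable by the friend can be sketched with a toy stream cipher: a keystream derived from a shared key and a fresh random nonce is XORed with the message, so the same plaintext looks different every time it is sent. This hash-counter construction and its parameters are invented for illustration and must never be used for real traffic; real systems use vetted ciphers such as AES:

```python
import hashlib
import secrets

KEY = b"sixteen byte key"  # hypothetical shared secret

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream: hash the key, nonce, and a counter (illustration only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(12)  # fresh randomness for every message
    ks = keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    nonce, body = ciphertext[:12], ciphertext[12:]
    ks = keystream(key, nonce, len(body))
    return bytes(c ^ k for c, k in zip(body, ks))

msg = b"attack at dawn"
c1, c2 = encrypt(KEY, msg), encrypt(KEY, msg)
assert c1 != c2                                  # same message, different "noise"
assert decrypt(KEY, c1) == decrypt(KEY, c2) == msg
```

The information that lets the friend extract the data is the key plus the nonce carried with each message; the open question the text raises is whether an enemy can ever recover that information from the traffic alone.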
The answer to this question has two parts. The first involves mathematical analysis called cryptanalysis. Cryptanalysis lets you determine how hard it is to break the encryption code by conventional or well-known methods. However, weaknesses can also come from unconventional methods, such as unexpected relationships between computations in the algorithm or implicit hidden assumptions. Therefore, the second part of developing confidence in a new algorithm is the good old "test of time."
There is no shortage of encryption algorithms. Occasionally, very occasionally, an algorithm will be broken; that is, someone figures out how to decode a message without using the computing power of all the computers in the universe. However, this is not the primary motivation for research into new methods. It takes a certain amount of computing power, energy, and memory to perform encryption and decryption. Different types of devices have different capabilities. For example, the computing resources of a modern desktop computer are different from those of a mobile phone. Therefore, much of the research into new methods is directed at tailoring methods to the resources of real devices. There is no problem deploying an unbreakable encryption code if you have limitless computing power and energy, but creating a method that can be run on a battery-powered PDA is a challenge.
We use "unbreakable" here in the real-world sense. Theoretically, all encryption algorithms are breakable with enough time and computing power except the Vernam cipher (the one-time pad), which uses truly random key data, different for every message.
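The Vernam cipher itself is simple to sketch: XOR the message with a truly random pad of the same length, and never reuse the pad. Its unbreakability comes from the fact that, without the pad, every plaintext of the same length is equally likely, so the ciphertext carries no usable information:

```python
import secrets

def vernam_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """One-time pad: a truly random key as long as the message, used exactly once."""
    pad = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))
    return ciphertext, pad

def vernam_decrypt(ciphertext: bytes, pad: bytes) -> bytes:
    """XOR with the same pad recovers the message (x ^ k ^ k == x)."""
    return bytes(c ^ k for c, k in zip(ciphertext, pad))

msg = b"meet at midnight"
ct, pad = vernam_encrypt(msg)
assert vernam_decrypt(ct, pad) == msg
# For any candidate plaintext m of the same length, some pad maps ct to m,
# which is why no amount of computing power helps an enemy who lacks the pad.
```

The impracticality is equally visible in the code: the pad must be as long as all the traffic you will ever send and must be delivered to the friend in advance by some other secure channel.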
The point here is that new methods are still invented from time to time, and the question then arises whether a new method is really secure. Initially, security gurus are likely to be skeptical about the claims of any new algorithm. That is not to say that they lack interest or enthusiasm; it just means they won't give it a seal of approval until the method has a few miles on the odometer.
If you are introducing a new method, you depend heavily on the interest of the world's security experts if you want to get the method accepted widely. First of all, the method has to be publicly available and sufficiently interesting to attract experts' attention. If it is not novel, or if it includes mistakes, your method will get nothing more than a sniff. If you are a credible guru and your method has some good new tricks, the others might walk around and kick the tires. If you are really doing well, several of them will go for a test drive. But before your method can become truly accepted, it needs to be deployed in the real world for several years, hopefully in an application that attracts attacks. When a method is deployed in the public eye, both hackers and legitimate security researchers will receive kudos if they can break the system. For example, when IEEE 802.11 WEP was broken, the story reached national newspapers, and the researchers who discovered the cracks attracted much attention. But, if you survive a few years and no one has broken your method, it can achieve the status of trusted and mature. You probably will, too.
You can see why it is so hard to get new methods accepted and adopted. But you can also see why it is necessary for this process to occur and why security gurus are correct to take a wait-and-see approach. Notice also that it is not enough to invent a great method. Unless the method can attract the interest of the cryptographic research community and be deployed to attract the interests of hackers, it can never really be tested.
So what about the new Wi-Fi security methods? How can we be sure they are safe? It is true that the new security methods for Wi-Fi have not had time in the field. However, the technology used to implement them is based wherever possible on preexisting and well-tried algorithms. It's always tempting for engineers to reinvent the wheel and come up with some grand new scheme of their own. Because of the experience of the security professionals involved in the new Wi-Fi approach, this temptation has been resisted. Having said that, some new concepts have been incorporated, and although they have been reviewed around the world, the "newness" risk does still apply.
We will see later how the lack of review by the security research community was one of the factors that led to problems in the original IEEE 802.11 WEP security. By contrast, the new standard has had participation and review from world-renowned experts in the field, and the principles employed, where novel, have been presented at cryptographic conferences to stimulate review.
Every day, we make countless assumptions. From our earliest days we have learned how to look at situations and decide which ones are safe and which ones are dangerous. Over time we come to perform many of these checks subconsciously; we learn to trust, and for most of us, that trust is only occasionally misplaced, sometimes painfully.
Humans automatically transfer safe assumptions from conscious memory to subconscious behavior. The key word here is automatically; that is, people are not aware this transfer happens. In fact, if it didn't happen, we could not function, as our minds would be cluttered with endless checks and questions. However, this ability, essential to life, is also the open door that has been exploited by generations of con men, pickpockets, and tricksters. It is likewise the starting point for hackers who want to attack your network.
People design software, hardware, and systems. People write and evaluate international standards. No matter how sophisticated the design tools, or what computer-aided design software is used, the designers' assumptions still come shining through. Some are valid and some are false; more dangerously, many are applied subconsciously or implicitly.
Consider a medieval castle. The designers could specify thick walls, deep moats, and strong gates. They could require that gallons of boiling oil be kept ready at all times. But how would the castle folk fare against a modern helicopter cruising overhead, dropping boiling oil on them? They would have no defense because the designers unconsciously assumed that attacks would not come from the air. This assumption is a hidden weakness of the castle design.
How is it possible to protect against things that you can't even imagine? How can you see the implicit assumptions and bring them forward for inspection and testing? There is no certain way, but these challenges mold the way of thinking for security experts.
As a result, it can be difficult to have ordinary conversations with security experts. Here is a simple test to determine whether you are talking to a security guru: Ask him to name the security system he considers to be the strongest in the world for sending secret data by any method (wireless, wire, smoke signals, whatever). Then ask the following question, "Would I be secure if I implemented this in my system?" If the answer is "yes," you are not talking to a real security guru.
Security gurus never say, "This is completely secure." They make statements like, "Based on the assumption that attackers are limited to computational methods and processor architectures similar to today, it is computationally infeasible to mount a [certain type of attack] and no other types of attack are known to be more effective at this time." Sometimes they are prepared to say that one method is definitely as secure as another method, but the word "definite" doesn't get too many outings in the security expert's vocabulary.
Such hedging doesn't translate well to the glossy front of a product box, where customers simply look for the words "this is secure." The best approach for a customer is to understand the strengths of the security method used and, where possible, the assumptions that were made in the design. If the assumptions are reasonable, the method is well designed, and plenty of people are using it (to ensure future support), the customer can be comfortable.
The challenge for hackers, of course, is to look for the little cracks and crevices that result from hidden assumptions. Unfortunately for the rest of us, this search is an intriguing, fascinating, and motivating challenge for hackers. Some people like to do crossword puzzles, and some people like to play sophisticated problem-solving computer games, often wrapped in a fantastical visual landscape. Hacking is another form of these mind games. When inventing a new virus or a password-cracking program, the hacker is trying to see into the mind of the designer and look for false assumptions that were made subconsciously. For example, a recent virus called "Code Red" (actually a worm) worked by exploiting the fact that when internal memory buffers overflowed in a computer, information was accidentally left in memory in a place that was accessible from outside. The system's designers made the false assumption that buffers do not overflow and that, if they do, the excess data is properly discarded. Almost certainly this was a subconscious assumption; it was false, and an attacker found it.