If you've read this far into the book, perhaps you can anticipate our answer. For us, the purpose of testing is to determine whether software is secure enough. It's not to ensure that the application is unqualifiedly secure. And it's not to find all the vulnerabilities. The great Dr. Dijkstra said it best:
Program testing can be quite effective for showing the presence of bugs, but is hopelessly inadequate for showing their absence.
The message here is very important. Although testing is a necessary process, one that should indeed be carried out throughout a development project, it must be handled carefully. Testing is not a substitute for sound design and implementation. It is not a cure-all to be applied at the end of a development project. And there's a trap: we have observed engineers who test a program against a publicly distributed attack tool and then declare that their code is "secure" when the attack tool fails. In fact, all that they've proven is that their program can resist one specific attack tool.
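The trap is easy to demonstrate. Here is a minimal, purely illustrative sketch (the function and payloads are our own invention, not from any real attack tool): a "sanitizer" that defeats the exact payload one tool sends, yet remains wide open to a trivial variant.

```python
def naive_sanitize(html: str) -> str:
    # Strips only the exact, lowercase payload that one specific
    # attack tool happens to send.
    return html.replace("<script>", "")

# The single-tool "test" passes, so the engineer declares victory:
assert "<script>" not in naive_sanitize("<script>alert(1)</script>")

# ...but a trivial uppercase variant sails straight through:
assert "<SCRIPT>" in naive_sanitize("<SCRIPT>alert(1)</SCRIPT>")
```

Passing the first assertion proves only that this one payload is blocked; it says nothing about the infinite space of payloads the tool never tried.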
Testing is a kind of analysis, discovering what is and comparing it to what should be. The odd thing about security testing is that the yardstick we would like to hold up to our software is, in one sense, always changing in size.
Consider again the thought experiment we posed in Chapter 5: imagine that you carefully develop a complex e-commerce application and, as a last step, test it against all the savage attacks you can devise (or find out about). Let's assume that you've followed our advice, ensuring that the web server itself is patched and up to date, and that the host computer on which the application and server run is safe as well. Let's further assume that you have done all this flawlessly and everyone else has done his or her part. In short, the system is "secure."
Now imagine that the operations folks shut down the system nicely, wrap it up in plastic, and put it in a closet for six months. You fetch it out and set it to running again. Is it still secure? Or have vulnerabilities newly discovered in the operating system, web server, language library, third-party package, Internet protocols, or even a second instance of the software you wrote opened the system's resources to simple attacks? And if this is the case, what does this say about security and testing for security?
We think it means that complex systems developed by everyday methods can never really reach a sustained secure state. It means that we need to squeeze out as many vulnerabilities as we can, retest often, and plan carefully to mitigate the risks associated with the latent bugs we miss.
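The shelf-life thought experiment can be sketched in a few lines. Everything here is hypothetical, including the package names and the advisory identifier; the point is only that the same unmodified system can be "secure" at one audit and vulnerable at the next, because the yardstick moved while the software stood still.

```python
# The system's pinned dependencies never change while it sits in the closet.
pinned = {"webframework": "2.3.1", "tls-lib": "1.0.2"}

# Advisory knowledge at deployment time: nothing known against these versions.
advisories_at_deploy = {}

# Six months later, a flaw has been published against an unchanged component.
# ("EXAMPLE-2024-0001" is an invented identifier.)
advisories_six_months_later = {
    ("tls-lib", "1.0.2"): "EXAMPLE-2024-0001",
}

def vulnerable(pins, advisories):
    """Return advisory IDs that apply to the pinned versions."""
    return [advisory for (name, version), advisory in advisories.items()
            if pins.get(name) == version]

# Same code, same configuration; only the world's knowledge has changed.
assert vulnerable(pinned, advisories_at_deploy) == []
assert vulnerable(pinned, advisories_six_months_later) == ["EXAMPLE-2024-0001"]
```

This is why retesting often is part of the answer: a security test result is a statement about a moment in time, not a permanent property of the code.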