Sometimes it's easier for programmers to understand what they need to do if they can see clearly what not to do. The following sections list practices to avoid: security design mistakes we have either seen or, alas, made ourselves. We cover the overall design approach, as well as some specific design flaws.
The following practices describe common errors that pertain to the overall design approach:
One trap that we've seen many application designers fall into is to start selecting specific controls or technologies without first thinking through and making a design; that is, to start coding before knowing what is to be done. Some people seem to want to jump straight from the policy level (e.g., "only authorized users can read our files") to decisions about details (e.g., "we'll use hardware password tokens," or "we'll use that XYZ software that checks passwords to make sure they're 8 characters long"). This is an all-too-easy mistake for engineers, who are problem solvers by nature, to make. Sometimes, it can be hard for us to leave an unsolved problem on the table or whiteboard, and stay at the conceptual level. Resist the temptation to solve that problem as long as you can. For one thing, you may be able to design it away!
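To make the distance between those two levels concrete, here is a minimal sketch in Python. It is our own hypothetical illustration, not anything from a particular product: the PasswordPolicy class, its field names, and the eight-character figure are assumptions chosen only for the example.

```python
# A hypothetical sketch (ours, not from any particular product) contrasting a
# policy decision recorded explicitly with a mechanism detail baked in before
# any design conversation has happened.

from dataclasses import dataclass


@dataclass(frozen=True)
class PasswordPolicy:
    """Design level: what the organization has actually decided to require."""
    min_length: int = 8            # a detail that should flow from the design
    require_mixed_case: bool = True


def meets_policy(candidate: str, policy: PasswordPolicy) -> bool:
    """Mechanism level: one possible enforcement of the stated policy."""
    if len(candidate) < policy.min_length:
        return False
    if policy.require_mixed_case:
        has_upper = any(c.isupper() for c in candidate)
        has_lower = any(c.islower() for c in candidate)
        if not (has_upper and has_lower):
            return False
    return True


def premature_check(candidate: str) -> bool:
    # The jump we are warning against: a number chosen before the design exists.
    return len(candidate) >= 8
```

The checker itself is beside the point; what matters is the order of decisions. The policy object exists because somebody settled the design question first, whereas the premature version answers a question that nobody has asked yet.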
We alluded to this earlier in our discussion of mental models and metaphors. Many of the strangest vulnerabilities we've seen were built into applications at design time because the programmer indulged in the luxury of thinking "inside the box" created by their understanding of the application's purpose. Forget that box: attackers won't be thinking inside it.
At this point, we swallow our pride and list a number of specific design mistakes we ourselves have made. Looking back, we find that most of the design errors we've made have arisen when we allowed ourselves to be so concerned with the "security" aspects of business software that we forgot about the "business" part. So please avoid (as we failed to) the design flaws in the following list:
Sometimes our ideas are too complicated, or demand too much attention from overworked staff, to work reliably. For example, one of our ideas required several log files to be checked and trimmed (by humans) daily. It's usually better to compromise: adopt practices (in this case, less frequent checks) that are theoretically less effective but more likely to really get done.
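In hindsight, part of that particular chore could also have been scripted, so that the remaining human work is a short periodic review rather than a daily ritual. The sketch below is a hypothetical illustration of that shape; the file names, keywords, and schedule are assumptions, and the actual trimming and rotation of logs is better left to a standard tool such as logrotate.

```python
# A hypothetical sketch of the kind of job that can shrink a daily human chore
# to a periodic review: scan each log for entries worth a second look and write
# a short summary. File names and keywords are illustrative assumptions; actual
# trimming/rotation is best left to a tool such as logrotate.

from pathlib import Path

LOGS = [Path("/var/log/app/access.log"), Path("/var/log/app/audit.log")]
SUMMARY = Path("/var/log/app/review-summary.txt")
KEYWORDS = ("DENIED", "FAILED", "ERROR")    # what counts as "worth a look"


def summarize_logs() -> None:
    notes = []
    for log in LOGS:
        if not log.exists():
            notes.append(f"{log}: missing (worth investigating)")
            continue
        lines = log.read_text(errors="replace").splitlines()
        flagged = [line for line in lines if any(k in line for k in KEYWORDS)]
        notes.append(f"{log}: {len(lines)} lines, {len(flagged)} flagged")
        notes.extend("    " + line for line in flagged[:20])   # cap the noise
    SUMMARY.write_text("\n".join(notes) + "\n")


if __name__ == "__main__":
    summarize_logs()    # run from cron; a person reads the summary weekly
```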
Some of our suggestions have been flawed for the practical reason that they would have annoyed a great many of our fellow employees, caused work to be lost, and cost the company even more money as a result of the workaround software written in reaction. For example, many years ago we instituted an automatic "idle terminal" timeout. After flashing several warnings on the user's screen, it would finally force the application they were running to close up work and exit. With a revolt on our hands (and "phantom keep-alive" software spreading throughout the company), we were forced to find a better approach. For the past couple of decades or so, we've simply used software to lock the terminal, leaving the applications active (or, when the operating system would allow it, quiesced).
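For the curious, here is a minimal sketch of the gentler approach, assuming a Linux desktop session where the xprintidle, notify-send, and loginctl utilities are available; the thresholds and polling interval are arbitrary assumptions. The session is warned and then locked, and every application keeps running behind the lock.

```python
# A minimal sketch of "lock, don't kill", assuming xprintidle (X11 idle time),
# notify-send (desktop notification), and loginctl (systemd-logind) exist on
# the system. Nothing is forced to exit; applications run on behind the lock.

import subprocess
import time

WARN_AFTER = 10 * 60    # seconds of inactivity before the on-screen warning
LOCK_AFTER = 15 * 60    # seconds of inactivity before the screen locks


def get_idle_seconds() -> float:
    # xprintidle reports X11 idle time in milliseconds; swap in whatever your
    # platform offers if this assumption doesn't hold.
    out = subprocess.run(["xprintidle"], capture_output=True, text=True, check=True)
    return int(out.stdout.strip()) / 1000.0


def warn_user() -> None:
    subprocess.run(
        ["notify-send", "Security", "Your session will lock soon due to inactivity."],
        check=False,
    )


def lock_session() -> None:
    # Locks the caller's session via systemd-logind; applications are untouched.
    subprocess.run(["loginctl", "lock-session"], check=False)


def main() -> None:
    warned = locked = False
    while True:
        idle = get_idle_seconds()
        if idle < WARN_AFTER:
            warned = locked = False      # user is active again; reset
        elif idle >= LOCK_AFTER:
            if not locked:
                lock_session()
                warned = locked = True
        elif not warned:
            warn_user()
            warned = True
        time.sleep(30)


if __name__ == "__main__":
    main()
```

The design choice that mattered was where the pain lands: the lock costs an idle user a few seconds, while the forced exit destroyed work and bred workarounds.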
Some of the "best" security solutions in theory cannot succeed in practice, because they interfere with the easygoing and casual atmosphere many enterprises try to foster. We once proposed the installation of biometric scanners as an everyday replacement for passwords. Upper management quite properly shot down this idea, which would have required people to put their thumbs into a little cylindrical scanner or put their eyeballs up to an eyepiece. We'll try this again someday when the technology is less cumbersome and intimidating, or perhaps when the need for authentication is so severe that our coworkers will agree that such measures are necessary.
We've been guilty of recommending the purchase or development of intrusion detection software that would have gutted the departmental budget. Fortunately, this proposal was turned down too, and we were able to invest in improving the security of the applications and practices we used every day instead.
Don't laugh: some practices that are mandatory in one country of operation are forbidden in another country. Privacy regulations are an excellent example. Export restrictions on some handy advanced technologies can be a factor, too. Considerations of legality can also arise in heavily regulated industries such as banking and medical care.
Enough assaults on our pride! In general, especially early in our careers, we proposed security techniques that were intuitively appealing to security specialists like ourselves, but were impractical because of their prospective impact on business practices or culture.
Note that such drawbacks may not be evident during requirements gathering, risk analysis, or even the first round of technology selection. They may emerge only during later stages of the analysis, when experienced hands (or, as we'll discuss later, specialized software) take issue with the recommendations of the technologists.