An excellent example of a system that was designed from scratch to be secure is the Java "sandbox." Java has certainly had its share of security vulnerabilities, but it remains an excellent example of the principle that many mistakes can be designed out by selecting an appropriate security model.
Let's let the chief security architect of Java, Sun's Li Gong, explain the idea of the sandbox:
The original security model provided by the Java platform is known as the sandbox model, which [provided] a very restricted environment in which to run untrusted code obtained from the open network... [L]ocal code is trusted to have full access to vital system resources (such as the filesystem) while downloaded remote code (an applet) is not trusted and can access only the limited resources provided inside the sandbox...
Overall security is enforced through a number of mechanisms. First of all, the language is designed to be type-safe and easy to use. The hope is that the burden on the programmer is such that the likelihood of making subtle mistakes is lessened compared with using other programming languages such as C or C++. Language features such as automatic memory management, garbage collection, and range checking on strings and arrays are examples of how the language helps the programmer to write safe code.
Second, compilers and a bytecode verifier ensure that only legitimate Java bytecodes are executed. The bytecode verifier, together with the Java Virtual Machine, guarantees language safety at run time...
Finally, access to crucial system resources is mediated by the Java Virtual Machine and is checked in advance by a SecurityManager class that restricts the actions of a piece of untrusted code to the bare minimum.
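To make the last mechanism concrete, here is a minimal sketch of how the sandbox mediates access to a vital resource. It uses the classic (now deprecated) SecurityManager API that Gong describes; the class and method names in the sketch itself (SandboxSketch, AppletSecurityManager) are our own illustrative inventions, not actual JDK code. The JVM would consult a manager like this before letting untrusted code touch the filesystem.

```java
// Sketch of the sandbox's mediation step, using the classic SecurityManager
// API. AppletSecurityManager is a hypothetical, maximally restrictive manager
// of the kind applet code ran under: it denies all file reads outright.
public class SandboxSketch {

    static class AppletSecurityManager extends SecurityManager {
        @Override
        public void checkRead(String file) {
            // Untrusted code gets the "bare minimum": no filesystem access.
            throw new SecurityException("sandboxed code may not read " + file);
        }
    }

    public static void main(String[] args) {
        SecurityManager sm = new AppletSecurityManager();
        try {
            // This is the check the JVM performs on behalf of file I/O.
            sm.checkRead("/etc/passwd");
            System.out.println("read allowed");
        } catch (SecurityException e) {
            System.out.println("denied: " + e.getMessage());
        }
    }
}
```

In the real JVM the manager was installed globally (via System.setSecurityManager) rather than called directly as shown here; the point of the sketch is only that every sensitive operation funnels through one explicit, auditable check.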
Now, as it happens, Mark can add a personal footnote about the Java sandbox. We include it as a cautionary tale, with the caveat that we are relying on his recollections for the accuracy of the story. (The sense of it is certainly right.)
At the time that Java was first released, Mark was working as Sun's security coordinator, with technical responsibility for collecting vulnerabilities, issuing security patches, and sometimes developing fixes. The Java product was outside his purview (they did their own security, and their own patches), but he decided to offer his services, to see if he could help Sun avoid the sort of "catch and patch" existence they had struggled with for so long at the operating system level.
Mark contacted the appropriate folks in the Java group, identified himself, and offered to conduct a security code review for them with a few of his friends (folks at CERT, for example). Mind you, this was before any security vulnerabilities in Java had been identified at all. His offer was politely declined. Java, he was told, was secure. Perhaps it is secure at the design level, he responded, but it had been his experience that many security errors arise at the point where the design comes in contact with the outside environment in which the software actually operates. He proposed to check, for example, for vulnerabilities resulting from the use of relative filenames (such as user-file) in places where absolute references (like /tmp/user-file) should be used instead, and vice versa.
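The filename pitfall Mark describes can be illustrated with a short, hypothetical check (the class and method names here are ours, not Sun's actual class-loader code). A relative name like user-file resolves against whatever the current working directory happens to be, so the same string can name different files at different times; an absolute name like /tmp/user-file always names the same path.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Illustrative guard for the relative-vs-absolute filename pitfall:
// reject relative names anywhere an absolute reference is required.
public class PathCheck {

    static Path requireAbsolute(String name) {
        Path p = Paths.get(name);
        if (!p.isAbsolute()) {
            // A relative path depends on the current working directory,
            // which an attacker may be able to influence.
            throw new IllegalArgumentException("relative path rejected: " + name);
        }
        return p.normalize();
    }

    public static void main(String[] args) {
        System.out.println(requireAbsolute("/tmp/user-file")); // accepted
        try {
            requireAbsolute("user-file"); // meaning varies with the CWD
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The converse error exists too: hardcoding an absolute path such as /tmp/user-file in a world-writable directory invites link attacks, which is why the review Mark proposed would have looked in both directions.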
Mark was assured that all such problems were well in hand, so his proposed code review never took place. Imagine his surprise a couple of months later; he was about to take the podium at a conference in Houston to talk about "The Java Security Model" when a member of the audience showed him a copy of USA Today announcing the first Java security bugs on the front page. To his horror, one of the bugs (concerning the class loader) turned out to relate to the same relative filename issues he had warned about.
For a copy of this security bulletin, see http://sunsolve.sun.com/pub-cgi/retrieve.pl?doc=secbull/134.
What is the point of this story? We really aren't taking Sun to task here. For many years, Sun has had one of the strongest corporate commitments to security quality of any company in the world. Further, as we argued in Chapter 1, there are serious trade-offs to be considered when balancing a desire for security reviews against the pressure to get a new product to market. (A last point in Sun's defense: they fixed the bugs quickly and well.)
This case study teaches several lessons. The following are especially important:
The best architecture in the world won't protect your applications from attack if you're not willing to look for errors.
The folks who build the software are not the best ones to check it for vulnerabilities.
Finally, and we will repeat this argument at every opportunity: only a holistic approach that engages every stage of development offers a good chance of producing a secure application.