6.5 Case Studies

In the following sections, we describe a few situations from our own careers that illustrate scenarios relatively common in industry. For each, we explain how we approached the problem, the types of testing, tools, and methodologies we used along the way, and the rationale behind the various selections we made.

6.5.1 Case 1: Full Service Network Review

Several years ago, we were asked by a telecommunications company to perform a "paper review" of the security architecture of a so-called full services network (FSN), a video, audio, and data network that was to run on top of an Asynchronous Transfer Mode (ATM) infrastructure. The design was intended to provide bandwidth on demand to their customers for a wide range of these different services.

In discussing the project goals and objectives with the company, we learned that their chief concern was preventing people connected to the FSN from fraudulently provisioning services without being charged for them. Because service theft represented their primary source of lost revenue, this priority seemed logical to them.

We started by reviewing the network and server architecture in depth, looking for flaws in the design of how data or administrative traffic would traverse the network. We paid particular attention during this part of the review to ensuring that the identification and authentication (I&A) of all customers on the network was sufficiently strong to prevent a customer from forging an identity (and thus stealing services). We spent days poring through the documentation and came up with very little.

Next, we started to concentrate on how network circuits are provisioned by diving deep into the ATM architecture. This time, we concentrated on transport-layer network protocols: could they be spoofed, forged, or otherwise compromised? Here too, we found that the company engineers who had designed the network clearly understood the technologies that they were implementing and had done a superb job.

At this point, we were nearly ready to declare defeat (at least, from our perspective), when we decided to look at the project a bit differently. Instead of looking simply for flaws in how the network technology was designed, why not look at the situation from the outside in? How had attackers historically attacked data networks? How would that impact the ATM underpinnings? Because one of the services available to the end customer was going to be data communications, we decided to assume that the customer was connected to a data circuit and otherwise blissfully ignorant of the underlying ATM networking.

So this time, we looked at approximately ten previously observed attacks on IP networks, ranging from ICMP data flooding to denial of service attacks. From our theoretical model, we asked: what would happen to the ATM side of the network in the face of those IP attacks?

What we found (remember that this was purely theoretical) was that it was likely that many of the extant IP-level attacks would wreak havoc on the underlying ATM network. In short, the designers of the ATM infrastructure had done a great job of addressing the domain they were most familiar with but had failed to consider the ramifications outside that domain.

When we presented our findings to the design team, they were quite surprised. Some attacks that had been observed on the Internet for more than a decade were entirely new to them, and indeed, the engineers had not adequately considered them in their design of the FSN. So, they went back to the proverbial drawing board to make some adjustments to that design.

This case study teaches several lessons. The following are especially important:

  • It's important to include domain experts on the design team who can speak to all of the security threats that a design is likely to face.

  • It's equally important that the testing team be able to think like an attacker in reviewing the application.

6.5.2 Case 2: Legacy Application Review

Both of your authors were involved in a large-scale review of dozens of legacy applications at a major corporation. The object of the review was to analyze the company's production data-processing environment for security vulnerabilities. The company had recently undergone a massive restructuring of many of its business applications, transitioning them from traditional database applications into web-enabled applications with more modern front ends. The security officer of the company was (rightfully) concerned that they had inadvertently introduced vulnerabilities into their production business systems by going through this restructuring. So, with that concern in mind, we set out to review most of the applications for their levels of security.

The approach we took evolved over the life of the project, for a number of reasons. We started out deciding to use these methods:

Perform network penetration testing

We undertook several external and internal network scans for OS-level vulnerabilities and misconfigurations. These scans were good at finding operations-level problems, but it turned out that they failed to address the business impacts of the applications under review.

Undertake an operating system configuration review

Similarly, we ran numerous host-level reviews of the OS configurations. These pointed out more vulnerabilities and misconfigurations in the application servers, but also failed to hit the business impacts of the applications themselves.

Do a code review

We briefly considered going through a static code review but quickly dismissed the idea for a variety of reasons. First and foremost, there were simply too many applications; the undertaking would be too enormous to even ponder. Second, the tools available for doing static code analysis were few, the languages we needed to evaluate were many, and the tools were unlikely to find a sufficient set of real problems in the code.

The testing that we did was useful to a degree: it pointed out many vulnerabilities, but those vulnerabilities turned out to be primarily in the operating environments of the applications, not the applications themselves. The results weren't that useful, though, because they didn't provide the application owner with a clear list of things to correct and how to correct them. Further, they didn't in any way quantify the business impact or risks to the corporation. Thus, although we could cite hundreds of vulnerabilities in the environments, we couldn't make a business case sufficient for the company to act on. Back to the drawing board!

Next, we decided to interview the business owners, software designers, and operations staff of each of the applications. We developed a desk check process in which we asked each of these people the same questions (see the sidebar Legacy Application Review Questions for examples) and provided normalized results in a multiple-choice format. That way, the results would be quantifiable, at least to a degree.

Legacy Application Review Questions

During our discussions with the application business owners, software designers, and operators, we asked a series of questions, some formal and some ad hoc. Here are some examples of the questions that we asked:

  • What is the value to the corporation of the business process that this application runs?

  • How much would it cost the corporation on an hourly, daily, and weekly basis if the application were unavailable?

  • If an intruder succeeded at removing all of the application's data, what would the downtime be to restore everything?

  • How does the application identify and authenticate its users?

  • What network protocols are used by the application to communicate with the user? With other applications? With the operations staff?

  • How are backups performed? How are the backup tapes protected?

  • How are new users added? How are users removed?

In conducting these interviews, we quickly recognized how important it was for us to make each interviewee feel comfortable talking with us. As we discussed earlier, it's important to create an atmosphere of wanting to find flaws in code in a way that's entirely nonjudgmental and nonconfrontational. In this project, we helped the interviewees relax by adopting a nonthreatening manner when asking questions likely to raise sensitivities. Even though our questionnaires were multiple-choice in format, we frequently went through the questions in a more narrative manner. At one point, we experimented with distributing the questionnaires and having the various participants fill in the responses and send them to us, but we found it more effective to fill in the forms ourselves during the interview process.

This approach turned out to be very useful to the corporate decision makers. With the added information coming from our interviews, we could demonstrate business impacts much more effectively, and we could essentially grade each application on its degree of security. What's more, we could provide the business owner and the software developer with a clear list of things that should be done to improve the security of the application. The lists addressed operational factors as well as design issues in the application code itself. (Our review did, however, stop short of examining actual source code for implementation flaws.)
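The grading step can be sketched as follows. This is purely illustrative, not the project's actual scoring: the question names, the 0-to-3 answer scale, and the letter cutoffs are our own assumptions.

```python
# Illustrative sketch of turning normalized multiple-choice interview
# answers into a per-application security grade. Question names, the
# scoring scale, and the grade cutoffs are hypothetical.

# Each answer is scored 0 (worst) to 3 (best) on the multiple-choice scale.
ANSWERS = {
    "billing-app": {"authn_strength": 1, "backup_protection": 3, "user_deprovisioning": 0},
    "hr-portal":   {"authn_strength": 3, "backup_protection": 2, "user_deprovisioning": 2},
}

def grade(scores, max_per_question=3):
    """Map the fraction of possible points to a coarse letter grade."""
    fraction = sum(scores.values()) / (max_per_question * len(scores))
    for cutoff, letter in ((0.9, "A"), (0.75, "B"), (0.5, "C")):
        if fraction >= cutoff:
            return letter
    return "D"

for app, scores in ANSWERS.items():
    print(app, grade(scores))  # billing-app D, hr-portal B
```

The point of normalizing the answers this way is that two very different applications become directly comparable, which is what let us rank dozens of them for the decision makers.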

Though our business-oriented approach worked best in this case study, a more technology-oriented approach is frequently more useful to the actual code development team during application design and implementation. That's because a technology-oriented solution can provide the development team with a list of specific actions to take to secure the technology components of the system, and that's exactly what they're likely to be looking for. The business approach did a great job in this project, though, at meeting the requirements of analyzing the security of the legacy applications and assessing the potential impact to the corporation of a compromise in the security.

This case study teaches several lessons. The following are especially important:

  • Given the volume of applications studied during this project, it was not feasible to conduct reviews down at the code level. Instead, the designs can be reviewed by interviewing key personnel, and the operational configurations can be tested empirically by conducting network penetration tests. While not perfect, this approach represents a reasonable compromise of time and cost.

  • A process like the wholesale "web enabling" of older applications may lead to additional design-level vulnerabilities in an application that were absent from the original design. When making such a sweep, you should treat the changes with at least the same degree of security diligence that you applied to the original design. Don't treat such a project as a simple application maintenance procedure.

6.5.3 Case 3: Customer Portal Design

In one web portal design project we participated in, the development team had some rather substantial security hurdles to overcome. Among other things, the portal was to be used to provide highly sensitive reports to clients of the company developing the software. Furthermore, the company was a security service provider, so it had to exercise the very best in secure software practices to set an example for its clients and to protect its reputation. In the following sections, we've included the story, told by the two major developers themselves (with as little editing by this book's authors as possible), of what they actually did to develop a state-of-the-practice secure web portal.

Project goal

We needed to provide a secure, reliable, and easily accessible mechanism for delivering reports to our clients. Not all of our clients had access to the encryption mechanism that we used (PGP) and, while some of our clients were Windows-based, others used Unix. We knew that all of our clients had access to the Internet, so the logical solution was a secure web-based portal; a portal would allow us to have a standard methodology for delivering reports to our clients.

In addition to being web application security testers, we had also written a few internal administrative applications ourselves. Unfortunately, none of the applications we had developed had needed the degree of security required by our proposed Internet-accessible portal. On completion, the portal would house our customers' most critical data, including reports of all of their security vulnerabilities. The fact that the application was going to be accessible from the Internet raised a big red flag for us from a security perspective. Anyone connected to the Internet could potentially attack the portal; therefore, we needed to make security a top priority.

Initial project stage

Because both of us were traditional engineers, we started with an engineering approach to this process (envision, define the requirements, develop a design, implement the design, then test and retest). We wanted a web portal that securely allowed users to view reports, contact information, and other client-specific information.

First, we had a brainstorming session to identify what the project needed to encompass, who needed to have input, and what resources could be allocated. We needed to define the functionality requirements, so we obtained input from the project managers as well as feedback from our clients.

Next, we drafted a document to define clearly what we were trying to do. We then asked ourselves what the security requirements should be. Because we both had tested a number of web applications in the past, we came up with our own list of security requirements, but to be complete we also searched the web, newsgroups, and mailing lists for recommendations. The www.owasp.org site was particularly helpful.

Project design

When we started to design the portal, our principal concerns were authentication, session tracking, and data protection.


Authentication

Authentication is the "front door" by which a user enters an application. The authentication must be properly designed to secure the user's session and data. Our authentication design was based entirely on the security requirements that we defined in the previous stage of development. However, after a round of initial prototype testing, we found that our original requirements did not include proper error checking to avoid SQL injection attacks, so we added the required error checking to secure the authentication of the application.
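The kind of error checking involved can be sketched in a few lines. This is a minimal Python illustration (the portal itself was written in PHP, and the table and column names here are assumed): binding login input as query parameters keeps it from altering the SQL statement.

```python
# Sketch of injection-safe authentication. The schema and the stored
# hash value are hypothetical; the technique is parameter binding.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, pw_hash TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "s3cr3t-hash"))

def authenticate(username, pw_hash):
    # Placeholders (?) are bound by the driver rather than interpolated
    # into the SQL text, so input like "' OR '1'='1" stays plain data.
    row = conn.execute(
        "SELECT 1 FROM users WHERE username = ? AND pw_hash = ?",
        (username, pw_hash),
    ).fetchone()
    return row is not None

print(authenticate("alice", "s3cr3t-hash"))   # True
print(authenticate("' OR '1'='1 --", "x"))    # False: injection attempt fails
```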

Session tracking

For session tracking, we had seen a number of off-the-shelf implementations, but we felt that we could do better. We liked the idea of having the user reauthenticate on each page, so we came up with our own session tracking mechanism. Our design did require the user's browser to accept a cookie and present it on each page. Although that increased the overall workload of the application, we thought that this overhead was worth the extra security it provided. We based the design of the reauthentication mechanism entirely on avoiding the poor practices that we'd seen during prior application tests.
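One common way to build such a per-page check is a server-signed cookie, sketched below in Python. This is in the spirit of the mechanism described, but the key, cookie format, and function names are our own assumptions, not the portal's actual design.

```python
# Sketch of per-page reauthentication via an HMAC-signed cookie.
# SERVER_SECRET and the "user:signature" format are hypothetical.
import hashlib
import hmac

SERVER_SECRET = b"keep-this-key-off-the-client"  # illustrative server-side key

def issue_cookie(username):
    sig = hmac.new(SERVER_SECRET, username.encode(), hashlib.sha256).hexdigest()
    return f"{username}:{sig}"

def revalidate(cookie):
    """Run on every page load: reject any cookie whose signature fails."""
    username, _, sig = cookie.partition(":")
    expected = hmac.new(SERVER_SECRET, username.encode(), hashlib.sha256).hexdigest()
    return username if hmac.compare_digest(sig, expected) else None

print(revalidate(issue_cookie("alice")))       # alice
print(revalidate("mallory:forged-signature"))  # None
```

The per-page verification is exactly the "reauthenticate on each page" overhead described above: cheap to compute, but it means a stolen or forged cookie fails on the very next request.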

Data protection

Finally, we wanted to come up with a database scheme that protected our clients' data. We'd seen other web application designs that allowed one user to access another user's data, simply because both users' data resided in the same database tables. It was critical that this application protect each client's data, so we chose to isolate client-specific data into separate tables in the database. This also gave us the option to make database permissions granular to each table, and that granularity helped protect our client data. Although there is a cost to having specific tables for each client, we thought the security benefits outweighed the cost of having more tables.

Project implementation

Once we had the blueprints for our portal design, we started the actual implementation. We needed to decide on the technology to use for the web server, database, and middleware. In addition, because not all web servers, databases, and middleware packages are compatible with each other, we needed to consider products that would work in concert.

Choosing our components

Because the web server is the remote end that a user sees, we decided to choose that product first. We needed a product that had been proven to be secure, had been well tested, and had been used in the field for some time. Our basic options were Netscape Enterprise Server, Microsoft's Internet Information Services (IIS), and Apache's HTTPD Server. Our primary concern was security, and our secondary concern was cost. Naturally, other attributes such as product stability were also important. Because of the number of vulnerabilities and required patches associated with Microsoft's IIS server, we decided against that product. Both Netscape's Enterprise Server and Apache's HTTPD Server have a history of being quite secure and stable. Because in this case cost was a secondary issue, we chose Apache.

Next we needed a platform on which to run our web server. Fortunately, Apache runs on most operating systems, so again we returned to our priorities: security, cost, and stability. Linux offered a secure, reliable platform for free, and we had ample experience with securely configuring Linux. We also considered the various BSD-based operating systems. In the end, we decided to go with Linux, primarily because we had more experience with that operating system than with any of the BSD family.

For the database implementation, we figured that there were four realistic options: Oracle, MySQL, PostgreSQL, and MS-SQL. Again our priorities were security, cost, and stability. All of these databases have the ability to be properly secured. Because PostgreSQL was a fairly new player in the large-scale database deployment arena, we decided not to use it. For consistency with our operating environment, we decided that we wanted the database to run on the same platform that our web server was running on, Linux. Because MS-SQL does not natively run on Linux, we eliminated that database as well. Now we were down to MySQL and Oracle. Fortunately, we had an Oracle infrastructure available to us, so that's what we chose. Oracle can be securely configured as a stable environment, and because we had the licensing available to us, cost was not a major issue here.

Next we needed something running on Linux that could glue the web server (Apache) and the database (Oracle) together. PHP meets these requirements; it can be securely configured and is free. In addition, we both had experience programming in Perl and PHP. PHP's syntax borrows heavily from Perl but is tailored to work with embedded HTML, so it was a natural choice for us.

Securely configuring our components

Once we'd chosen our implementation platforms, we needed to make sure that we could properly configure each of the elements and still implement our design.

For our PHP configuration, we cross-referenced some checklists (specifically, http://www.securereality.com.au/archives/studyinscarlet.txt) to make sure that insecure options were disabled.

Securing our code

Because there were only two of us on the development team, we both reviewed all code implemented to ensure that we were using the best security practices. We also found that the checklists for the PHP configuration included a number of PHP language do's and don'ts. In implementing the code, we supplemented our own programming knowledge by following these guidelines.

During this phase, we ran our common buffer overflow tests. Even though buffer overflows aren't problematic in PHP itself, we wanted to test the application as a whole; even if the front end didn't overflow, the database back end still could. We also configured the database to handle data only up to certain sizes and to prevent users from filling the database.
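The size-limiting idea can be sketched simply. The cap and field below are hypothetical, not the portal's actual limits: oversized input is rejected before it reaches the database, so no single user can fill shared storage.

```python
# Sketch of an application-side size cap backing up the database's own
# limits. MAX_REPORT_BYTES is an illustrative value.
MAX_REPORT_BYTES = 64 * 1024  # hypothetical per-upload cap

def accept_report(data: bytes) -> bytes:
    if len(data) > MAX_REPORT_BYTES:
        raise ValueError("report exceeds size limit")
    return data

print(len(accept_report(b"x" * 1024)))  # 1024: within the cap
try:
    accept_report(b"x" * (65 * 1024))
except ValueError as err:
    print(err)  # report exceeds size limit
```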

We made sure to check all code exit points so that the application always terminated to a known state. If we hadn't done this, the application could have left database connections open and possibly caused a resource denial of service condition.
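A sketch of that "terminate to a known state" discipline, in Python rather than the portal's PHP: a context manager releases the database connection on every exit path, including exceptions, so an aborted request can't leak connections. The function and its failure flag are illustrative.

```python
# Sketch of guaranteed cleanup on all exit points. The with-block closes
# the connection whether the request succeeds or raises.
import sqlite3
from contextlib import closing

def handle_request(succeed=True):
    with closing(sqlite3.connect(":memory:")) as conn:
        conn.execute("CREATE TABLE t (x INTEGER)")
        if not succeed:
            raise ValueError("simulated mid-request failure")
        return "done"

print(handle_request())  # done: connection closed on the normal path
try:
    handle_request(succeed=False)
except ValueError:
    print("failed, but the connection was still closed")
```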

Luckily, the code was short enough that we could visually review the code for data validation. All input that was accepted from the user was first filtered. Had we not checked the code for data validation, the application could have been vulnerable to a SQL injection or cross-site scripting (XSS) attack.

Project testing

Finally, we had our product tested by other security experts within the organization during an independent evaluation. The testing team was provided with five accounts with which to test the application. The objective of the test was to identify any vulnerabilities within the application, operating system, or network configuration. Prior to initial deployment of the application, we had the OS tested with a thorough network penetration test from a well-known and well-trusted security testing team. They identified some additional low-level security issues.

Once we'd put these additional security measures in place, we retested the entire application. Only after we'd addressed all security issues was the application deployed. Fortunately, we had the foresight to build the security requirements in at the beginning of the process, which made correcting minor issues much cheaper than it otherwise would have been.

Security testing did not stop here. It continues on an ongoing basis. Every week, the application is scanned for basic vulnerabilities, and every quarter, the entire application is retested. In addition, all passwords are cracked once a week to find any weak passwords.

Project conclusion

With this project we basically needed to make security decisions through all phases of the development process. We consistently had to refer to newsgroups, vendor web sites, and security web sites to make sure that we were making intelligent decisions at each step in the development process. We found that secure coding practices alone did not provide enough protection and that we needed to scrutinize all elements of the application.

Lessons learned

This case study teaches several lessons. The following are especially important:

  • When security is such a high priority for a project from the outset, many of the design decisions are driven primarily by security requirements.

  • It is vital to exercise a great deal of caution in designing the identification and authentication (I&A) system, as well as the state tracking and data compartmentalization systems.

  • For this type of application, the implementation and operation of the system should not be treated as static; weekly and quarterly tests ought to be conducted to check for newly discovered vulnerabilities on an ongoing basis.

  • The design team needs to consult numerous external sources for design ideas.

  • It is worthwhile to divide the engineering team so that some of the engineers concentrate on the design and implementation of the code, while others are called on to test the software from a zero-knowledge perspective. The net result is a reasonably objective testing of the application.