Construction builds the system in a series of iterations. Each iteration is a mini-project. You do analysis, design, coding, testing, and integration for the use cases assigned to each iteration. You finish the iteration with a demo to the user and perform system tests to confirm that the use cases have been built correctly.
The purpose of this process is to reduce risk. Risk often appears because difficult issues are left to the end of the project. I have seen projects in which testing and integration are left to the end. Testing and integration are big tasks, and they always take longer than people think. Left to the end, they are hard and demoralizing. That's why I always encourage my clients to develop self-testing software (see sidebar).
The iterations within construction are both incremental and iterative.
The iterations are incremental in function. Each iteration builds on the use cases developed in the previous iterations.
The iterations are iterative in terms of the code base. Each iteration will involve rewriting some existing code to make it more flexible.
Refactoring (see sidebar) is a highly useful technique in iterating the code. It's a good idea to keep an eye on the amount of code thrown away in each iteration. Be suspicious if less than 10 percent of the previous code is discarded each time.
Integration should be a continuous process. For starters, full integration is part of the end of each iteration. However, integration can and should occur more frequently than that. A good practice is to do a full build and integration every day. By doing that every day, things never get so far out of sync that it becomes a problem to integrate them later.
The older I get, the more aggressive I get about testing. Testing should be a continuous process. No code should be written until you know how to test it. Once you have written it, write the tests for it. Until the tests work, you cannot claim to have finished writing the code.
Test code, once written, should be kept forever. Set up your test code so that you can run every test with a simple command line or GUI button push. The code should respond with either "OK" or a list of failures. Also, all tests should check their own results. There is nothing more time-wasting than having a test output a number, the meaning of which you have to research.
I do both unit and functional testing. Unit tests should be written by the developers, then organized on a package basis and coded to test the interfaces of all classes. I find that writing unit tests actually increases my programming speed.
Functional tests or system tests should be developed by a separate small team whose only job is testing. This team should take a black-box view of the system and take particular delight in finding bugs. (Sinister mustaches and cackling laughs are optional but desirable.)
There is a simple but powerful open source framework for unit testing: the xUnit family. For details, see the link from my home page.
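The core of the xUnit idea is small enough to sketch. The class and method names below are invented for illustration and are not JUnit's API; a real project should use JUnit or another member of the xUnit family. The point is that each check records its own verdict, and the runner answers with "OK" or a list of failures, exactly as described above.

```java
import java.util.ArrayList;
import java.util.List;

// A minimal sketch of the xUnit pattern; names here are invented,
// not taken from JUnit itself.
class MiniTest {
    private final List<String> failures = new ArrayList<>();

    // Each check verifies its own result, so nobody has to interpret
    // raw output by hand.
    void checkEquals(String testName, Object expected, Object actual) {
        if (!expected.equals(actual)) {
            failures.add(testName + ": expected " + expected + ", got " + actual);
        }
    }

    // Answers "OK" or lists the failures.
    String report() {
        return failures.isEmpty()
                ? "OK"
                : failures.size() + " failure(s): " + failures;
    }
}
```

Running every test through a single entry point like this is what makes the one-command, "OK or a list of failures" discipline practical.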
A developer should integrate after every significant piece of work. Also, the full suite of unit tests should be run at each integration, to ensure full regression testing.
The only thing you know for certain about a plan is that things aren't going to go according to it. Managing the plan is all about coping with those changes effectively.
Have you come across the principle of software entropy? It suggests that programs start off in a well-designed state, but as new bits of functionality are tacked on, programs gradually lose their structure, eventually deforming into a mass of spaghetti.
Part of this is due to scale. You write a small program that does a specific job well. People ask you to enhance the program, and it gets more complex. Even if you try to keep track of the design, this can still happen.
One of the reasons that software entropy occurs is that when you add a new function to a program, you build on top of the existing program, often in a way that the existing program was not intended to support. In such a situation, you can either redesign the existing program to better support your changes, or you can work around the existing design in your additions.
Although in theory it is better to redesign your program, this usually results in extra work because any rewriting of your existing program will introduce new bugs and problems. Remember the old engineering adage: "If it ain't broke, don't fix it!" However, if you don't redesign your program, the additions will be more complex than they should be.
Gradually, this extra complexity will exact a stiff penalty. Therefore, there is a trade-off: Redesigning causes short-term pain for longer-term gain. Schedule pressure being what it is, most people prefer to put their pain off to the future.
Refactoring is a term used to describe techniques that reduce the short-term pain of redesigning. When you refactor, you do not change the functionality of your program; rather, you change its internal structure in order to make it easier to understand and work with.
Refactoring changes are usually small steps: renaming a method, moving a field from one class to another, consolidating two similar methods into a superclass. Each step is tiny, yet a couple of hours' worth of performing these small steps can do a world of good to a program.
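To give a sense of how small these steps are, here is an Extract Method refactoring on an invented Invoice class, a sketch only. Before the refactoring, total() computed the sum and the tax in one long expression; afterward, each calculation lives in a small, named helper. The behavior of total() is identical; only the internal structure changes.

```java
import java.util.List;

// Invented example of an Extract Method refactoring. Callers of
// total() see no change in behavior.
class Invoice {
    private final List<Double> lineAmounts;

    Invoice(List<Double> lineAmounts) {
        this.lineAmounts = lineAmounts;
    }

    double total() {
        return subtotal() + tax();
    }

    private double subtotal() {
        double sum = 0;
        for (double amount : lineAmounts) {
            sum += amount;
        }
        return sum;
    }

    private double tax() {
        return subtotal() * 0.10; // flat 10% rate, assumed for illustration
    }
}
```

Because each step preserves behavior, the existing tests can be run after every one, which is what keeps refactoring safe rather than risky.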
Refactoring is made easier by the following principles.
You should refactor when you are adding a new function or fixing a bug. Don't set aside specific time for refactoring; instead, do a little every day.
For more information on refactoring, see Fowler (1999).
A key feature of iterative development is that it is time-boxed: you are not allowed to slip any dates. Instead, use cases can be moved to a later iteration via negotiation and agreement with the customer. The point of this is to maintain a regular habit of hitting dates and to avoid the bad habit of slipping dates.
If you find yourself deferring too many use cases, it's time to redo the plan, including reestimating use case effort levels. By this stage, the developers should have a better idea of how long things will take. You should expect to alter the plan every two or three iterations.
All UML techniques are useful during this stage. Since I am going to refer to techniques I haven't had a chance to talk about yet, feel free to skip this section and come back to it later.
As you look to add a given use case, you first use it to determine what your scope is. A conceptual class diagram (see Chapter 4) can be useful to rough out some concepts for the use case and see how these concepts fit with the software that has already been built.
The advantage of these techniques at this stage is that they can be used in conjunction with the domain expert. As Brad Kain says, "Analysis occurs only when the domain expert is in the room; otherwise, it is pseudo-analysis."
To make the move to design, walk through how the classes will collaborate to implement the functionality required by each use case. I find that CRC cards and interaction diagrams are useful in exploring these interactions. These will expose responsibilities and operations that you can record on the class diagram.
Treat these designs as an initial sketch and as a tool with which to discuss design approaches with your colleagues. Once you are comfortable, it is time to move to code.
Inevitably, the unforgiving code will expose weaknesses in the design. Don't be afraid to change the design in response to this learning. If the change is serious, use the notations to discuss ideas with your colleagues.
Once you have built the software, you can use the UML to help document what you have done. For this, I find UML diagrams useful for getting an overall understanding of a system. In doing this, however, I should stress that I do not believe in producing detailed diagrams of the whole system. To quote Ward Cunningham (1996):
Carefully selected and well-written memos can easily substitute for traditional comprehensive design documentation. The latter rarely shines except in isolated spots. Elevate those spots... and forget about the rest.
I believe that detailed documentation should be generated from the code (JavaDoc, for instance). You should write additional documentation to highlight important concepts. Think of these as comprising a first step for the reader before he or she goes into the code-based details. I like to structure these as prose documents, short enough to read over a cup of coffee, using UML diagrams to help illustrate the discussion.
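Keeping the detailed documentation in the source itself might look like the following sketch (the Order class and its 10 percent tax rate are invented for this example). A JavaDoc run then generates the reference pages directly from these comments, so they live next to the code they describe.

```java
/**
 * A customer order. This class is invented purely to illustrate
 * code-based documentation: JavaDoc builds the detailed reference
 * pages from comments like these.
 */
class Order {

    private final double amount;

    /**
     * @param amount the order amount before tax
     */
    Order(double amount) {
        this.amount = amount;
    }

    /**
     * Returns the order amount including tax.
     * The 10 percent rate is an assumption made for this example.
     */
    double totalWithTax() {
        return amount * 1.10;
    }
}
```

Because the comments sit beside the code, they are far more likely to be updated when the code changes than a separate document would be.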
I use a package diagram (see Chapter 7) as my logical road map of the system. This diagram helps me understand the logical pieces of the system and see the dependencies (and keep them under control). A deployment diagram (see Chapter 10), which shows the high-level physical picture, may also prove useful at this stage.
Within each package, I like to see a specification-perspective class diagram. I don't show every operation on every class. I show only the associations and key attributes and operations that help me understand what is in there. This class diagram acts as a graphical table of contents.
If a class has complex lifecycle behavior, I draw a state diagram (see Chapter 8) to describe it. I do this only if the behavior is sufficiently complex, which I find doesn't happen often. More common are complicated interactions among classes, for which I draw interaction diagrams.
I'll often include some important code, written in a literate programming style. If a particularly complex algorithm is involved, I'll consider using an activity diagram (see Chapter 9), but only if it gives me more understanding than the code alone.
If I find concepts that are coming up repeatedly, I use patterns (see sidebar) to capture the basic ideas.