As an organization transitions to Scrum, the Team bears the brunt of the change. Whereas before the project manager told the Team what to do, now the Team has to figure out what to do on its own. In the past, team members worked on their own, but now they work with each other. Before Scrum, team members had lots of time to complete a release, but now they are asked to pull together potentially releasable software at the end of each Sprint. We’ve looked at several instances of Service1st using Scrum in previous chapters. In this chapter, we’ll see the trials and tribulations the team went through as Service1st learned the ins and outs of Scrum.
One hundred and twenty people worked in the development organization. Service1st used a sequential, or waterfall, life cycle, and the staff was organized accordingly, with designers reporting to a design manager, coders reporting to a programming manager, testers reporting to a quality assurance (QA) manager, and writers reporting to a documentation manager. Service1st released a new version of its software approximately every six months. When I arrived to implement Scrum, the next planned release involved an aggressive integration into Service1st’s main product line of workflow and collaboration software built by a new partner.
The vice president of development, Hal, was dissatisfied with the waterfall process; he was particularly displeased by the crunch that happened during the last two months of every release cycle. It appeared to him that his development organization thought about the work for four months, eventually felt the pressure of the nearing release date, and then worked days, nights, and weekends to code, test, and document. The result was an exhausted staff in no shape for the next release cycle.
After extensive investigation by Hal and his managers, Hal decided to try Scrum. Scrum’s iterative, incremental practices would provide regular progress throughout the release cycle. I met with Hal and his managers to discuss how to get started: define the Product Backlog for the release, divide the development organization into cross-functional Scrum teams, and parse the work among the teams. We struggled with this task, trying very hard to take into account team dynamics, personalities, domain knowledge, and couplings between functionalities. We wanted the teams to get along as well as possible, have all of the knowledge and skills needed to do the assigned work, and not be dependent on the progress of other teams for their own team’s success. We weren’t able to do this to our satisfaction without splitting people with key domain or technical knowledge between teams. One individual, for example, was assigned to four different teams. Although this was hardly ideal, we didn’t want to spend the entire six months planning for the release, either, so we decided to move on.
I discussed with Hal and his managers some of the most important things that can be done to optimize team performance. I recommended removing the cubicles and setting up collocated team spaces. Hal decided to wait on this recommendation because they had recently built the cubicles. I also recommended eliminating all of the development artifacts—like design documents—that existed only to support the waterfall approach. Scrum relies on high-bandwidth, face-to-face communication and teamwork; cubicles and unneeded artifacts promote isolation and misunderstandings.
I conducted a ScrumMaster training session to prepare Hal’s managers for Scrum. During this training, I emphasized that ScrumMasters have no authority over the development teams; they are present only to ensure that the Scrum process is adhered to and that the teams’ needs are met. We then kicked off the Scrum and the release for the teams with Sprint planning meetings. The teams started and ended their Sprints simultaneously to facilitate the overall review of the release’s progress every 30 days. During these initial Sprint planning meetings, we reinforced the various Scrum rules. In particular, we emphasized that a Team is self-managing, that it has only 30 days to do its work, and that its work must result in completely developed pieces of functionality.
Some of the teams expressed doubts that, as constituted, they were adequately staffed. Some teams didn’t seem to have enough testers to do all of the testing or enough writers to create all of the documentation. In response, I explained to them that a Team is cross-functional: in situations where everyone is chipping in to build the functionality, you don’t have to be a tester to test or a designer to design.
The teams at Service1st met every day for the Daily Scrum. Alicia, the ScrumMaster of several of the teams, directed the meetings crisply and professionally, ensuring that everyone answered these three questions: What have you done since the last Daily Scrum? What are you planning on doing between now and the next Daily Scrum? Do you have any impediments to report? She helped the teams complete their meetings within the time-box of 15 minutes.
When Alicia went on vacation, another ScrumMaster, George, filled in for her. He was pleased at how crisply the Daily Scrums went, but nonetheless he was troubled by a strange feeling that something was amiss. After several days, he realized that he heard hardly any requests for help or offers of help. There were no side comments that he had to contain to keep the meeting to 15 minutes. After some sleuthing, George figured out why. As team members reported progress, they were looking at George instead of at other team members. They were doing so because they were reporting to George, who they saw as their project manager. Even though they’d been told otherwise, the team members still felt that George was in charge and thought that the Daily Scrum was a meeting at which they would report to him, and not a forum at which they’d synchronize with each other.
Once George realized this, he talked it over with the team members, reinforcing the message that he was there only to facilitate communication among team members. The meeting was for the team, and the team members should make a point of avoiding looking at him. To help the team members adjust to the real purpose of the Daily Scrum, George requested that the team members look at each other when reporting.
Being managed by someone else is totally ingrained in our life and work experience. Parents, teachers, and bosses who teach us to self-manage instead of striving to fulfill their expectations are rare. Why should we expect that when we tell a Team that it is responsible for managing itself, it will know what we are talking about? “Self-management” is just a phrase to them; it isn’t yet something real. A Team requires concrete experience with Scrum before it can truly understand how to manage itself and how to take the responsibility and authority for planning and conducting its own activities. Not only must the ScrumMaster help the Team to acquire this experience, but the ScrumMaster must also do so while overcoming his or her own tendencies to manage the Team. Both the ScrumMaster and the Team have to learn anew how to approach the issue of management.
During a Daily Scrum, I heard one developer report that he needed another developer to check in some code so that he could make some modifications. The good news was that the Team was using a source code management system; the bad news was that some of its engineering practices were apparently poor; otherwise, the code would have been checked in regularly. I asked the team members if I could meet with them after the Daily Scrum.
When we got together, I went over the concept of an increment of potentially shippable product functionality. Each Sprint, the Team commits to turning selected Product Backlog into such an increment. For the functionality to be potentially shippable, it has to be clean. The team members wanted to know what I meant by “clean.” Did I mean free from bugs? I answered in the affirmative and told them that clean code not only has to be free from bugs, but must also adhere to coding standards, have been refactored to remove any duplicate or ill-structured code, contain no clever programming tricks, and be easy to read and understand. Code has to be all of these things for it to be sustainable and maintainable. If code isn’t clean in all of these respects, developing functionality in future Sprints will take more and more time. The code will become more turgid, unreadable, and difficult to debug. I also reminded the team members that Scrum requires transparency. When the Team demonstrates functionality to the Product Owner and stakeholders at the Sprint review, those viewing the functionality have a right to presume that the code is complete, meaning not only that the code is written but also that it is written according to standards, easy to read, refactored, unit tested, harness tested, and even functionality tested. If this isn’t true, the Team isn’t allowed to demonstrate the functionality, because in that case, the viewer’s assumption would be incorrect.
This conversation provided the team with some background. The team members now wanted to know why I was concerned about the code not being checked in. I said that in Scrum, code is usually checked in frequently so as to facilitate frequent builds. A build is a compilation of all of the code in a system or subsystem to validate that all of the code can be pulled together into a clean set of machine-readable instructions. The build is usually followed by automated tests to ensure that all of the functionality works.
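The build-then-test cycle described above can be sketched as a small driver script. This is a minimal illustration under assumed conventions, not Service1st’s actual setup; the `make` commands at the bottom are placeholders for whatever compile and test commands a team actually uses.

```python
# Hypothetical sketch of a daily build gate: compile everything, then run
# the automated test suite. Either step failing fails the whole build.
import subprocess


def daily_build(compile_cmd, test_cmd):
    """Return True only if both compilation and the automated tests pass."""
    for step, cmd in (("compile", compile_cmd), ("test", test_cmd)):
        result = subprocess.run(cmd, shell=True)
        if result.returncode != 0:
            print(f"Build failed at {step} step")
            return False
    print("Build clean: all code compiles and all tests pass")
    return True


if __name__ == "__main__":
    # Placeholder commands; substitute the team's real build and test tools.
    daily_build("make all", "make test")
```

Run daily (or on every check-in), a gate like this is what makes a Team’s claim of “done” inspectable rather than taken on faith.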
The team members looked at me innocently. They told me that, unless there were special circumstances, they built the system only toward the end of the development cycle. Now that they were using Scrum, they planned to start builds around the twenty-second or twenty-third day. Then they would start cleaning everything up. This revelation took me by surprise. Various team members were reporting during the Daily Scrum that certain functionalities were complete, but according to what I was hearing now, nothing had yet been checked back into the source code library, built, and tested. I asked whether this was the case, and a silence suddenly descended on the meeting. Everyone realized that there was a problem. One Team member, Jareesh, wanted to know how he could possibly check in code that frequently. He often kept code checked out for 5 or even 10 days while he was developing functionality. I asked how he could know on a given day that the code he had developed wasn’t in conflict with someone else’s code if he hadn’t checked in his code. He said that if he checked it in frequently, he would have to make such adjustments daily, but that by checking in his code only when it was complete, he had to make such an adjustment only once.
I again reminded the Team that Scrum requires complete transparency. Every day, the team has to synchronize its work so that it knows where it stands. Otherwise, team members might make incorrect assumptions about the completeness and adequacy of their work. They might think that their code is fine, while Jareesh is working on code that negates or diminishes the value of their work. Scrum relies on empirical process control, which in turn is based on frequent inspections and adaptation. If the Team couldn’t inspect its status at least daily, how could it adapt to unforeseen change? How could it know that such change had even occurred? How could the team avoid the traditional death march of pulling everything together at the end of a development cycle— in this case, a Sprint—if it didn’t pull everything together at least daily?
I told the team members that I couldn’t tell them how to develop software. I could question them about the completeness of their code, and I could suggest remedies, but the solution was their responsibility. I could help their ScrumMaster make sure that they were following the Scrum process, however. In this case, this meant that the team members had to devise engineering practices such that every day all of the code that had been written was checked in, built, and tested. Just as at the end of the Sprint, every day this code had to be clean—or else the inspection and adaptation mechanisms of Scrum wouldn’t work.
From this experience, the Team learned about the way Scrum’s inspect and adapt mechanisms necessarily impacted some of its practices. The Team had initially thought that the Daily Scrum was only a short meeting at which the Team would synchronize its work and plan for the coming day. However, the subtle but important aspect of this synchronization is that it requires the Team to know exactly where it is and where it isn’t. Without engineering practices that supported such an assertion, the Team would be unable to synchronize its work. The team members and I spent the next several weeks looking into the engineering practices that they might adopt. I helped team members understand the engineering environment and build processes that are necessary for Scrum to work. I also helped them understand several of the Extreme Programming practices—such as shared code, coding standards, and pair programming—that might help them meet this need.
Engineering excellence for its own sake is a hard sell because it is theoretical, and Teams have real work to do. Scrum, however, requires engineering excellence for its inspect and adapt practices to work. This organization couldn’t realize all of Scrum’s benefits without improving its engineering practices. By the end of the Sprint, the team members were on their way to improving their engineering practices and were working with other teams to ensure that they all had common practices. This task, of course, would never be complete, as improving engineering competence and professionalism is an unending process. However, they were on the right road, and their software, the company, and their careers would benefit from their efforts.
As happens in most organizations starting to use Scrum, many of Service1st’s teams overcommitted themselves for the first Sprint. Rather than using the full time of the first Sprint planning meeting to detail all of the tasks required to build the functionality, the teams shortchanged the effort and went by gut feel. The team members selected Product Backlog that they felt they could reasonably convert to functionality within the Sprint’s 30 days. But once the team members got to work, they found that there was more to do than had been anticipated. At the end of the first Sprint, these teams were able to demonstrate less than they had hoped; in one instance, a team demonstrated largely untested functionality. Their ScrumMaster later reminded them that this broke a Scrum rule and was not to happen again.
Having learned from their experience during the first Sprint, the teams spent much more time planning the second Sprint. They detailed the tasks, reviewed available hours, weighted availability against commitment, and—as a result—undercommitted. The teams had assigned each task more time than was necessary; this led the teams to overestimate the amount of work that would be required to develop the selected functionality. Halfway through the second Sprint, the teams realized that they had time and energy left over. Working with their Product Owners, they selected more top-priority requirements from the Product Backlog and tackled those as well. The second Sprint review was a rousing success. Not only had the teams built functionality, but management was also able to get a clear picture of what the release would look like early in the release cycle. Management was able to provide guidance as the release progressed, rather than waiting until the end of the release cycle.
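The planning arithmetic the teams used in the second Sprint can be sketched as follows. All names and numbers here are hypothetical; the point is only the comparison of the hours the team has available against the hours its task estimates total.

```python
# Sketch of Sprint planning capacity arithmetic (all figures hypothetical):
# compare the team's available hours to the hours its task estimates sum to.
team_availability = {      # focus hours per person over the 30-day Sprint
    "designer": 120,
    "coder_1": 140,
    "coder_2": 140,
    "tester": 130,
    "writer": 110,
}
task_estimates = [16, 24, 8, 40, 32, 24, 60, 48, 36, 20]  # hours per task

available = sum(team_availability.values())
committed = sum(task_estimates)
print(f"available {available}h, committed {committed}h")
if committed > available:
    print("overcommitted: move Product Backlog items out of the Sprint")
else:
    print(f"slack of {available - committed}h: room to pull in more backlog")
```

With numbers like these the team would spot its slack at planning time instead of halfway through the Sprint, which is exactly the situation the Service1st teams found themselves in.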
After the second Sprint review, Hal held a Sprint retrospective meeting. We conducted this retrospective with the entire development organization, including all the teams and their members, with everyone sitting in a large circle. Going around the circle, the team members spoke about what they felt had worked and what needed improvement during the next Sprint. Hal acted as scribe, summarizing everyone’s comments on a whiteboard. Each person identified what had gone right during the Sprint and what he or she would like to improve for the next Sprint.
What was the outcome of the Sprint retrospective? Many at Service1st were pleased to be helping each other; when someone fell behind, other team members jumped in and helped. Some of the coders were delighted to be sitting next to testers because they were able to understand the full set of tests that would later be applied even while they were still in the process of coding. Everyone was glad to be making evident progress on the release so early in the release cycle. One programmer was thrilled because he had gotten to talk to and work with a designer with whom he had hardly exchanged a sentence during his three years of employment at Service1st.
What could use improvement? The team members who were split among several teams didn’t like their situation. They were unable to concentrate on one set of work, and they found it hard to determine how to allocate their time to each team so that they would be available when they were needed. Most teams were also displeased with their cubicles. Even though they had initially thought that they wanted the privacy of cubicles, they eventually began to feel that the walls were getting in the way of their collaboration. All of the teams felt that they lacked the optimum skills to accomplish their work—several teams were short on testers, and several other teams were short on writers.
Everyone then looked at Hal and his managers. How were they going to solve these problems? Whenever possible, I recommend that a team devise its own solutions to its problems; team members are closer to the work than anyone else and can come up with the best solution. We had just gone through the inspection part of an empirical process; what did they want the teams to do to adapt? The natural tendency of managers is to figure out how to do things right and tell the workers to do it that way; teams expect this. But the former managers were now ScrumMasters, and the teams were responsible for their own management. The ScrumMasters were only there to act as advisors or to help the conversation along. Once they realized this, the teams started looking for their own solutions to their problems.
The teams struggled to find overall solutions, but every solution that was proposed would help only in the short term. As work progressed, the Product Backlog would change, and different team compositions would be necessary. I told the teams that they would be hard-pressed to come up with more long-term solutions; any solution they devised would probably be good for only one or two Sprints. Circumstances would probably have changed so much by then that new solutions would be needed. This is one of the great truths of Scrum: constant inspection and adaptation are necessary for successful development.
The teams broke into smaller groups and devised the following improvements: The teams would adjust their workloads so that no one had to be assigned to multiple teams. If they found this to be impossible, the critical resource would serve only in an advisory role on the other team and would commit only to his or her primary team. To address the problem of a shortage of all cross-functional skills, they decided to try helping each other more. The tester, coder, writer, and designer would all take a first pass at the functionality design. Then the tester would flesh out the details as test scripts, while the writer started documenting and the coder started coding. The designer would tie together the results of this work so that when the code was done, the test was ready and the help system was in place for that function. To reduce the overall time required for testing and retesting the functionality, the teams decided to start using test-driven development practices with automated unit testing harnesses.
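Test-driven development, which the teams decided to try, means writing the unit test before the code it exercises and then writing just enough code to make the test pass. A minimal sketch using Python’s standard `unittest` harness follows; the late-fee rule is a hypothetical example, not an actual Service1st feature.

```python
# Minimal test-driven development sketch. In TDD the tests below are
# written first; the function is then written to make them pass.
import unittest


def calculate_late_fee(days_overdue, daily_rate=0.50, cap=10.00):
    """Hypothetical business rule: the fee grows per day but is capped."""
    if days_overdue <= 0:
        return 0.0
    return min(days_overdue * daily_rate, cap)


class LateFeeTest(unittest.TestCase):
    def test_no_fee_when_not_overdue(self):
        self.assertEqual(calculate_late_fee(0), 0.0)

    def test_fee_accrues_per_day(self):
        self.assertEqual(calculate_late_fee(4), 2.0)

    def test_fee_is_capped(self):
        self.assertEqual(calculate_late_fee(100), 10.0)


if __name__ == "__main__":
    unittest.main()
```

A suite like this, run as part of every build, is what lets the tester’s scripted expectations and the coder’s implementation stay synchronized without a separate retest pass at the end of the Sprint.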
The teams weren’t completely satisfied with these solutions; they didn’t think that they would solve all of their problems. Nonetheless, the time allocated for the Sprint retrospective meeting had passed. I told the teams that they would never achieve perfection, no matter how much planning they did. Even though they were closer to the work than their managers had ever been, planning more than 30 days in advance is nearly impossible. However, because the teams were responsible for managing themselves, they were free to make adaptations during the Sprint. We’d inspect how things had gone at the next Sprint retrospective meeting and then make necessary adaptations again.
I keep thinking that I’ve learned the benefits of empirical process control with its reliance on frequent inspection and adaptation to stay on course and deliver the best possible product. But my training in defined management keeps rearing its ugly head. Deep down, I continue to believe it is my responsibility to lay things out perfectly at the beginning and then insist that the plan is adhered to. When adjustment is necessary, I feel that it’s my fault for not getting everything right the first time. But Scrum rules save me from myself. It is not the ScrumMaster’s job to manage the Team. The Team has to learn to manage itself, to constantly adjust its methods in order to optimize its chances of success. The Sprint retrospective provides a time for such inspection and adaptation. As with many other Scrum practices, the Sprint retrospective is time-boxed to stop the Team from spending too much time searching for perfection when no such thing exists in this complex, imperfect world.
A rule of thumb that I’ve adopted over my years of Scrum implementation is this: let the Team figure things out on its own. The ScrumMaster role ensures that this will happen, since the role includes no authority over the Team. The ScrumMaster is responsible for the process and removing impediments but is not responsible for managing the development of functionality. ScrumMasters can help by asking questions and providing advice, but within the guidelines, conventions, and standards of the organization, the Team is responsible for figuring out how to conduct its work. The ScrumMaster’s job is to ensure that the Scrum practices are followed. Working together, the ScrumMaster and the Team shape the development process so that it will bring about the best possible results and won’t let things get too far off track.
We had finished conducting the Sprint review meeting. The ScrumMaster was wrapping up by inviting comments from the stakeholders. Peter, a Service1st founder, was particularly pleased with the progress; he finally knew what he would be getting well before the end of the release development cycle. However, he didn’t like that the Team would sometimes deliver more or less than it had initially estimated it could do. This imprecision left him uneasy, and when he found out that the Team wasn’t recording the actual hours each team member worked on each task in the Sprint Backlog, he was more uneasy. He wanted to know how the team would be able to compare estimated hours to actual hours worked if it wasn’t recording actual hours worked. Such a comparison would give the Team valuable feedback, he felt, and might help it improve its estimates in the future. As Team estimates improved, the Team’s work would be more predictable, and there would be fewer surprises.
Many people love Scrum’s frequent, regular delivery of working functionality, the high morale of the team members, the improved working conditions, and the excellent quality of the systems. But phrases such as “the art of the possible” drive them crazy when they see its implications. Some hit at the heart of the misuse of the word “estimate.” I saw this misuse recently in a board meeting, when a vice president of marketing shouted at the vice president of development, “How can I ever trust you when you never meet your estimates?” To estimate means to form an approximate judgment or opinion of the value of a measure, but that wasn’t the definition that was being used.
Many business relationships are based on contracts and predictability that don’t tolerate the imprecision inherent in an estimate. When a salesperson says that his or her company will deliver a new release that handles a customer problem in June, a contract is formed. The customer believes that the salesperson has adequately understood his or her needs and translated them into requirements and specifications and that functionality solving his or her problem will be delivered with the release in June. The imprecision of the communication from customer to salesperson to marketing to development to a designer to a coder to a tester to a system that does what the customer wants is immense. Combine this imprecision with all of the other imprecise communication of expectations, with the imprecision and truculence of the technology being used, and with the fact that people are doing the work, and any estimate of a release date becomes suspect.
How then do we get anything done? Business and most other processes rely on some degree of predictability, and we’ve just posed a problem that seems to defy predictability. As you’ll remember from the discussion of empirical and defined process control in Chapter 1, the problem was framed as follows:
It is typical to adopt the defined (theoretical) modeling approach when the underlying mechanisms by which a process operates are reasonably well understood. When the process is too complicated for the defined approach, the empirical approach is the appropriate choice.
—B. A. Ogunnaike and W. H. Ray, Process Dynamics, Modeling, and Control (Oxford University Press, 1992), p. 364
Scrum’s implementation of the empirical approach is through inspection and adaptation. All of the stakeholders are brought together every month to inspect progress on the system and to determine whether it meets their perceived needs, addressing their highest priority needs first. To the extent that the process of translating the requirements into the demonstrated increment of functionality doesn’t meet their needs, the process, people, technology, or requirements are adapted to be more effective.
A Team’s first Sprint is the roughest and most imprecise. Often this is the first time the team members have worked together, and certainly this is the first time they have worked together on this problem. The problem described in the Product Backlog might be well known to the Team, but often it requires more understanding. The technology being employed by the Team has sometimes been used before, but often at least one new piece of technology or new release is thrown into the project. As the Team sits in this stew of imprecision and complexity, we ask the Team to commit to how much it can deliver in a 30-day Sprint. We ask the team members to tell us this within the eight-hour time-box of the Sprint planning meeting. Of course their estimate is going to be off!
We accept that the Team’s estimate will be imprecise in the first Sprint. Team members delivering something approximating their commitment in the first Sprint is a testimony to human pride and determination—not the Team’s estimating accuracy. I see this happen over and over. We accept the Team demonstrating more or less than that to which it committed because we know the complexities with which it is wrestling. Left unchecked, these complexities can stop anything from getting done. Scrum is often brought in when projects have failed, and the primary cause of failure is that the projects are floundering in the complexity; they are unable to get going. Scrum rewards action; it rewards a Team simply for delivering something. Scrum asks the Team to tackle the complexity and deliver something. We limit the amount of complexity the Team is asked to tackle by time-boxing the work in a 30-day Sprint. And teams deliver. In my experience, when the imprecision and unpredictability of the effort are accepted, teams are willing to proceed and do their best. The job of the stakeholders is to accept the imprecision. The imprecision is worrisome, but it is inherent in the problem of software development.
How do we deliver releases on time that meet customer needs if the problem domain is this imprecise? Part of the answer is that estimates do get better. As the Team works together, building the requirements into functionality on the selected technology, its estimates improve. The team members unearth more of the unknowns. By the third or fourth Sprint, Teams are able to deliver pretty much what they commit to during the Sprint planning meeting. However, complexities still occur and disrupt this improved accuracy.
The rest of the answer is that the Product Owner and all stakeholders are responsible for figuring out what to do given how much functionality is delivered every Sprint. Given what the Team has delivered, what should the release consist of? Given how quickly or slowly the Team is turning Product Backlog into increments of functionality, when does it make sense to implement or release a set of the functionality? The Product Owner and stakeholders are driving the development cycle by trading off functionality and time. If they execute more Sprints, they can have more functionality. If they execute fewer Sprints, they will have less functionality. Or maybe they can add more Teams and determine how much this will increase the delivery of functionality. All of these decisions are adaptations that the Product Owner and stakeholders are making based on their inspection of what the Team actually does, not what it estimates it can do.
People are very complex, and often they don’t do what we want them to do. I remember a situation at a large computer manufacturer that sold a very complicated high-speed printing system. Although the printing system could print reports very quickly, it kept breaking down. Customer engineers (CEs) worked many hours at customer sites keeping the printing systems working and the customers happy. But the computer manufacturer’s management wasn’t happy. The number of hours that the CEs were working was too costly, and the printing system division was losing money. To remedy the problem, management implemented new measurements: CEs were given bonuses based on how little time they spent repairing equipment. But to ensure that this didn’t impact customer satisfaction, the CE bonuses also depended on customer satisfaction. After implementing this new bonus system, management was pleased that the cost of CEs working on problems dropped dramatically, and customer satisfaction stayed high. Several months went by before someone in management noticed that the cost of parts had skyrocketed during this time. Upon investigation, it turned out that the CEs were stocking entire subsystems at each customer site. Rather than fixing problems and repairing equipment, they would immediately replace anything that didn’t work with a new subsystem.
People in software development teams are the same. When management tells them that it wants them to improve the accuracy of their estimates, what they hear is that management doesn’t want any surprises. Some organizations try to improve estimates by first building databases of actual and estimated hours worked and then deriving statistics of the variances. For example, such statistics might show that a team worked 24 percent more hours than it estimated across four Sprints. Management naturally sees this as a problem. Management might then create a system of rewards if the team can reduce this imprecision. Management might tell the team that part of its performance review will depend on improving this variance to less than 20 percent. Once this target has been established, I guarantee that the team members will meet it because their salaries depend on improving this metric. Their success will cause management to view them favorably and perhaps promote them or give them more interesting work. Regardless, good things will come if the team members do what management wants. The typical way that team members then improve estimating accuracy is to drop quality or to implement the functionality with less quality. They might stop refactoring out duplicate code. They might not follow standards. They might implement a control that is less difficult but that isn’t as user friendly. They might not rationalize the database. None of these actions are visible to management. All of these tricks are employed to meet the measurements and for the team members to do well in management’s eyes.
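The variance statistic described above is simple arithmetic. Here is a sketch, with illustrative numbers chosen only to reproduce the 24 percent figure in the text; no real team’s data is implied.

```python
# Sketch of the estimate-variance metric described above: the percentage
# by which actual hours exceeded estimated hours across several Sprints.
def estimate_variance(estimated, actual):
    """Percent overage of actual hours relative to estimated hours."""
    return (sum(actual) - sum(estimated)) / sum(estimated) * 100


estimated = [400, 420, 410, 430]  # hours estimated, Sprints 1 through 4
actual = [496, 520, 508, 533]     # hours actually worked (illustrative)

print(f"{estimate_variance(estimated, actual):.0f}% over estimate")
```

The arithmetic is easy; the danger lies in rewarding the number, since a team can shrink it invisibly by cutting quality rather than by estimating better.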
The problem I’ve described here is called “suboptimal measurement.” If you focus on improving only one part of a system, you might cause another part of the system to go haywire. The overall result is then worse than before. However, if you measure the right things, improvements can be made. In this case, increasing the accuracy of estimating by comparing actual hours worked to estimated hours worked is a suboptimal measurement. Comparing what the Team actually produces to a desired release date and release goals is a much more appropriate measurement.
For inspection and adaptation to work, we must know what we are inspecting. If we tell a Team that it can only demonstrate quality and actual working functionality, the Team will comply, and we will know real progress on delivering a release. If we tell a Team that we want it to improve the accuracy of its estimates, it will improve this metric regardless of the shortcuts it takes. Scrum asks management to focus on the overall delivery of functionality and eschew suboptimal measurements.
Peter is on track in wanting to improve estimates. To his surprise, they will improve naturally, Sprint by Sprint, as the Team becomes more competent in dealing with the technology, the business domain, and each other. What Peter needs to remember is the overall measurement—delivering the best system possible on the most appropriate date, and with excellent quality. All other measurements must be carefully implemented so that they support this overall measurement rather than undercut it. We must always factor into our measurement systems an awareness of the innate human desire to please, often regardless of the consequences.
I’ve mentioned many times already that Scrum is difficult. It requires frequent inspection and adaptation because these are the only known control mechanisms for complex problems. Management finally starts to understand and love Scrum when it accepts that this hard work is part and parcel of solving complex problems.
The first tour I took of the engineering space at Service1st was downright depressing. People were either housed in offices with closed doors or exiled to cubicles. Most people were alone in their offices or cubicles, often staring at a computer monitor. There was no conversation, no hum of activity, no feeling of a group of people undertaking work that they were excited to do. A lethal arrangement of space and walls had isolated the employees of Service1st.
The development process at Service1st, a standard waterfall approach with all of the attendant documentation, also isolated the company’s employees. Designers designed and then wrote design documents. Programmers read the design documents and then programmed; they were allowed to ask the designers questions, if they absolutely needed to, but they were discouraged from asking too many, as this would interrupt the next set of design work. When the programmer had finished, he or she would give the specification document and code to a tester. The tester would try to find things wrong with the code, documenting any deficiencies and failures in a bug database. The programmer would inspect the bug database and fix errors; programmers could question the testers if they didn’t understand the bug report, but too much interruption would disrupt the testing process, so this too was frowned upon.
The isolation was a consequence of the development process at Service1st, which minimized human interaction and face-to-face communication. The process demanded written communication between people who needed high-bandwidth communication to minimize misunderstandings and the consequent errors. People were isolated not only physically but also in their work and interactions.
Everything felt different by the time the second Sprint review rolled around, and it was clear that there was positive change afoot by the subsequent Sprint retrospective. People were talking and sharing; laughter and lively conversation filled the workspace. I heard detailed questions and responses. I heard a buzz fill the entire floor as people engaged with each other, working together to understand and solve problems. A common theme during the Sprint retrospective was how much the team members enjoyed working on this project. You could see it in the team members’ body language. Everyone was relaxed, bantering, comfortable with being themselves around each other. The team constituted a community unto itself.
It is now a real pleasure to visit Service1st. I walk in and people greet me, as they also greet each other. Hallways are places for conversations, not just paths for going from your car to your cubicle. Plans are already under way to rearrange and ultimately to demolish the cubicles. Employees had previously treasured their walls and the privacy they afforded. Hal changed the process and got a new neighborhood. He changed the process and got people who look forward to showing up in the morning to work with their friends and peers.