Not Everything Is Visible at Service1st

Once again, we visit Service1st for instruction in Scrum. Scrum was being used to develop release 9.0 of Service1st’s customer service software. The team committed to developing a lot of functionality in its very first Sprint. Irene, the ScrumMaster, tried to get the team to tone down its commitment, but the team insisted that it could complete all of the Product Backlog that it had selected. At the Sprint review, the team successfully demonstrated all of the functionality as well as several additional features. Management was ecstatic: Scrum was wonderful; the team was wonderful; everything was wonderful. Management thought that releases could now occur more frequently or contain more functionality.

But I was suspicious. During the demonstration, the team members had followed a scripted demonstration and had seemed reluctant to stray from the script. The reason was probably that they were operating under time constraints, but what if this wasn’t the case? In wartime, safe paths through minefields are marked with white lines. If you stay within the white lines, you are OK. If you wander outside the white lines, no one knows what might happen! The demonstration had seemed to be scripted to operate within white lines. I stayed after the Sprint review with several team members and exercised the functionality myself. The system encountered various errors, stack overflows, traces, and severe crashes whenever I departed from the script, straying outside the white lines.

Upon closer inspection, the team’s apparent high productivity turned out to be the result of not having fully tested the functionality and not having fixed the bugs that testing did find. The team was so excited about presenting that it forgot Scrum’s rule of sashimi: Every increment of potentially shippable product functionality that is demonstrated at the Sprint review must be complete. It must contain all analysis, design, coding, testing, documentation, and anything else appropriate for the application—a complete slice of the product.

I suggested to Irene that she not let the team proceed with any new functionality until it had really completed the functionality it had already demonstrated. Incomplete testing and bug fixing should be put on the Product Backlog as uncompleted work. The code was fresh in the team’s mind; debugging it in the next Sprint would take less time now than it would later. Making the team debug the code immediately would also reinforce the message that only completed work was acceptable. But the team rebelled. It feared that the next Sprint review would be humiliating. In the first Sprint review, it had come off as SuperTeam. In the next Sprint review, it would look like Elmer Fudd with nothing new to demonstrate. How could it demonstrate the same functionality as the prior Sprint review, adding only that the functionality now worked?

Scrum can’t be learned overnight. The team hadn’t realized the implications of the rule of sashimi: that every increment must consist of potentially shippable functionality, completely tested and documented. Now it understood this. But should the team be punished for its ignorance? Should the team have to look incompetent in front of management? Irene wisely relented, but only a little bit. After some scheming, the team and Product Owner decided that the team would also build a piece of workflow functionality that would show the previously demonstrated functionality working together. Although this wasn’t much additional work, the demonstration of it would save the team’s pride. After all, the team had done a lot of work.

The team came to the first Daily Scrum of the new Sprint. In sequence, the team members reported that they were either testing or fixing bugs. The meeting was complete in five minutes, but no useful information had been exchanged. Of course they were working on bugs. But which bugs were they working on? Without each team member clearly identifying what he or she was working on, the Daily Scrum was useless. No real commitments were being made and checked on. Nobody knew the areas of code their teammates were looking at, so they could not offer advice or help. Basically, the team members had reported that they had worked yesterday and were planning on working more today.

The Reality

The team had overachieved on coding by underachieving on testing. It had hacked together a demonstration that worked only in the lab and at the presentation. The functionality certainly wasn’t sashimi. If the Product Owner had called for the code’s release after the Sprint review, a lot more work would have been required before everything was nailed down. In traditional projects, the team spends months analyzing and designing without producing anything of interest to stakeholders. The Service1st team had done the reverse, demonstrating more functionality than had been completed. The stakeholders now believed that they were further along than they really were. They were excited about a situation that didn’t really exist!

The Agile Manifesto is a statement of values and principles that describe the various agile processes, of which Scrum is one. The Agile Manifesto was developed in February 2001; more information is available at agilemanifesto.org. The seventh of its twelve principles is, “Working software is the primary measure of progress.” When a stakeholder or the Product Owner sees a piece of functionality demonstrated, he or she can assume that it is complete. The Product Owner bases his or her view of progress on this belief. When any increment is not complete, the unfinished work must be identified and returned to the Product Backlog.

Irene was a newly minted ScrumMaster, having received her certification just the previous month. As such, it was understandable that she had overlooked a key symptom of trouble. The team had refused to keep the Sprint Backlog up-to-date throughout the Sprint. After the Sprint planning meeting, the Sprint Backlog went untouched. When a team is hacking together functionality, it often doesn’t spend much time on analysis, design, and testing. It codes, codes, and then codes again. It then cobbles everything together with chewing gum for a demonstration. When the team is developing software coherently, it plans and allocates all of the work necessary to build a complete increment of functionality. The Sprint Backlog should reflect this attention to detail.

The next Sprint planning meeting took over a day. Irene wouldn’t let the team proceed with the Sprint until it had a detailed Sprint Backlog. Irene watched while the team detailed the work needed to define the new workflow functionality. She then made the team commit to updating the Sprint Backlog every day before leaving work.

You would think that the team would have learned everything there is to know about self-management from the first Sprint. But the excitement of Scrum can lead to overlooking the hard parts of it. Managing yourself is hard; it’s much easier, although less satisfying, to let someone else manage you. The Sprint Backlog that the team developed during the Sprint planning meeting consisted of two types of work. For each piece of functionality demonstrated in the previous Sprint, there was an entry to test it and then fix any bugs that were found. The tasks to build the new workflow functionality composed the rest of the Sprint Backlog. This work was laid out and then reported on in detail. However, the test and debug tasks were abstracted and summarized to such an extent that the number of hours remaining couldn’t be determined. Was the Sprint behind or ahead of schedule? Nobody knew. The test and debug work never burned down because the amount of work remaining was unknown.
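The team’s reporting problem can be sketched in code. The following is a minimal, hypothetical model of a Sprint Backlog (the task names and hours are invented, not Service1st’s actual data): when “test and debug” entries are summarized without estimates, the total hours remaining cannot be computed, and no burndown is possible.

```python
# Hypothetical Sprint Backlog: (task, hours remaining) pairs.
# A None estimate stands in for the team's abstracted "test and debug" work.

def hours_remaining(backlog):
    """Sum remaining hours, or return None if any task is unestimated,
    since a burndown cannot be computed from unknown work."""
    total = 0
    for task, hours in backlog:
        if hours is None:  # summarized entry with no estimate
            return None
        total += hours
    return total

backlog = [
    ("Define workflow screens", 12),
    ("Code workflow engine", 16),
    ("Test and fix bugs", None),  # abstracted; no hours recorded
]

print(hours_remaining(backlog))  # None: nobody knows if the Sprint is on track
```

One unestimated task is enough to poison the total, which is exactly why the test and debug work never burned down.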

Irene met with the team and described the trouble she had inspecting its progress. She told the team that Scrum works only when everything is visible and everyone can inspect progress and recommend adaptations. The team members were reporting that they were testing for and fixing bugs, but the information they provided wasn’t detailed enough to be useful. When one team member reported on his or her work, the other team members didn’t know whether they should help. They couldn’t assess whether they were working on a similar problem or were even in the same area of functionality or code. The number of bugs detected and fixed couldn’t be ascertained.

Irene asked that the team members report by the specific test employed and the specific bugs found. She asked that a test be identified for every aspect of functionality previously coded. These tests should then be entered into the Sprint Backlog. She also asked the team to create testing and bug metrics. She wanted the team to know the number of tests employed, bugs uncovered, and bugs fixed, and she wanted the team to understand how many bugs would remain unfixed at the end of the Sprint. She wanted the team to know the quality of the product that it was building. She was teaching a team that previously had counted on a quality assurance (QA) group to test the product to instead take on this responsibility itself.

Prior to the next Daily Scrum, Irene posted the Sprint Backlog on the wall of the team room. When each team member reported on his or her specific tests and bugs, Irene checked that they were listed on the Sprint Backlog. She did this at every Daily Scrum going forward, ensuring that the team managed its work at the level of specificity necessary to know what was going on. The team and Irene were able to monitor the bug trends. Was the bug count going up or down? Were any new bugs being introduced as old ones were being fixed?
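The trend question Irene and the team were asking can be illustrated with a small sketch. The daily counts below are invented for illustration; only the bookkeeping is the point.

```python
# Hypothetical daily bug metrics: (bugs found, bugs fixed) per Daily Scrum.
# The resulting trend shows whether open bugs are rising or falling.

def open_bug_trend(daily_counts, start_open=0):
    """Return the number of open bugs after each day."""
    trend = []
    open_bugs = start_open
    for found, fixed in daily_counts:
        open_bugs += found - fixed
        trend.append(open_bugs)
    return trend

days = [(5, 2), (4, 6), (2, 3)]  # invented counts
print(open_bug_trend(days))  # [3, 1, 0]: the open-bug count is falling
```

A falling trend is the good sign; a rising one warns that new bugs are being introduced faster than old ones are fixed.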

Reporting at the Daily Scrum has to be specific. Commitments are real only if they can be assessed. In the absence of specificity, Irene’s team was hiding behind the umbrella phrase of “bug-fixing.” The team members couldn’t plan or synchronize their work.

Lessons Learned

The team was excited about no longer being under the constraints of someone else’s plan. It was excited about being able to get to the coding. It was excited about the opportunity to prove how much it could do. In sum, all of this excitement led the team to forget solid engineering practices.

Irene taught the team how to manage itself. It had to understand all aspects of what it was doing and frequently correlate its activities in order to deliver a completed set of functionality. Self-organizing teams aren’t unmanaged teams. To manage itself, a team must have a plan and report against that plan. The details of the plan and the reporting must be specific enough to be meaningful. The team has to be able to synchronize its work.

A Scrum team is self-organizing. It assumes responsibility for planning its own work. The Sprint Backlog is the visible manifestation of the team fulfilling this responsibility. I’ve seen teams of three very skilled engineers not use a Sprint Backlog, delivering solid functionality from plans they kept in their heads. However, most teams need to think through what they are doing and write it down so that team members can refer back to a plan as they work. The Daily Scrum synchronizes everyone’s work only if the work has been thought through. Otherwise, the Daily Scrum is useless.

The ScrumMaster has to teach, enforce, and reinforce the rule of sashimi. Sometimes teams try to cut corners. Sometimes teams are so used to waterfall development processes that they view testing as someone else’s problem. The mechanism for detecting whether the team is doing all necessary work is the Sprint Backlog. The ScrumMaster ensures that testing activities are also separately delineated in the Sprint Backlog until the team understands the meaning of the word “complete.” Once the team understands that the process of developing functionality includes analysis, design, coding, testing, and documentation, all of these unique waterfall activities can be collapsed into one Sprint Backlog task.