The Fallacy of Status
Udi Dahan, Haifa, Israel
After a successful first project, I confidently embarked on my second. This project was larger, more strategic to my employer, and required me to manage a multi-disciplinary team. I was sure that the skills that had served me the first time around wouldn't fail me. Interestingly enough, it was my trust in my team's status reports that was my eventual undoing.
About two months into the project, my infrastructure team lead confessed, "It turns out some of the architectural assumptions we made were unfounded." However, he assured me, "We'll be back on track by the end of the month." Despite his reassurances, and the contingency buffers I had in place, I couldn't dismiss the sense that something was wrong.
At the end of the month, I followed up with the same team lead. He showed me how the refactoring work had been completed on schedule and how the developers were all set to hit their targets for the coming month. When I sat down with my integration team lead, she told me that everything looked good from her vantage point, too. Modules were complying with their specifications, each had been sufficiently tested, and all the layers of the architecture had tested stable enough for the first integration.
After a slightly bumpy first integration (as many of them are) and a regular quality assurance cycle, I was astounded to discover that almost every use case had critical bugs in it. We were almost five months into the 15-month schedule, but nowhere near a third of the way through our project work.
I remained certain that all the team members would pull together to finish on time. One month before we were supposed to go live, everyone was reporting that their work was at least 95% done. However, when I brought in one of our real users to try the system out, she told me in no uncertain terms, "This is broken in so many ways, I couldn't stand working with anything like it." That didn't sound like a project that was 95% done to me.
An experienced project manager, Patrick, was brought in to "save the day." While Patrick, the project savior (and today, my mentor), was getting things back on track, he explained to me the fallacy of status: the customer defines "done," not a status report.
The fact that the database team reported 95% completion had no real bearing on whether our users could use what we had developed. Even if the status reports looked perfect, they were giving an incorrect view of the project's progress. In short, the project was doomed practically from day one, because what I was tracking wasn't mapped to the goals of the project.
I finally understood why I needed to work with users throughout the project, having them evaluate each feature as it was created to be sure it added customer-perceived value. That way, the project status reports, converted to earned-value reports, would show the true percentage of value actually delivered rather than only how much work remained.