The Biggest Lie in Corporate America Is Phase 2
Software development is a frenzied decathlon of activity, constantly pressed on all sides by resource constraints, budgets and deadlines. Many items originally scoped for delivery in the first release — especially those scheduled late in the project — end up not making it to customers. Good intentions notwithstanding, Phase 2 is typically the mythical future world where those "things we didn't get to" in Phase 1 go to die. Few members of the team are harder hit by this reality than the user experience and design staff. Their holistic visions of product and service experiences are put aside in favor of significantly thinner software.
Waterfall development lays the wrong foundation
In a waterfall development cycle, this phenomenon is all too common. The team defines, designs, develops, tests, finds a bunch of things wrong, and ships the product anyway. Parties are thrown. T-shirts are printed, and the team moves on to the next project, leaving Phase 2 a distant memory.
The design team is trained over time to believe that if a feature doesn't get into Phase 1, it will not get built. When transitioning from waterfall to agile development, the design team brings these preconceptions with it. But the fundamental difference between the two approaches is that agile development actually is an iterative process. Each sprint is essentially a "phase," and many sprints make up a project. As design teams make the transition, one of the biggest hurdles is convincing them they don't need to cram every design element and pixel-perfect rendition into the first sprint. We tell them that each sprint iterates and builds upon the previous one to incrementally improve the product based on customer usage and feedback. But they've been burned too many times, and they don't believe us. They continue to push for design perfection early in the process, causing scope creep, delays, and team conflict.
Successful agile teams focus on problem statements
How, then, do we begin to acclimate designers to an agile process and get them over this hurdle? The answer lies in focusing the entire team around a problem statement.
Most software development teams today are feature-focused. Their primary task is to build features. Somewhere, far upstream from the individual contributors, someone defined a business problem and then a solution to that problem. The team was then tasked with implementing that solution. Instead, teams need to be tasked with problem statements. They need to be presented with the business problem and a measurable definition of a successful outcome. The team owns the solution because they conceived it. This holds especially true for the design team. The freedom to experiment with solutions, validate design ideas, and refine successful designs results in a rapid, iterative process. A few weeks of running through short-cycle feedback loops, launches, and revisions begins to break the team of its Phase-2 phobias.
Solving problems strengthens teams
I led a cross-functional agile team of developers, designers, product managers and QA specialists tasked with improving the communication stream between recruiters and jobseekers on a thriving online job board. We assigned this team a very specific, constrained problem statement: jobseekers, on one side of the ecosystem, were not responding to in-product communications from recruiters and hiring managers on the other side. We had a benchmarked usage statistic showing response rates in the low double digits, and we knew from previous research that increasing this response rate would increase product usage and satisfaction. The one other constraint we put on the team was a time limit: three months. If they could show significant progress in that time frame but had not yet reached the target response rate, they would be granted more time to continue solving the problem.
The team was free to solve the problem in any way they deemed worthy. They worked in two-week sprints and used the response rate metric as a weathervane to determine if the solutions they were implementing were heading in the right direction. The designers on the team had two weeks to design each sprint's solution (or portion thereof). As each sprint went by, new solutions were deployed or updated while the response rate continued to creep upward. A constant stream of qualitative and quantitative feedback was fed to the team and each two-week cycle provided the opportunity for course correction, improvement, or evolution. As the project went on, the experiences the team built grew more robust and more successful.
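To make that weathervane concrete, here is a minimal sketch of the kind of per-sprint metric check such a team might rely on. Every number and name in it (SPRINT_MESSAGES, SPRINT_REPLIES, TARGET_RATE) is hypothetical; the actual instrumentation behind the project's response-rate metric isn't described here.

```python
# Hypothetical sketch: compute a response rate per two-week sprint and
# check whether the trend is moving toward an illustrative target.
# None of these figures come from the actual project.

SPRINT_MESSAGES = [12000, 11800, 12500, 13100]  # recruiter messages sent per sprint
SPRINT_REPLIES = [1450, 1600, 1900, 2300]       # jobseeker replies per sprint
TARGET_RATE = 0.30                              # illustrative target response rate

def response_rate(replies: int, messages: int) -> float:
    """Fraction of recruiter messages that received a jobseeker reply."""
    return replies / messages if messages else 0.0

rates = [response_rate(r, m) for r, m in zip(SPRINT_REPLIES, SPRINT_MESSAGES)]
for sprint, rate in enumerate(rates, start=1):
    if sprint == 1:
        trend = "baseline"
    else:
        trend = "up" if rate > rates[sprint - 2] else "down"
    status = "target met" if rate >= TARGET_RATE else "keep iterating"
    print(f"Sprint {sprint}: {rate:.1%} ({trend}, {status})")
```

A check like this, run at the end of each sprint, is what lets the metric act as a course-correction signal rather than a report card delivered after the project ends.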
The team, including the most skeptical designers, quickly learned that Phase 2 did in fact exist — as did Phase 3, 4 and beyond. Features that did not make it into the current sprint were prioritized into the next sprint. These prioritizations were not driven by the need to complete a finite set of tasks. That was not our definition of success. Instead, they were driven by direct insight into the success and failure of the solutions the team was able to create in each sprint. Successful ideas were evolved and expanded while failed efforts were quickly removed or improved upon. Course correction was possible.
As the team crept closer to the target response rate, they had built not only a consistent cadence of build-measure-learn (Eric Ries's Lean Startup validation cycle) but also a more collaborative process, one that used data, rather than subjective executive decree or the completion of an arbitrary set of features, to determine their work and their success.