THE PERFECT agile process for new product innovation achieves two things: everyone is intimately involved in creating something for a well-known archetypal user, and everyone treats the next engineering feat and market risk as a hypothesis that merits experimentation and discovery.
Your agile process doesn’t do that? I’m not shocked.
If you aren’t sure – how happy are you with answers to questions like:
- How did you prioritize this feature over that one?
- Do you really think people want this?
- Are you sure this is what they asked for?
Most agile implementations focus exclusively on maximizing the efficiency of 40 hours of developer labor-time. That’s like optimizing a car factory for the performance of a single step in the assembly flow – absolutely counter-productive.
Here’s the perfect Work-In-Progress flow:
Idea, hypothesis, commitment, construction, iteration, validation
To further smack the face of convention, I’ll add two claims:
You don’t need testers.
You don’t need estimation.
What we do need is the discipline to know we are building the right thing before we build it, then to test that assumption objectively. That means occasional A/B releases, cohort analysis, and a clear understanding of the single most important metric for each product. This agile process combines my experience with the concepts discussed in The Lean Startup by Eric Ries, and results in a Scrumban flow best suited to building a culture of innovation.
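Before breaking the flow down, here’s what “test that assumption objectively” can look like for an A/B release. A minimal sketch using a two-proportion z-test – the conversion numbers are invented for the example:

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: did variant B convert differently than A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Invented numbers: 480/4000 conversions on A, 552/4000 on B.
z = two_proportion_z(480, 4000, 552, 4000)
print(f"z = {z:.2f}")  # |z| > 1.96 is significant at the 95% level
```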
Let’s break it down:
1 – Idea (Pains to Solve)
The idea backlog is every feature and improvement we feel reasonably certain will make the product more successful. This is where we discuss the future. These start as undocumented epics and end up split into user stories that can be objectively prioritized.
2 – Hypothesis
An idea gets to the hypothesis stage once we have collaboratively estimated the impact of the feature. This requires metrics and forecasting. The work we do needs to be traced via innovation accounting. If we don’t have an objective, realistic impact number that lets us know whether we built the right thing, we won’t be able to learn properly. We also need to small-batch-ify the features into releasable stories small enough to be code complete in around two days – the larger the story, the less accurate our predictions. Think of the first person who ever invented cake: changing small variables one at a time is what makes the scientific method work, even in an art like baking.
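To make a hypothesis concrete, here’s a minimal sketch of what a card in this column might carry. The fields are my own illustration, not a standard:

```python
from dataclasses import dataclass

@dataclass
class HypothesisCard:
    """One small-batch story with a falsifiable impact prediction."""
    story: str          # the releasable story, roughly two days of work
    metric: str         # the one metric this story should move
    baseline: float     # where the metric sits today
    predicted: float    # where we expect it to land after release
    horizon_days: int   # how long we wait before judging the result

card = HypothesisCard(
    story="One-click reorder from order history",
    metric="30-day repeat purchase rate",
    baseline=0.18,
    predicted=0.22,
    horizon_days=14,
)
```

If you can’t fill in baseline and predicted, the card isn’t a hypothesis yet – it’s still an idea.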
3 – Commitment
Pretend there are 7 cards in the Hypothesis column. Because we have numbers that relate to value and certainty, we can easily prioritize our backlog and complete “Sprint Planning” for the highest-impact cards. This stage looks like a hackathon: whiteboarding, pair-photoshopping, research, and prototyping all happen here. When we are confident we are building the right thing, and we know how to build it, we commit to our users what is coming next for them.
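Because every card carries numbers for value and certainty, prioritization can be mechanical. A minimal sketch, assuming a simple expected-value score (impact times certainty – the formula is my own, not a prescription):

```python
# Invented cards: (story, predicted impact, certainty from 0 to 1)
cards = [
    ("One-click reorder", 0.04, 0.7),
    ("Dark mode",         0.01, 0.9),
    ("Guest checkout",    0.06, 0.5),
]

# Sort by expected value, highest-impact bets first.
for story, impact, certainty in sorted(cards, key=lambda c: c[1] * c[2], reverse=True):
    print(f"{story}: expected value = {impact * certainty:.3f}")
```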
4 – Construction
We pull cards into construction only as we start building them. We are creating and coding, and as new ideas come up we answer two questions: Is this realization essential to our hypothesis? Can it wait until later? We want the construction process to be collaborative and insightful.
5 – Iteration
When we’re done coding, we review the working software as a group. This isn’t manual testing; it’s more along the lines of “two likes and a dislike”. If we see a defect or have an improvement, we ask the same two questions for each item: Is this realization essential to our hypothesis? Can it wait until later? We want to remove “contaminants” from our experiment at this stage without changing the hypothesis. For example, if a different shade of blue will improve readability so much that NOT changing it would invalidate the hypothesis, let’s change it. If we think the user would like to choose the shade of blue, that’s a new idea, and it’s added to the backlog.
6 – Validation
Our definition of done requires that we validate the hypothesis. This is done with product metrics, social listening, and customer interviewing. If we prove our hypothesis, we should ask “Can we do more?” and brainstorm how to continue reaping the benefits of the new feature. If we disprove our hypothesis, we know we didn’t build the right thing, and our retrospective (and customer interviews) should ask “Why didn’t this feature achieve {hypothesis xyz}?” and “What did the user want instead?”
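The metrics half of validation can be just as mechanical as prioritization. A minimal sketch, assuming the card recorded a baseline and a prediction (the labels are illustrative):

```python
def validate(baseline, predicted, observed):
    """Compare the observed metric against the card's prediction."""
    expected_lift = predicted - baseline
    actual_lift = observed - baseline
    if actual_lift >= expected_lift:
        return "proven"      # ask: can we do more?
    if actual_lift > 0:
        return "partial"     # the metric moved, but less than forecast
    return "disproven"       # retro: why not? what did the user want?

print(validate(baseline=0.18, predicted=0.22, observed=0.23))  # proven
```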
What we won’t do:
We are purposefully avoiding what I call “guild-based agile” – a process flow that encourages functionally-distinct steps and hand-offs. I don’t want engineers optimizing their coding output, designers optimizing their number of deliverables, or analysts and testers optimizing their own little areas of concern. I know that mini-waterfall makes you feel more productive. I’ll help you show your work to the world in a way that makes that urge irrelevant. I want well-rounded creators who build a relationship with the people who use our product. The number of features added is less important than continuous innovation and deliberate, empirical product development.
Now – Let’s challenge my claims:
You don’t need testers – In guild-based agile, where the step each card follows relates to a functional division, testers are the natural last step. Here’s a flow I’d never use – Requirements, Creative, Estimation, Coding, Testing, Done. The goal of a push-based workflow is for each step to maximize its own productivity. For a creative person, this means designing a large batch of screens all at once, iterating on their imperfections, shipping them off, and avoiding distractions from developers who don’t understand the designs. The same is true for each step in this dysfunctional process.
The presence of a tester is the symptom of immense process waste. A collaborative team creates with ideas, imagery, and code simultaneously. Collective genius around a goal – We will solve pain X for user Y by building Z – is more valuable than individual contribution. Guild-based agile encourages alienation and destroys creativity. As the batch size grows, getting things done quickly becomes prioritized over questioning “Are you sure we built the right thing?”
A team where every function encourages the others with real-time feedback, stays focused on the user, reviews the working software together, and takes pride in the product is far more effective.
You don’t need estimation – Remember when I said all the cards should be about two days of work or less? Remember how a card isn’t done until market data is reviewed and the card’s hypothesis is validated? You don’t need estimation, because you can simply count the cards and forecast based on cycle time: from the moment we have an actionable hypothesis, how long does it take to know whether it’s proven or disproven? The WIP limit depends on the size of the team and should be adapted empirically. The moment someone feels they don’t have a real sense of empathy for the user of a story, or it’s unclear whether you’ll know if your work is valuable – slow down!
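Here’s a minimal sketch of that forecast, assuming each card records when its hypothesis became actionable and when it was validated. With a stable WIP limit, Little’s Law turns average cycle time into throughput:

```python
from datetime import date
from statistics import mean

# Invented history: (hypothesis actionable, validated) per finished card.
finished = [
    (date(2024, 3, 1), date(2024, 3, 8)),
    (date(2024, 3, 4), date(2024, 3, 12)),
    (date(2024, 3, 7), date(2024, 3, 13)),
]

cycle_times = [(done - start).days for start, done in finished]
print(f"average cycle time: {mean(cycle_times):.1f} days")

# Little's Law: throughput = WIP / cycle time.
wip_limit = 4
throughput = wip_limit / mean(cycle_times)   # cards finished per day
committed = 7
print(f"~{committed / throughput:.0f} days to clear {committed} cards")
```

No estimates required – just timestamps you already have.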
That discipline holds for every function, no matter how specialized the cross-functional team becomes. Don’t make more designs once the hypothesis column is full; do marketing and surveys on what you have; walk through each build with the developer to discuss new ideas. If the validation column is full, don’t code anything else – confirm the right things were built before building more. I have a hypothesis, of course, but I’ll share that later.