What you need: Complexity Mindset

Organizations that are too rigid cannot adapt to changing economic conditions, demand, prices, interest rates, shocks, and crises. Problematically, an enterprise can grow quite large while preserving its rigidity in the medium run, delaying the need to adapt until the industry as a whole faces a crisis (airlines, automobiles, etc.).

Thus, my first book focused heavily on the need to build and cultivate tension within an organization to ensure that continuous experimentation preserves adaptive complexity.

One example that humanity has struggled to grasp is the adaptive complexity of a system that relies on individuals who must be a mix of selfish and altruistic. Standard economics is largely built on the axiom of rational self-interest: that individuals have static preferences and will optimize marginal returns on marginal investment. For instance, when I have $10 to spend, I will maximize the utility or value of my spending, based on information, rational self-interest, and prices.

Behavioral economics, however, has shown us that the choices we make are often quasi-rational at best. Any parent who has gone grocery shopping with a young child knows that gut feel, intuition, snap decisions, and a desire to get the complex decisions of a stressful shopping trip over with lead to less-than-perfect spending decisions. Anecdotes aside, research has documented a growing list of fallacies and biases that are consistent across gender, race, culture, and intelligence – from anchoring our conclusions on whatever information we receive first, to an optimism bias that our success is more likely than the average success rate.

As Hoff and Stiglitz review, there is strong evidence for treating individual actors within a system as encultured actors, who depend partly on social context and expectations to determine the best decision.

We can each fit the standard model of classic economics during “slow-thinking”. Under observation, we act rationally self-interested as long as we have perfect information and sufficient time for deliberation. The rest of the time, we are swept up in our action-packed schedules, engaged in thousands of quasi-rational decisions. We copy what has been successful for others, act agreeable when the impact of a decision is unclear, and rely on our past experiences to maintain the habits that have worked so far. This “fast-thinking” is not static, however. The research is very clear that we can be manipulated in very subtle ways, toward selfishness, mistrust, polarity, and dishonesty. Likewise, with enough changes to social context, education, and time, the weak can be strong, the forgotten can be outspoken, the rigid can grow again.

So, who would you like to be?

While I will present the science behind complexity thinking in lean-agile organization development in future posts, the real question that I am left with is, “Who would I like to be; who should I hope to become?” Naturally, there is no perfect answer to this. Personal identity is a question of strategy, in its own way; you can only choose so many vocations, specialties, social contexts, and roles. To be enormously successful in one arena is a trade-off against other opportunities.

One thing, however, is entirely clear. To the extent that our quasi-rational behavior allows us to rely on several “identities” based on mental schema, role models, behavioral narrative, and social norms, isolation within any one institution and ideology is a dangerous prison. We are only free to determine our own path to the extent we know those paths exist. We can only adopt the best mental schema for an unknown decision by having as many modes of thinking as possible at our disposal. We can only carve out the best self-identity as the exposure to new options, cultures, and role models permit.

Frankly, if we are all very honest, we find it easiest – because it is simple, familiar, and less scary – to remain stuck in the simplistic modes of thinking we developed as children. Good-baby/Bad-baby, Good-mommy/Bad-mommy, Good-worker/Bad-worker… and yet, when we view the world as a complex adaptive system, we see that the health of the forest is far more complex than can be observed tree by tree. Sometimes we need mother-soldiers, brother-florists, teacher-friends, and so on. So my call to action is not to choose a single destiny and blindly pursue it; my imperative to you is to see all the paths, invent a hundred identities, meet every kind of person, think through the lens of your worst enemies. Only by expanding your vocabulary, experience, and exposure to the full complexity of the world can you hope to say, in the end, “I chose who I have become.”

Cited:

Hoff, K., & Stiglitz, J. (2016). “Striving for balance in economics: Towards a theory of the social determination of behavior.” Journal of Economic Behavior & Organization, 126, 25–57.

 

The “Priority” field in JIRA

The Legacy We Build Upon

The topic of lean metrics requires an understanding of the influence of Kanban and the Toyota Production System (TPS). Scrum, as an agile process framework, was built as a lightweight adaptation of TPS thinking that anyone could use, while Atlassian developed JIRA to enable visualization of any process, regardless of its complexity.

The “priority” field originated in JIRA’s oldest legacy as an issue-tracking system. GreenHopper was a plug-in that enabled the issue-tracking database and web services to be used by Lean-Agile teams. GreenHopper created a new front end for the data in JIRA – namely, the work-in-progress board and the product backlog – so that a Scrum or Kanban team could use JIRA for agile. Ultimately, this became synonymous with JIRA, and today it ships with the agile front end out of the box (OOTB).

In Scrum, the Product Backlog is used for prioritization, so the “priority” field has little meaning. In Kanban, the priority field is used to create swim-lanes based on Classes of Service.

Expedite Anything That Blocks Value Creation or Capture

“Blocker” would indicate that WIP limits can be violated and workers should interrupt their current work in favor of expediting the blocker through the system so it does not cause long-running damage to continuous flow. The equivalent in Scrum is building out a process for stories, tasks, or bugs that are allowed to violate the Sprint commitment. In either case, the goal is to recognize that this is costly and unhealthy, an exception to normal rules. Unless it is a production defect, some element of the expedite cost should be levied against the person demanding special treatment.

Flush the System of Any Critical Items that Disturb Continuous Flow

“Critical” would indicate that a card should “jump over” all other items at each step in the process, but it should not interrupt work or violate WIP limits. These items get to cut to the front of the line, but are not allowed to interrupt completion of existing efforts. The equivalent in Scrum is building out a process for what items, if any, are immediately prioritized to the top of the Product Backlog. It is typical for this to be a “Fixed Date” class of service – because fixed dates are the most consistent destroyer of sustainable flow, we want to get those things out of the way quickly so that the system can return to normal.

Everything Else Follows the “Normal” Process

“Major,” “Minor,” and “Trivial” are typically part of the same Standard (aka normal) Class of Service. If used in Scrum, they exist primarily for the benefit of the Product Owner, to visualize previous conclusions about prioritization. In Kanban, these items are meant to respect all WIP limits and follow a First-In-First-Out (FIFO) method at each step in the process.
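Taken together, these three classes of service amount to different queue-insertion rules. The mapping and helper function below are an illustrative sketch, not a real JIRA API:

```python
# Illustrative sketch only: this mapping and helper are assumptions,
# not actual JIRA fields or APIs.
CLASS_OF_SERVICE = {
    "Blocker": "Expedite",     # may violate WIP limits and interrupt current work
    "Critical": "Fixed Date",  # jumps the queue, but never interrupts or breaks WIP
    "Major": "Standard",       # FIFO at each step
    "Minor": "Standard",
    "Trivial": "Standard",
}

def position_in_queue(queue, priority):
    """Return the insert index for a new card, based on its class of service."""
    cos = CLASS_OF_SERVICE[priority]
    if cos == "Expedite":
        return 0  # front of the line (in practice it also interrupts active work)
    if cos == "Fixed Date":
        # Ahead of all Standard work, behind existing Expedite/Fixed Date cards.
        return next((i for i, p in enumerate(queue)
                     if CLASS_OF_SERVICE[p] == "Standard"), len(queue))
    return len(queue)  # Standard: join the back, strictly FIFO

queue = ["Major", "Minor"]
queue.insert(position_in_queue(queue, "Critical"), "Critical")
print(queue)  # ['Critical', 'Major', 'Minor']
```

A “Blocker” would land at index 0 and, in a real system, also trigger an interrupt; “Critical” cuts ahead of Standard work only; everything else joins the back of the line.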

Scaling Like Organic Systems

A System

A system – as we will define it – consumes resources and energy to produce something that is more than the sum of its parts. Not only does it produce value, it does so in a way that sustains its own existence. If we consider Henry Ford’s early Model-T production system that assembled automobiles, the raw materials – rubber, coal, plastic, steel – were meaningless as an unformed heap. Along the way, the “intrinsic” economic value of the raw materials was destroyed; they could no longer be sold for their original price as raw materials. At the time, there would have been no resale value for many of the assembly pieces, because Ford created an entirely new value network and disruptive business model to create a market that could properly assess the value of the non-luxury automobile. Yet the assembly line put these pieces together to create value greater than the sum of its parts.

An example of a relatively simple organic system is a single-celled organism like some species of plankton in our oceans. A plankton lacks sophisticated embryogenesis: there is no differentiation of multiple tissue types, no embedded systems, and no coordination mechanism across cells. Nevertheless, the simple biochemical processes and the internal workings that complete them have continued for billions of years, not only producing their own self-maintenance but also managing to reproduce. There is a surprisingly large amount of DNA for such a simple, small organism – but why did this legacy of code begin amassing in the first place? Whether we venture to call it “divine” or not, there was certainly a spark of some kind that began an explosion that has yet to collapse back into chaos and the dark.

Even with these simple systems, where we can trace each exchange in the value-transformation process, including materials, structures, energy, and ecological context, the sum total of the Model T and the factory that produced it is more than its parts heaped separately in a pile. Our difficulty in understanding such systems is a problem of multi-fractal scaling. For now, let it suffice to say that making a variable in a system better may not result in a linear change in outcome.

 

A Complex System

We have major issues understanding how (or worse yet, why) a system consumes resources and energy to produce value in excess of the sum total of the elements and energy amassed in the absence of the system that produced it. This problem is only compounded when we begin embedding specialized sub-systems within an organism. In the example of an automobile factory, we could say that every cell of every person is a system, that each person is a system, and that each distinct functional area, separated by distance, is a system. The accounting and finance “system” and the inventory and assembly “system” must interplay as part of Ford Motors, a system in its own right.

So we can define a complex system as having embedded sub-systems, causing the observer not only to see that the whole is greater than the sum of its parts, but possibly also to slip into a “confusion of levels” if they attempt to manipulate a part of the system to shift the outcome of the whole. Worse yet, confusion of levels can have disastrous, non-linear results that are the opposite of the intended change, due to confusion of cause and effect. When sub-systems are embedded within each other, their interrelationships may act on differing scales, in either time or place. So we must be careful when attempting to improve a complex system. We must use empirical process control to chart the change in system outcomes rather than simply optimizing sub-systems in isolation.

 

Multi-Fractal Scaling

A fractal is a pattern that repeats self-similarly as it scales. One of the most common fractal scaling patterns in nature is branching. From the trunk of a tree, to its major limbs, to twigs, and finally leaf structures, this fractal scaling pattern enables a lifetime of growth cycles. Leaves can bud purely based on opportunism, in a relatively disposable manner. This is because the tree, as a seed, has all the legacy of generations of trees locked inside it. The tree does not aspire to be “the perfect tree” or assume that it will grow in perfect sunlight, humidity, soil pH, and water availability. The tree does not get angry when a major branch is broken off in a storm or struck by lightning. Instead, its fractal scaling pattern is prepared for intense competition for sunlight in the sky and resources from the ground. The tree’s scaling pattern has risk mitigation “built in” because it grows the same in the middle of a field with frequent rain as it does in a dense forest.

We see this branching strategy throughout nature, from ferns to human blood vessels. However, an even more effective approach to self-similarity comes from multi-fractal scaling. The ability to adaptively select between more than one repeating pattern, or differentiated patterns based on scale, requires a different kind of fractal: the time-cycle. It is not just the branches of a tree that result in an environment-agnostic strategy for growth, it is the adaptation to cyclical daily growth, scaled to cyclical annual growth, then scaled to multiple generations of trees that grow. This final step is an important one. Multi-fractal scaling is not only the source of novelty and adaptiveness “built in” for a single tree, it repeats at an even larger scale as a species competes for dominance of a forest. Multi-fractal scaling encourages “just enough” opportunism to enable small-scale experiments that can be forgotten without loss at a greater scale, or thrive when conditions change.
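As a toy sketch (not biology), the difference between a single self-similar rule and multi-fractal scaling can be seen by letting the branching rule itself vary with scale:

```python
# A toy sketch, not biology: contrast one self-similar rule applied at every
# scale (mono-fractal) with a rule that is itself selected by scale.
def tips(depth, rule):
    """Count growth tips when the branching factor may vary with scale."""
    if depth == 0:
        return 1  # a leaf: cheap, disposable, opportunistic
    return rule(depth) * tips(depth - 1, rule)

mono = lambda depth: 2                       # the same split at every scale
multi = lambda depth: 3 if depth > 2 else 2  # coarse scales follow a different rule

print(tips(5, mono), tips(5, multi))  # 32 108
```

With a single rule, reach grows uniformly; when the rule is selected by scale, the same depth of growth produces a very different structure – the room for adaptation that multi-fractal scaling exploits.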

 

Adaptive Multi-Fractal Scaling

The strength of multi-fractal scaling, from branch to tree to forest, is its total reliance on empirical process control. The legacy code is a confusing jumble of competing messages that a human mind, attempting to “engineer a perfect tree,” would try to simplify and beautify. That legacy code, however, wasn’t written with any intention of crafting a perfect tree. That code was written to create a minimally viable reproductive system. It is built for one thing: continuous experimentation.

Continuous experimentation happens at each level of multi-fractal scaling, risking economics appropriate to its scale to find asymmetric payoffs. An oak tree risks very little per leaf over the entire course of its life. In a dense forest, however, that continuous experimentation of growing leaves higher and more broadly, opportunistically, based on local returns on investment, can suddenly break through the forest canopy or unexpectedly fill the hole left by another tree’s broken limb. An oak tree does not require centralized control of where leaves will grow or which limbs to invest in. Instead, the legacy of continuous experimentation enables multi-fractal scaling that competes locally and opportunistically.

Again, we do not need to understand what spark set this fire ablaze, we only need to see that it is still spreading and we are a part of it. Over-simplification of superficial outcomes will lead to poor decisions about inputs. Organic leadership relies on context, structure, and enablement of continuous experimentation. Organic leadership is a “pull” system that relies on scaling patterns for decentralized empirical process control. Artificial “push” systems force requirements and attempt to bandage the inevitable inefficiencies of a non-adaptive system.

 

A Complex Adaptive System

A complex adaptive system does not merely take in resources and energy to produce itself and reproduce itself as a unified “whole” that is greater than the sum of its parts. It does not merely embed subsystems with multi-fractal scaling and decentralized control. A complex adaptive system also operates with a continuous experimentation system built into its normal framework of activities. When we make the leap from an oak tree to the human body (or any other mammal on Earth), we can truly appreciate just how complicated it is to improve the health of an individual, or an entire population, when we observe the interrelationships of various physiological and socioeconomic systems and sub-systems. Creating lasting change is not only complicated in terms of finding the correct level and understanding the full ramifications across the entire system; each complex adaptive system is also continuously experimenting and will adjust against such changes based on short-run, local, decentralized opportunism.

To care for a complex adaptive system requires not only an understanding of inputs, processes, and outputs, but also the multi-fractal scaling of continuous experimentation that maintains long-run viability. When short-run economics are working against long-run viability, it is not sufficient to reward “correct” behavior to counteract short-run opportunism.  Instead, we must shift the context of local decisions so that short-run opportunism serves long-run viability.

Accidents Will Happen

Accidents may seem to the observer to be unintentional, but continuous experimentation is built to test the boundaries of success, to ensure that precise empirical process data is also accurate for the needs of viability. In other words, if you’ve ever accidentally tripped and fallen, or accidentally loosened your grip on an egg and dropped it on the kitchen floor, this was a natural element of complex adaptive systems quietly running experiments.

Embedded in our own human code, our sub-systems are all built for continuous experimentation as a method of calibrating precision to accuracy, using multi-fractal scaling on short, long-short, long, and distributed cycles. A short cycle is an immediate reference point for an event, using data held in working memory, and is reactive to immediate changes. A long-short cycle compares current data to immediately recognizable patterns of events – more deeply embedded memories or conditioned responses that have proven useful over time – even if we assume the event is an occasional outlier. More significant, painful events can skew our “normal” for decades and even be passed to the next generation as part of our genetic code. A long cycle has been stored to our genetic hard drive for future generations. A distributed cycle is a socioeconomic artifact that requires a medium of exchange and may last for centuries.

As humans, our multi-fractal scaling of continuous experimentation results in the creation of complex adaptive socioeconomic systems. Our legacy code drives us toward exchange, tooling, building, and reproduction because the experiments that are in motion are far from complete.

Like our occasional fumbles and falls, our social systems produce results that appear to be accidents with no guilty party, pure coincidences of circumstance, which occur due to failed experiments. Organic leadership harnesses this natural propensity for decentralized opportunistic experimentation by encouraging it but setting boundaries for it, feeding it but ensuring checks-and-balances from opposing interpretations, and guiding it by changing context and opportunity rather than directly managing outcomes.

Have you failed at dual-track Scrum?

Dual-track Scrum is a red flag that no part of your organization is practicing lean agility in any way, shape, or form. It preserves the transactional, finite, short-sighted project mindset.

Cadence improves internal signaling, but layering staggered cadences means you have missed the underlying economic factors that make Scrum so effective.

To be transformational – to dramatically shift your business model, disrupt your industry, or move to long-run economic optimization – requires an understanding of multi-fractal scaling and how time, distance, investment, and exchange differ based on their scale.

For an in-depth look at time-cycle scaling in a typical digital value stream, check out my playlist on YouTube:

Time Cycle Scaling Economics

Orienting is Essential to Agility

Responsiveness and disruptive influence are the cornerstones of agility, because change through continuous experimentation is fundamental to life. Healthy and viable systems maintain their complexity far from equilibrium, relentlessly fighting collapse and death. After all, “poised” on the brink of chaos, there is an obvious business definition for agility:

Responsiveness to signals in a market with imperfect information and imperfect competition.

This context necessitates process control that keeps identity and novelty in constant tension – even against our most brilliant ideas. Thus, our tactical principles for general preparedness, quick orientation, and powerful responsiveness will be rooted in the need to orient faster than the enemy system, our ideological competition. Only the working product of our efforts can provide a pragmatic judgment of the value we have created, so the ultimate measure of our success as a Disruptive Influence is the actual change in behavior we have caused.

Because individuals and interactions are inherently complex, adaptive, and difficult to predict in the reality of socioeconomic competition, we value knowing them directly, studying them and interpreting their position ourselves. We value this over relying on their predictability, likelihood of adherence to an agreed-upon process, or correct use of the best possible tool for any given job. Although we assume processes and tools taken at face value will deceive us into a false sense of stability, we also recognize that individuals and interactions cannot always be taken at face value either.

Because responsiveness, both in decisiveness of action in an unexpected situation and as adaptation over a long-term investment horizon, will consistently be rewarded with asymmetric payoffs, we can only trust a plan to the extent it includes contingencies, delays commitment, and distributes control to the individual with the best understanding of the situation at the time a decision must be made.

Because compromise is the inevitable and unsavory outcome of “contract” negotiation, while creative endeavors in contradistinction rely on the energy of tension, cognitive dissonance, intra-organizational paradoxes, and conflicting interpretations, we invest our time and effort in social exchanges while delaying formalization. A contract relies on an external locus of control for its power and validity, whereas we must prioritize a social and socioeconomic view of the complex system we hope to lead into adaptation.

Because a socioeconomic “factor of production” is defined by its output, evaluated on how much more value “the whole” can add in excess of its “parts,” and because digital products are continuously created and maintained but never mass-produced, we take the tangible product of our endeavors as the only valid measure of its worth. However good the product design looks on paper, however well-defined the documented future state, only exchange in the marketplace can determine the economic value of the product we have actually created.

Drawing the Line Between PO and BA

The Scrum Business Analyst

I have heard more than once that “There is no BA in Scrum.” Imagine how your BAs feel when a transformation starts! At best, they are uncertain what their role ought to be. At worst, it is made clear by everyone else in the process that the BA is no longer needed or wanted.

The irony, for an agile coach viewing this as an outsider, is that numerous individuals throughout the value stream – who are also struggling to cope with the shifting sands of transformation – frequently report that mistakes, lack of prioritization, failure to clear dependencies, and miscommunication are due to “being too busy.”

Obviously, just from this “too busy” problem, there are two important things the BA ought to do as an active member of a Scrum Team in a scaled environment:

  1. Act in a WIP-clearing capacity to the extent their T-shaped skills allow. To whatever extent they lack T-shaped skills, the moment they are not clear on how to utilize their time is the perfect opportunity to develop them.
  2. Capture the very broad “reminders of a conversation” about a story that, in a large enterprise, occur across a larger number of individuals, over a longer time period, and in more geographically distributed locations than “core scrum” implies.

Roles and Accountability

Now we can draw the line between the Product Owner and the Business Analyst.

The Product Owner is accountable for decomposing an Epic or expressing a single enhancement as User Stories.  The Product Owner creates a Story card in JIRA for this initial Story list that includes a JIRA Summary and the User Story in classic format:

As a {user persona} I want {action} so that {expected value to the user}.

This is an expression of “Commander’s Intent” and represents why the story is being developed and who cares whether or not it is developed. Thus, the User Story is an expression of product strategy, and represents trade-off choices and prioritization. The decision to expend finite and expiring resources – time, energy, money, and talent – on one product change versus another is the most critical accountability of the Product Owner.

Although the what and how are negotiable, the intention of the Product Owner serves as a litmus test for all subsequent decisions. The what and how are the realm of operational effectiveness rather than strategy. This realm includes the framework of economic decision making and the processes, practices, and tools that streamline communication and align the strategic direction of a distributed control system.

The Business Analyst uses the Description to succinctly express the what and how that have already been determined so that no context is lost in subsequent decisions. The what and how remain negotiable to the extent that changes better serve the “Commander’s Intent” of the User Story.

On an analog Scrum board, there is typically an agreement on “front of the card” and “back of the card” content that serves as the “reminder of a conversation” for the team. In a scaled environment relying on a digital board like JIRA, the Summary and Description fields serve a similar purpose. As the number of individuals contributing to the value stream increases, the need to detail the conversations that have already occurred increases as well.

In the process of detailing each Story Description, it will often be apparent – due to test data or testing scenario coverage – that a Story ought to be split into two or more stories.  The Business Analyst completes this activity and is accountable for communicating the split to the Product Owner.

Stories may also be further split during Backlog Refinement or Sprint Planning based on additional insights from the team. Attendees should collaboratively decide who will capture this decomposition within the tool, but the Product Owner remains accountable for any prioritization decisions the split affects.

Purpose of the Story Description

So, to meaningfully define the role of the Business Analyst, we need an understanding of what value is created if one individual “owns” capturing the elements of a Story Description as the number of these predetermined elements continues to grow. To the extent that, at scale, the team is unable to economically interact with every other value-add activity in the value stream, the purpose of the Description is a succinct expression of all value-add activities and decisions that have influenced the User Story prior to development. While we want to express these in the fewest words possible, and work toward distributed control of decisions, we do not want previous insights “hidden” unnecessarily from the Scrum Team.

Several important activities have likely occurred prior to our Sprint:

  1. Business decisions fundamental to the economics of our interaction with the customer.
  2. Funding based on an overarching strategic initiative.
  3. Customer research and analysis of product metrics.
  4. User Persona definition and Empathy Mapping.
  5. UX Proofs of Concept and/or A/B Testing.
  6. Stakeholder meetings.
  7. Success Metrics defined.
  8. Technical dependencies fulfilled (such as a new or updated web service API).
  9. User Story decomposition.
  10. Other Stories already developed related to the feature.

Thus, many details needed “downstream” should be easily expressed in advance of the Sprint:

  1. Why are we building this story?
  2. Who is the User?
  3. How is this User unique in our Product (i.e. relate persona to an account type)?
  4. What Test Data will need to be requested to test the story?
  5. What steps does the User follow to obtain the value of the story?
  6. What will the User see when they finish the story?
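As one way to keep those answers from being “hidden,” the six questions could be captured in a lightweight template. This is a hypothetical sketch; the class and field names are my assumptions, not actual JIRA fields:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of a Story card; field names are illustrative
# assumptions, not actual JIRA fields.
@dataclass
class StoryCard:
    summary: str       # Product Owner: the User Story in classic format
    why: str           # 1. why are we building this story?
    persona: str       # 2. who is the User?
    account_type: str  # 3. how is this User unique in our Product?
    test_data: str     # 4. what Test Data must be requested?
    steps: List[str] = field(default_factory=list)  # 5. steps the User follows
    outcome: str = ""  # 6. what the User sees when they finish

    def description(self) -> str:
        """Render the BA-owned Description so no prior context is lost."""
        return "\n".join([
            f"Why: {self.why}",
            f"User: {self.persona} ({self.account_type})",
            f"Test data: {self.test_data}",
            "Steps: " + " -> ".join(self.steps),
            f"Outcome: {self.outcome}",
        ])

card = StoryCard(
    summary="As a premium member I want saved carts so that I can reorder quickly.",
    why="Repeat purchases are a strategic initiative.",
    persona="Premium member",
    account_type="premium account tier",
    test_data="One premium account with prior orders",
    steps=["Open cart", "Save cart", "Reorder later"],
    outcome="A confirmation that the cart was saved",
)
print(card.description())
```

The point is not the tooling; it is that each field forces a previously made decision to be written down once, succinctly, where the whole Scrum Team can see it.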

Management by Spreadsheet? You’re Doomed

How Bad Operations Management – Capacity Utilization and “Managing by Spreadsheet” – will destroy your company.

Of course, this favorite tactic of ineffective operations managers takes so long to unravel everything you’ve worked for that leadership never figures out what happened. After the laws of economics bring the company to its knees, smaller in revenue and resources, it begins growing again, repeats the same mistakes, and crumbles.

My colleagues and I enjoy calling this flawed approach “Management by Spreadsheet” – and it is unfortunate that this is a scenario where leaders rarely learn from history and are proverbially doomed to repeat it.

I understand: knowledge workers are expensive, and variability of demand for their brilliance, as spending on payroll rises, is a terrifying prospect.

But I promise you this: The moment you attempt to control variability in capacity utilization for your individual allocable resources, you have signed a death sentence for your knowledge-worker-dependent company.

Why? A focus on capacity utilization sends a clear message to employees that there is nothing more important than the time they spend actively engaging in the most important function on their job description – which you probably put right on that spreadsheet. Every individual will now maximize their own workflow and they will do it at the expense of the overall system.

How? This emphasis tends to be set squarely on the “run” phase of each worker’s process. For example, a developer now has the clear message that optimizing time spent coding is the only expectation from leadership. When an individual worker in a complex process optimizes their own capacity utilization, there are a number of tactics they pursue:
– Isolation from other workers rather than collaboration.
– Dependency on other workers to complete the planning, setup, and validation portions of their workflow, decreasing quality and overall value-add.
– Demand to receive ever-larger batches of work to increase the amount of time they can work uninterrupted.
– Increasingly large-batch output, increasing cycle time and decreasing quality.
– Increased external locus of control as anything outside their large-batch run-phase focus is not their “job” anymore.
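The damage is non-linear. A minimal queueing sketch (standard M/M/1 assumptions: random arrivals, a single server; not a model of any specific team) shows how expected wait time explodes as utilization approaches 100%:

```python
# A minimal queueing sketch (M/M/1 assumptions) of why pushing capacity
# utilization toward 100% explodes wait time non-linearly.
def avg_wait(utilization, service_time=1.0):
    """Expected time waiting in queue: service_time * rho / (1 - rho)."""
    rho = utilization
    if rho >= 1.0:
        raise ValueError("An overloaded queue grows without bound")
    return service_time * rho / (1.0 - rho)

for u in (0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"{u:.0%} busy -> waiting {avg_wait(u):5.1f}x the service time")
```

At 50% utilization, work waits about as long as it takes to serve; at 99%, it waits roughly 99 times as long. Chasing that last slice of “busy” is exactly what manufactures the bottleneck.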

As a dog returns to his proverbial vomit, so also the operations leader focusing on capacity utilization will do everything in their power… to make this situation EVEN WORSE: they will add capacity to alleviate bottlenecks.

The complex system in which individual processes optimize their own run-phase process inevitably puts immense stress at a single point in the system. Whether one person or an entire department, capacity utilization management will create one crisis after another due to bottlenecks. The individual (or team) who becomes the bottleneck becomes overwhelmed and will wave every red flag they have as high as they can.

And leadership, who put them in this painful situation, attempts to save the day with additional control over capacity – by adding NEW resources at the bottleneck.

Of course, because variability of demand was the original problem, this means capacity bottlenecks will emerge in each subsequent silo in the system over time. Typically, this does not happen within the course of one project so no one can see it except the leader who is making the bad decisions in the first place.

In software development, where resources are extremely expensive and in short supply, this has horrible consequences – many projects can stall at the same bottleneck due to the lengthy cycle time of “talent acquisition”. Companies make hiring decisions in a state of crisis, when they are least likely to consider the long-term impact of the decision for their payroll or company culture.

At first, this leads to increasingly expensive hiring decisions with the fallacious assumption that resolving this one bottleneck will balance out the system. Then, due to variability in demand and increasing batch size and feedback cycle time, other bottlenecks inevitably emerge.
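
Queuing theory explains why those bottlenecks are inevitable. In even the simplest model of a single worker facing variable demand (an M/M/1 queue — an illustrative assumption, not a claim about any real team), expected queue wait grows without bound as utilization approaches 100%:

```python
# Why "maximize utilization" backfires: with random arrivals and random
# service times (M/M/1), expected queue wait is Wq = rho / (mu - lambda),
# which explodes as utilization rho -> 100%.

def mm1_wait_time(arrival_rate: float, service_rate: float) -> float:
    """Expected time a task waits in queue for an M/M/1 system."""
    if arrival_rate >= service_rate:
        raise ValueError("system is unstable at >= 100% utilization")
    rho = arrival_rate / service_rate  # utilization
    return rho / (service_rate - arrival_rate)

service_rate = 10.0  # hypothetical: tasks per week one specialist can finish
for utilization in (0.5, 0.8, 0.9, 0.95, 0.99):
    wq = mm1_wait_time(utilization * service_rate, service_rate)
    print(f"{utilization:.0%} utilized -> {wq:.2f} weeks of queue wait per task")
```

Pushing the specialist from 50% to 99% utilization does not double the queue — it multiplies the wait a hundredfold, which is exactly the crisis the red-flag-waving bottleneck is experiencing.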

This is when leadership starts doing things that make alarms go off for everyone – suddenly, because the operations tactics have made payroll expense more of a burden than ever, the company stops hiring expensive resources out of panic and starts throwing the cheapest body it can find at the bottleneck in the shortest possible time.

You will see your first major walk-out of your most important resources begin.

In a state of denial, a capacity utilization manager will see this as a windfall, since payroll just went down. A systems view sees this as the beginning of the end, because the ratio of people with the most historic commitment, highest barriers to exit, and longest legacy of contribution to company culture – to those who are cheap and new and willing to leave at any time – has just shifted drastically.

If you haven’t realized it yet, that spreadsheet is lying to you and it is pushing you toward financial crisis.

The truly sad thing is that some portion of executive leadership – sometimes all of them – actually believes things are getting better. More numbers and more spreadsheets translate to “we’re doing the best we can with a hard situation.” So the capacity utilization advocates survive the crisis that follows, because they have the spreadsheet that pacifies the misled leaders. The focus stays on utilization, bottlenecks, and talent acquisition, continuing the downward spiral.

Of course, you’re also no longer leading the same company as when you started tracking capacity utilization of allocable resources – whatever cross-functional teams composed of collaborative contributors you had are gone. Sometimes, an entire functional unit has turned over, leaving you with a group that was forged in the mold of large-batch, long queue, high cycle time work.

A few bad projects and damaged customer relationships later, that variability of demand combines with the added capacity of less effective resources to create a perfect storm. A small but perfectly manageable drop in demand arrives, and suddenly no individual can maximize their capacity utilization.

Panic ensues.

Your most expensive resources probably start looking for another job because they see very clearly that the marginal return on their weekly payroll just tanked.

The second walk-out occurs at this point, further straining relationships with existing customers while placing the occasional project in crisis as another resource is pulled from their silo and expected to hit the ground running.

Revenue is still falling, so low performers are sought out and “unnecessary” benefits are reduced. Fire a couple of salespeople here and a few testers over there because they can’t easily prove how they add value. Replace your talent acquisition rep. Anyone who was never a bottleneck, in fact, is expendable – but you’ll start with anyone you can get away with firing so that you don’t have to pay out accumulated vacation or other obligations.

I assure you that the entire company sees that this fierce loyalty to “managing by spreadsheet” has resulted in the destruction of everything that made your company an awesome place to work. Everyone is now looking for a job.

Because the system has been trained against responsiveness to demand variability, it only takes a few more waves in the natural ebb and flow of demand to make you desperate enough to lay off anyone who is too expensive for their contribution at the exact moment of desperation. Anyone who was holding off on leaving due to timing or a sense of pride in “finishing what you start” will now also actively look for a job.

Now that you’ve shrunk to the point that fewer managers can “manage” the spreadsheets, you fire them, too.

Congratulations, you have basically destroyed everything you had worked to achieve with your company.

Unfortunately, many companies never learn their lesson: they never ask the right questions, continue the flawed approach, and repeat the cycle of growth and collapse until a reputation for mismanagement makes it impossible to continue the vicious cycle.

If you are at any stage of this debacle, it’s not too late. The same internet upon which you found this post was built based on scientific principles from economics and queuing theory that can save your company.

Send me a message to find out how.

7 Simple Steps to Agile Transformation

I am never sure how to answer someone who says “What is agile?” After all, my mind is racing so fast that my ultimate, simple explanation – “A way to innovate and deliver products more effectively” – leaves me wishing I could kidnap people for a 3-day course on lean-agile and continuous delivery.

What I can simplify (for someone who has a basic understanding of agile) are the steps in a true transformation, so that they can let me know where they are in the process. Note that I have ordered these quite logically, while the real world is full of resistance, grey areas, and co-evolution.

  1. Establish a cadence of synchronization (typically, this is scrum). Hypothesize the results of every change ahead of making it, test it, and validate or invalidate the hypothesis.  Inspect and adapt.
  2. Change from a human resource allocation mindset to a well-formed team mindset.
  3. Change from a finite project mindset to a living product mindset.
  4. Sell who you are, not what you plan to have on a shelf in X months.
  5. Change from a P&L and ROI mindset to an Economic Value Flow across the organization mindset (including upgrades in equipment, training for knowledge workers, benefits that raise barriers to exit).
  6. Change from centralized (top-down) market research, innovation planning, and risk assessment to distributed control over prudent risks.  This requires a framework for self-validation of discoveries, exploitation of opportunities, and communication of results.
  7. Change from performance tracking and formal leadership to systems optimization and organic leadership.

Hit Contact if you’d like to discuss your scenario or any of these points – I’m always available.

Do Project Tasks go in a Scrum Product Backlog?

I get this question frequently when training agile and scrum teams:

Do Project Tasks Belong in a Scrum Product Backlog?

YES.

Since the answers to this question I have seen in chatrooms are typically poorly argued fragments of a crazed political debate, full of comments taken out of context, this very pragmatic question deserves a bigger-picture answer – because the need to ask it is a symptom of a stagnating transformation.

A successful shift from stage-gate or waterfall development processes to agile, Scrum, or Kanban requires a fundamental change organization-wide: from maximizing ROI and shareholder value to maximizing Economic Value Creation and sustainable competitive advantage. If this shift does not occur, the improvements gained from agile practices will inevitably stagnate.

Jez Humble refers to this state as Water-Scrum-Fall, that unfortunate state where most agile and DevOps initiatives plateau.

When I talk to development teams, Product Owners, and ScrumMasters, this plateau is most often blamed on a lack of executive buy-in.

I completely disagree.  

I have also blamed a manager or two for the imperfections in the agility of a company, so I can relate to this view. To show you why you might not even want executive sponsorship, let’s revisit the view of a corporation as a minimum viable superorganism.

Complex Adaptive Systems Leadership

A corporation is not a machine with various parts to replace or maintain in isolation; it is a superorganism. It is a biological phenomenon that is not sufficiently explained by social contract theory or by monetary theories of motivation. Judgments about this reality are very easily clouded. Unfortunately, once measurement and monetary incentives change the natural behavior of the superorganism, it is difficult to change back – making it easy to fallaciously claim the altered behavior as proof of their effectiveness.

Quantum physicists have suggested that undisturbed systems in the universe naturally stay in multiple states simultaneously, unless someone intervenes with a measurement device. Then all states collapse, except the one being measured. Perhaps what you measure is what you get. More likely, what you measure is all you get. What you don’t (or can’t) measure is lost.  – H. T. Johnson, “Lean Dilemma”

So when you hear “We need more buy-in from management,” this is absolutely incorrect. It is even counter-productive! The adaptations of a complex system – that disruptive creativity and innovation agile champions desire – can only occur through organic, emergent leadership: a tribal, heretical rebellion. Adaptation to a new stimulus may have a focal point, a “leader” who organically builds up energy in a new direction – but this leadership is an emergent property of the complex system. In contrast, formal leadership (“management”) is a crystallization of a complex system, an attempt to reinforce a desired “normal state” – a force that exists counter to emergent leadership and adaptation.

By default, formal leaders at all levels of an organism are incented (through power, money, and the Agency Dilemma) to maintain homeostasis – i.e. the status quo. Even if a formal leader becomes the emergent leader of adaptation, this will be at odds with her formal leadership. Unless she is willing to risk the loss of formal leadership, she will dissolve her capacity for emergent leadership and resume promotion of homeostasis – no matter how much it dampens creativity, innovation, and sustainable competitive advantage.

Evolution of a superorganism through disruption – whether a lean or digital or agile “transformation” – cannot occur if any one piece of the system is optimized in isolation from the whole because any superorganism, as a complex adaptive system, will exert tremendous energy to maintain homeostasis. The larger the superorganism, the more likely that optimization of one function or team will result in a net loss of desired adaptation (whether the desirable “adaptation” is called innovation, process improvement, or “growth”).

So, when a formal leader blesses the piloting of lean and agile practices by a completely isolated team, this is the superorganism equivalent of a mother’s amniotic sac – the team can establish itself as a unique complex adaptive system while in isolation, fed by the resources of the maternal superorganism but shielded from the homeostatic processes of the parent system. The moment this new team is re-integrated into the larger system, continued adaptation is unlikely. The company attempts to spread the culture of innovation and creativity the team achieved, but instead can only formalize a shift in a subset of practices. These practices, outside the context of psychological safety and a well-formed collaborative team, flop. No single activity of the pilot team will have the same value implemented outside the “bubble” that safeguarded it against the homeostatic forces of the superorganism!

But wait – what about that “net loss” in innovation, creativity, and efficiency I claimed?

In practice, when a company adopts an agile process (let’s say Scrum) as a change in behaviors isolated to the teams developing software, the rest of the system expends energy maintaining homeostasis – and even more energy is wasted by agents accommodating those homeostatic forces so that the development teams can preserve their no-longer-organic place in the system.

I think you know exactly what that looks like:

  1. Updating documentation processes without seeing documents as “artifacts” that emerge from an adaptive process, rather than as social contracts that require formal sign-off.
  2. Replacing one tool with another, causing a new set of employee workarounds to occur.
  3. Increasing frequency of software releases without changing the size of organization commitments.
  4. New meeting names that don’t change communication patterns or the homeostatic, status quo, “normal” flow of information.
  5. Continuous backlog decomposition as a manual transfer of a large-batch investment into small-batch development items.
  6. Oops! Another manual transfer at the end – of small-batch engineering back into large-batch approval processes.
  7. Changing job titles without addressing diffusion of responsibility and the lack of psychological safety inherent in the culture of the system.
  8. More overhead and forced “transparency” than if nothing had changed, through extra meetings, reports, metrics, and analysis, due to the natural distrust between formal leadership and emergent leadership, and the lack of trust in information flow between the homeostatic processes and the aberrant nomenclature of the development teams.

In the middle of all this, a large organization grabs its Project Managers and Business Analysts – or anyone cheap who is around and doesn’t have the “status” of a Product Manager, Director, or VP – and switches around their responsibilities to call them “Product Owners” and “Scrum Masters”.

What a debacle.

The newly-minted Product Owner receives Project Plans full of important tasks and milestones, a big, nasty Use Case document, and an even bigger, unapproachable set of Technical Specifications – and is told to manage what the team delivers with User Stories.

Now, in the midst of all this, should the Product Owner include Project Tasks in the Product Backlog? Or, to get down to brass tacks, could a task ever be a Product Backlog Item?

Absolutely!

But not all of them.

Some “Technical” Tasks (specifically not User Stories) are still Product Backlog Items

Technical tasks that create demonstrable economic value the organization can capture and that carry a known cost of delay, but are completely invisible to the user, STILL NEED TO BE PRIORITIZED relative to other potential Product Backlog Items.

This, of course, is why the question of whether these belong in the backlog is a sign that a systemic shift in thinking has not occurred. If you are optimizing for project ROI, then these tasks just don’t have the marketable, monetizable potential of each Use Case. If you have a systems view of optimizing the flow of economic value creation, these tasks are judged relative to any other potential investment. Economic investment is continuous, the economic value created can be judged continuously, delivery and value capture are continuous, and you can prioritize based on Weighted Shortest Job First or another collaborative decision-making process built around Cost of Delay.
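
As a concrete sketch of that prioritization – with invented backlog items and made-up Cost of Delay figures – Weighted Shortest Job First simply divides Cost of Delay by estimated duration, which is how a user-invisible technical task can legitimately outrank a marketable Use Case:

```python
# WSJF = Cost of Delay / job duration. Hypothetical backlog, including a
# purely "technical" task with no user-visible feature -- it competes on
# exactly the same terms as everything else.

backlog = [
    # (name, cost of delay per week, estimated duration in weeks)
    ("New checkout flow (Use Case)",          8.0, 4.0),
    ("Upgrade TLS library (technical task)",  6.0, 1.0),
    ("Social login (User Story)",             5.0, 2.0),
]

def wsjf(cost_of_delay: float, duration: float) -> float:
    """Weighted Shortest Job First score: higher means do it sooner."""
    return cost_of_delay / duration

# Highest WSJF first: the short, high-cost-of-delay technical task wins.
for name, cod, dur in sorted(backlog, key=lambda i: wsjf(i[1], i[2]), reverse=True):
    print(f"WSJF {wsjf(cod, dur):4.1f}  {name}")
```

The point is not the specific numbers but the shared currency: once everything is expressed as Cost of Delay over duration, “technical” and “marketable” items are prioritized in one queue.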

“Artifact” Tasks are an Agile Anti-Pattern

There is, however, another kind of project task in Water-Scrum-Fall that SHOULD NOT be in any development team’s backlog: artifact tasks. These are things like “Complete wireframe for new home page” and “Document Social Integration for PokemonGo”. No matter how you small-batch these tasks, they are not Product Backlog Items. They are not even artifacts. Artifacts are the tangible leftovers of the creativity and innovation of a strong agile software team. A documentation, design, or planning task is antithetical to economic value flow. It is a trap – a box you put your money in and bury. It takes all the value-add, throws it in a pile, and lets it sit there, unused, as it becomes gradually less valuable.

This mini-waterfall process – this outrageous lean-agile anti-pattern – surfaces in three ways, all of which I whole-heartedly reject and will actively undermine, in hopes that my heretical tribal rebelliousness will gain emergent leadership support:

  1. Business Kanban and Program Increment Planning tasks that lock up all creativity and innovation prior to the development team passively receiving instructions (as you see in shoddy implementations of the Scaled Agile Framework)? FAIL! TRY AGAIN!
  2. Tasks for non-developer “members” of the development teams completed as Sprint Backlog Items separate from the User Stories, thereby formally dividing cross-functional collaboration and preserving us-them Guilds (whether in dual-track Scrum or within even the shortest sprint)? FAIL! TRY AGAIN!
  3. Sub-tasks that formally divide up User Stories into function-specific tasks to complete? FAIL! TRY AGAIN!

These are all agile anti-patterns that prioritize tools, social contracts, and “process” over collaboration, communication, relationships, and creativity. You will never disrupt your organization, and your organization will never disrupt your industry, sorry.

“Milestone” Tasks are a Continuous Delivery Anti-Pattern

Since we started by asking whether the BA-or-PM-turned-PO ought to put Project Plan tasks into the Scrum Product Backlog, I’d hate to leave out “milestones”. Now you may say, “Andrew, that’s ridiculous, no one would treat a dependency as a Product Backlog Item!” Indeed, ridiculous. But that’s the ultimate sign of your Continuous Delivery anti-pattern. Truly optimizing the flow of economic value creation across the entire complex adaptive system would completely remove “milestones” and “dependencies”. If you can’t get rid of Project Plans completely – and continuously deliver and validate Finished Story Benefits for ALL work that the organization takes from identified pain to economic value capture – then whatever you started pursuing in your agile, digital, lean, or devops transformation, you’ve plateaued as a company.

And this is really the paradox that made the lengthy description of complex adaptive systems leadership necessary. This hurdle is NOT something that “needs executive buy-in.” It is something accomplished through outright insurgency, tribal heresy, and fait accompli rebellion.

That’s because Continuous Delivery takes more than agile ceremonies and user stories. It takes developers who are proud of knowing business context. It takes refactoring that no one approved. It takes a team moving to Git from Subversion without telling anyone. It takes a handful of people setting up a Continuous Integration server no matter how often the nay-sayers tell them it’s useless. Continuous Delivery is a change in engineering practices and development culture that tends to happen without formal leaders needing to approve anything.

It just takes the right people having enough pride in being BETTER that they draw a line in the sand and defiantly announce “THIS IS OUR CRAFT!”

A Heartfelt Epilogue: Real Creativity, Innovation, and Disruption is MESSY

Now listen, human-to-human: if all you know about “agile” comes from that one book you read, YouTube, or a two-day certification, I won’t be surprised if you’re thinking, “Wait, Andrew, that’s nothing like agile! How do I report you? How do I get you stripped of all your certifications?” That’s great. That reaction means I hit a nerve. Fantastic! Contact me and let’s talk about taking agility to the next level.

Truth is, I don’t look to my four certifications, five training courses, three conferences, my blog, OR EVEN my five years of attending, speaking at, and hosting MeetUps on agile as proof of my legitimacy on these topics. I measure my expertise in the number of experiments – including the major failures – I have been through with my development teams. The reason is simple: complex adaptive systems leadership is an emergent property that requires deep entanglement and shared experiences in the trenches. And, as it turns out, I’ve been in the thick of every kind of good or bad lean or agile possibility, trained people in that context, debated it ferociously in multiple companies, and compromised my values or experimented with teams to directly challenge every single principle your little YouTube summary glossed over.

If at this point you think some teacher let me down and it’s a real shame, I’ll be happy to give you a recommended reading list and a YouTube list, and to introduce you personally to other thought leaders who dive, like I just did, into the MUD of how you actually achieve creative innovation, strategic and operational agility, and lean, continuous delivery of disruptive economic value.

Either way, reach out so real dialogue can get started.