Game Theory of Philosophy


Every game in our cosmic-economic system begins with its axiomatics: those underlying rules that its players and police hold as self-evident to promote and perpetuate the game. Baseball has three strikes, America has its dream, and capitalism loves freedom. Each machinic information system, even one built single-handedly by a philosopher, develops its foundations upon such axioms, assumptions, and self-evidence. The difference lies in how transparently the observer acknowledges these “self-evident” underpinnings of belief. To secure the axiomatics falls within the spectrum of warfare, in which the sociopolitical force that one system may exert over or against other systems gains expression. No ideas become validated or invalidated through violence. No truth becomes self-evident through war. Only war itself, and the faith humankind places in war, gains any ground in its axioms. Countless lives find their end in a refusal to admit that an assumption may be incorrect, often not through their own refusal but that of some distant leader.

The line between strong opinions and open warfare, or between one war and another, becomes difficult to trace. The orientation of a system of values against any system of significance, for secret purposes of domination, we may call Maneuver Economics. We have discussed these practices in detail throughout Invasive Ideology. The most powerful tool of Maneuver Economics is the capacity to axiomatize other information systems, forcing others to subordinate their internal Information Dominance as an integrated component of a collective presumption of Hegemonic Truth.

In this way, axiomatization that dominates through absorption is semiotic Oedipalization, relegating a rebellious child-system of signs under its father-system of signs. Viewed over time and from a distance, the systems of signs that gain coherence become tree-like. This is Arborescence. Arborescence is a mode of analytic thought that continuously branches, triangulating in a plane that emerges as a power-law between gravity and the sun. This lies at the heart of all despotic dominance, all control of the individual in society, the driving force of phallic capitalism, and the immense growth of the sciences out of philosophy.

If the cosmos, economy, or mind could all be Oedipalized into arborescence through axiomatization, this raises the question of what we mean by axiomatization itself and of its significance in the conduct of our lives. Axiomatics, however coherent the system of rules, can only become “true” to the extent an actor desires participation in the game. The rules of football, baseball, or boxing are not truth-ideas that Clerics aspire to drive toward Hegemonic Truth. No one looks to metaphysics or religion to justify the proper number of rounds, innings, periods, or any other fundamental rule of each sport. Yet, regardless of how arbitrary their original invention or the history by which they came into consistent practice, each new player learns them from a more experienced player. The experienced player presents the rules as self-evident, table stakes to participate in the game. Either you want to play the game, by these rules, or you do not want to play. Full stop. End of argument. The history of philosophy has been the exposure of axiomatics like this, in which the rules of the game as it exists reveal themselves to have no basis other than the game itself.

Baseball has a three-strike rule. It is a rule with no intrinsic worth higher or lower than any other potential number of strikes per batter – two, four, seven… Yet at this point, whatever convenience it served players when the game, through oral and practical tradition, invented norms and standardized its conduct, “Three Strikes” has now been axiomatized into the legacy of baseball. Notice how strangely American culture reinvents other systems of rules based on this axiomatic. Three Strikes has spread into parenting, penal codes, politics, and business.

One may trace the history of its origins and its application in practice, but in the face of such axiomatic dogma no game will permit a meaningful exploration of the question “Why?” This question is meaningless. The impact of axiomatization has immense significance. Another axiom lies buried under every game, the greatest rule of all paternalism: so long as every player has the same rules applied to them, the game is fair. Paternalism is an Umpire. The game, produced by machinic information systems, must axiomatize every outcome, expanding the book of its rules into every exceptional possibility, axiomatizing every outcome into its cohesive framework. Each game becomes an ideological system, reproducing itself through confidence and certainty. Players consider every potential unfairness in support of the self-evident rule at the foundation. In baseball, it is that of three strikes – three strikes become an out, the first two foul balls count as strikes, and if the hitter does not swing at a pitch in the strike zone it is a strike. An entire system builds off a handful of arbitrary rules.
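The handful of arbitrary rules just described can be sketched as a toy state machine – a minimal, hypothetical model that ignores balls, walks, and the rest of the rulebook – showing how an entire system of outcomes builds off a few axioms:

```python
# A minimal sketch (illustrative only, not the full rulebook) of how a few
# arbitrary axioms -- three strikes make an out, fouls count only as the
# first two strikes, a taken pitch in the zone is a strike -- generate a
# complete system of outcomes.

def at_bat(pitches):
    """Return the result of a sequence of pitch outcomes.

    Each pitch is one of: 'swing_miss', 'called_strike', 'foul', 'hit'.
    (Balls and walks are omitted to keep the sketch small.)
    """
    strikes = 0
    for pitch in pitches:
        if pitch == 'hit':
            return 'in play'
        if pitch in ('swing_miss', 'called_strike'):
            strikes += 1
        elif pitch == 'foul' and strikes < 2:
            strikes += 1          # a foul counts only toward the first two strikes
        if strikes == 3:          # the axiom: three strikes make an out
            return 'out'
    return 'at bat continues'

print(at_bat(['foul', 'foul', 'foul', 'swing_miss']))  # 'out'
```

Note how the third foul changes nothing while the swing-and-miss ends the at-bat: every edge case is axiomatized back into the same small rule set.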

All this serious rule-making, conformity, and enforcement comes with investment. Without billions of dollars spent on fields, training, coaches, players, and the revenue at stake for the winners, such a foolish consistency might not seem as important. Baseball enjoys, in the professional arena, an axiomatizing subordination within capitalism. Capitalism likewise prefers a handful of arbitrary beliefs about fairness as its entire basis, as does any opposing socioeconomic philosophy. One distinction lies in the utter simplicity of its foundation that allows capitalism to axiomatize every other system, including the sale of opposing beliefs. This axiomatic has three nodes: representation, expansion, and acceleration.

Before we continue, let us understand the only real alternative by completing the example of sports. In contrast with the axiomatization necessary for the investment of massive franchises, children left to their own free play often do not formally agree to any complete rules of baseball. Every rule is open to experimentation. They may use tennis balls instead of baseballs, run down-and-back in the absence of four bases, or play without separation of teams. If they wish to take their practice more seriously, they mimic adults. They play a game free of the axiomatizing power of subordination under salaries, bets, investments, and lawsuits. This leaves their sociopolitical product more open to re-valuation.

This likewise introduces heartache and lesser forms of civil warfare. Suppose the children agree to a five-strike rule, or agree not to define any area as foul. Such an arbitrary change of axiomatics will matter little so long as democratic agreement holds steady. That is, as any father knows well, until one child sees an advantage in unilaterally changing one of these arbitrary rules. In an unfettered phase space, machinic information systems have this childish tendency: to produce unfair games, systems that encourage cheating, winning based on manipulation rather than skill. It is in these moments that Oedipalization of the game becomes the path of resolution for the players, despite their original desire to escape supervision through free play.

When the unfair “nature of the game” cannot attain resolution with an equilibrium exchange of truth-ideas, one of two options occurs. Without a trusted source of resolution, an arborescent father-figure, such a ball field lacks triangulation under a dominating axiomatic, and a child is likely to “take their ball and go home” – a metaphor adults often use to describe anyone frustrated with the intricacies of the self-evident foundations of our various machinic information systems. The other option is to bring a parent to the field, thereby ruining the free play of childish creativity.

Now we have two problems in our metaphor. Some systems, some games, one cannot simply “walk away” from. We cannot take our ball and go home when it comes to death, taxes, and a few other axiomatic elements of social and existential facticity. This triangulation of unfairness follows one of two paths. On the one hand, players seek external retribution via paternal information systems by telling mom, calling dad, going to court, or going to war. On the other hand, players seek internal vindication by displacing conflict to a self-evident, autonomous information dominance: “This is how it has always been, there is no helping it,” or “Everyone must do it the same way, so there’s no sense arguing.” These two methods form the normative boundaries of all civilization. They are an integral outcome of the Genetic Capitalism of Will-to-Power. This is the ultimate axiomatic, the capitalism of life-codes. “Thou Shalt integrate your code or your contribution dies with you.”

Many philosophers, teachers, coaches, and priests attempt to hide that their arguments reach a conclusion they held from the beginning. Inspired by the scientific method, like any father who gains a moment of insight from the simple wisdom of his child, we as philosophers should be forthcoming at the outset regarding our axiomatics. We can all join this game on equal footing and with adequate forewarning, knowing the table stakes and the half-time accoutrements up front – or feel free not to play.

Quantum Liberty

“Some of us should venture to embark on a synthesis of facts and theories, albeit with second-hand and incomplete knowledge of some of them—and at the risk of making fools of ourselves.”

– Erwin Schrödinger

Groundwork for an Ethics of Machinic Agency

Freedom in action is predicated upon equalities that never manifest empirically but instead follow predictable laws; nevertheless, we can build a case for quantum liberty. Even if the physics of lawful activity, determined within a probability density of particle-laborers, suggests we are not free, we have an innate sense of responsibility for consequences. This responsibility in ourselves and others is Agency. The paradox of Agency is that it requires us to believe in free will and determinism simultaneously. However, this is only a paradox when we apply abstraction that places our ideas on a single plane. Without this confusion of levels, the system of freedom and determination becomes clear.

While freedom is a homogenous lack of hindrance predicated upon categorical non-individuality, liberty is the emergent process of relative socioeconomic non-hindrance catalyzed by the sociopolitical power-laws that maintain the stability of non-equilibrium exchange. Quantum Liberty means that cosmic expansion ripples into a system of inequalities that, through capitalistic exchange, generates the rules that make us free. Are we free to fly? How silly – of course not – but the laws of physics liberate us to the extent we exploit some superpositions against others. Liberty is, in practice, the exploitation, at one Level of Observation, of the power-laws and constants we find true at other levels.

Our emotional sentiments toward the freedom-signal and the liberty-signal stir some rebellion against this truth-idea; but, as Marx and Engels said about so many platforms of the Communist party – anti-property, anti-marriage, anti-nationalism – we do not freely bring these abstract commodities, these wave functions of justice, independently into being. A crowd of assemblages, possessing capital-mass and Information Dominance, leads and controls these concepts. We concern ourselves little with DNA and hormones as laborers of the human body, concern ourselves even less with photons and electrons as laborers of the human cosmos, and have only recently concerned the middle class with the citizen as laborer of a socioeconomic system. This is precisely why liberty is an anti-freedom, however much the tradition in philosophy expresses it in fluffy, optimistic, utopian crescendos. More specifically, the hegemonic majority, within one normalized standard deviation of the liberated “average” citizen, enjoys far more freedom than those “long tails” of the sixth sigma, the asymptotic minorities, the socially dead.
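The claim about the one-sigma majority and the six-sigma tails can be made concrete with a small calculation – a sketch that assumes, purely for illustration, a normal distribution of "liberation" across citizens:

```python
import math

def share_within(z):
    """Fraction of a normal population within z standard deviations of the mean."""
    return math.erf(z / math.sqrt(2))

# The "hegemonic majority" within one sigma of the average citizen.
majority = share_within(1)
# The "long tails" beyond the sixth sigma: the asymptotic minorities.
long_tail = 1 - share_within(6)

print(f"within 1 sigma: {majority:.1%}")   # about 68.3% of the population
print(f"beyond 6 sigma: {long_tail:.2e}")  # roughly 2 in a billion
```

Roughly two-thirds of a normal population sits inside one standard deviation, while the population beyond six sigma is on the order of a few parts per billion: the asymmetry the passage describes.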

Before we conclude in favor of revolution on the one hand, or fascism on the other hand, let us understand what freedom through rules, and therefore quantum liberty, implies for reality and human life. It is not simply the axiomatics of exchange that make “free” markets stabilize around their electromagnetic equilibriums. Equal freedom of exchange does not create Liberty on its own. We also cannot justify totalitarian inequalities or anarchistic freedom based on the differentiation of vectors. The individual narrative of the egoist, as shown by Max Stirner, is always at the expense of others. Even if we were all equal in our labor upon a common claim of the resources of Earth, liberty is far from individual. The problem of liberty lies in the sphere of morality, and the consequences that arise when all things freely exchange in accordance with identical rules. As Bertrand Russell described, coherence is not sufficient evidence that our beliefs are true, as multiple coherent systems of belief accurately using the same data are possible, yet these systems are nevertheless incompatible with one another, implying only one is correct or all are incomplete (PP). Likewise, if Liberty is “Freedom maximized by Rules,” we will quickly see that many coherent axiomatic systems of liberty result in different social consequences in practice. We should pay special attention, though Baudrillard analyzes this in hyperbole and pessimistic tones, to our realization that our systems of exchange are so ubiquitously managed that even the absence of a rule is a judgment regarding the morality of that rule.

We shape the plane of socioeconomic inequalities primarily not by rules “among equals” but by encoded laws so far removed from the reality of their enforcement as to encourage ignorance or passive acceptance. It seems the Universe and the State have this in common. Few question the validity of gravity or the stop sign once their context socializes them to accept such external control. The apparent power-law constants of molar aggregation and the emergent anti-entropy of the quantum level constantly expand. The rules are the pipeline that secures the flow of liberty, but the original free play becomes something distinct in the resulting markets of exchange. We find this system beholden to coherence in motion rather than identity. The rules at the level that we can predict are unequal to our personal level of observation. The continuous functions of Information Dominance, non-exchangeable in any scenario, are the rules that liberate us for exchange at our own level of singularity.

State of Nature philosophy puts the information equality of abstract citizenship precisely in this way – the king and the peasant die equally well on the guillotine. In more recent media, everyone becomes equal with a gun in their mouth. What a simulacrum indeed! The exchange-value of human vitalism, the cosmic citizen-as-particle, meets its final market correction in contrast only to the State of War. Locke justifies slavery based on prisoners of abstract war, involuntary servitude limited to byproducts of The War Machine (STG), while Deleuze & Guattari poignantly speak on behalf of postmodern capitalism-citizens, that we are all slaves, slaves of slaves, bound to our facticity of death (AO).

Irreducibility is a pattern superimposed by the human mind, which in observation of gradation consistently loses track of relevance. It is far easier (and lazier) to establish dogmatic planes of signification. Mastery, whether of a painter or a chemist, lies in the practice of layering gradations to create coherence. To the rest of us, the “irreducible” components of any system behave in a wave-like manner, a great ocean we barely know. With sufficient opportunities, when given the “breathing room” of sufficient space-time within the phase of existential instantiation, the components behave like particles. These waves crash onto the shore of our consciousness, impressing us and moving our sands. The wavelike components of reality thrust upon and collapse onto the beach of our mind as so many particularized objects – particles “in principle” only, because their irreducibility is as much a fiction of the excitable mind as the further reducibility on another plane of observation. Creating a continuous reduction leads to confusion of levels, because abstraction treats the ocean, its motion, and the crashing waves as one sign. Observing planes, like gradation between primary colors, confuses the observer unless they may jump from one order of magnitude to another, sweeping the fuzzy vertical under the epistemological rug.

The trouble with any system of coordinates is the implicit role of a coordinating system that controls the orientation of the coordinate system. For instance, while a fighter pilot during a dogfight works to complete complex maneuvers against the enemy, applying a fluidity of spatiotemporal orientation to generate and exploit opportunities, we must recognize that the orientation of the coordinate system, the fighter jet, orients under the control of a coordinating system, the pilot. Changing the orientation of a system of coordinates may change nothing about the components of the system, but it shifts the observation available and opens new planes of significance we previously overlooked due to gradation errors.

These problems of conception reveal a first principle: Quantum Liberty is the skewed emergence of the probability density of component particle-becoming. The orientation of the Observer skews the concretized outputs of each sociopolitical production system. We can begin with a soft subjectivist assumption that most components have an incomplete understanding of their system, and that some components have an orientation that produces Information Dominance against other components and other systems. Therefore, we should begin any analysis with a healthy scientific skepticism of the Observer – especially of ourselves.

This analysis spans all of philosophy. First, the question of what cosmic laws may tell us about our own laws. Second, the question of what cosmic freedom may tell us about social, economic, and political freedom. Philosophy does not provide permanent answers, though many sciences are “spin offs” from the continuous improvement of the body of philosophical questions available. Most frequently, when we collapse planes of observation in our abstractions, we conceal the unanswered question and the analyst that asked it.

Berkeley assigned this cover-up to his monotheistic deity, while Hegel made us participants in this deity as a collective. Some agree with Schopenhauer, that questions and analysts are an unfortunate mistake of the cosmos, of which we intelligent self-reflective beings are the worst of all Observers. Others conclude with Nietzsche that the cosmic machine is amoral, so that a human’s Machinic Agency must be highly personal in its definition of values. First, we should play a detective game, in search of the lost Observers of semiotic abstraction. The Observer, as we have concealed it through invention, is the orienting system of any exchange-triangulation.

When we say that particles possess free will or exhibit mechanical determinism, in each case we are losing the metadata regarding the orientation and signification of the Observer. The abstraction of observation proceeds by truncating the parameters, cancelling the noise, leading the witness, and selecting the level of observation. The components of the system produced behave like particles under observation, relative to the system as a continuous function. Though wave-like prior to semiotic abstraction, they become particularized through the choices of the Observer and categorized based on level of observation and orientation of the coordinate system. The Observer, as scientist, philosopher, et al., superimposes a dialectical manipulation, over-codes an axiomatization, upon a system that behaves wave-like until it becomes particularized.

Therefore, Machinic Agency emerges out of the suspension between antithetical oppositions, ones that must never resolve. To resolve them would cease the revolutions of the system and its complexity, annihilating the cosmos. Of course, no component can achieve this. The system moves along all the same. Machinic Agency manifests at some system equilibria, neither predicated on the subject by a synthesis of a universal totality, nor an uncaused cause of the soul, but a suspension between systems of rules and their freedom of exchange. Unobserved, the person is not a citizen, a father, a philosopher; these relations particularize an individual as a component of each coordinate system. Unobserved, or without self-reflective intelligent consciousness, the components are free. Free play herein is being: a moment of potentiality, an unrestricted market of wills crashing and churning like so many ocean currents. Taken in aggregate, homogenized through abstraction, we can extrapolate wave-like probabilities of being and becoming. These uncollapsed truth-value densities, like a tropical storm one week prior to landfall, we may then predict from afar.

It is this capacity for prediction and communication that brings together philosophy and science as strange bedfellows. As Schopenhauer observed, there lies a gulf between knowing something innately through practice and knowing something abstractly through generalized rules and reason – the difference between a carpenter cutting down a tree and building an ornamented rocking chair and an engineer studying the product of this endeavor with geometry and physics to mass produce it. The only thing gained by physics, mathematics, predicate logic, and other abstract methods is the ability to communicate and reproduce what an expert practitioner, whether kickboxer, billiard player, or farmer, already gained without any need for them. We can feel some nostalgia here, as he wrote The World as Will and Representation before the major industrialization, modernization, and globalization we know today. Today technology has allowed a form of capitalism in which the applied sciences, general research, and the development of artificial intelligence have made abstract effort its own domain of creativity for its practitioners.

The above metaphor regarding the prediction of hurricanes also provides an excellent example of the goals of abstract reason when taken as a literal fact. Prior to computers, networks, algorithms, GPS, satellites, Doppler systems, and several radars connected globally, the signs of hurricanes were read through the oral traditions of Caribbean islanders and the practical wisdom of elders. Science and technology standardized this wisdom, validated what data to gather, and stored hypotheses, errors, and conclusions in a consistent manner so that, despite geographic distribution, early warnings could become communicable predictions. Due to the methodological rigor of science, these predictions become trusted even between nations.

Science is the ability to standardize what we communicate and how we trust the meaning of its communication, even when we conclude together – “That was obvious! We already knew that!” Philosophy is the art of analyzing the inconsistencies, shortcuts, conflicts of interest, and moral implications of how these questions gain attention, the means of deliberation, and the consequences of the myriad conclusions. Science and Philosophy represent two forms of collective observation, one regarding practical understanding, the other regarding the process of knowledge production.

Observation is axiomatization. It takes knowledge that a master practitioner knows as self-evident through the body and the senses, then generalizes this knowledge in terms of the self-evidence of collective intelligence. The problem of truth-value is a problem of trust. Truth is the dominant information of a trustworthy system of coherent facts, backed by probability, experiments, debate, and sanitized data sets. We must interrogate the role of the observer, and the conflict of interest inherent in a brilliant individual or in the nostalgia of an entire generation, with a mix of skepticism, doubt, and suspicion.

Too much Information Dominance in the hands of a solitary group is certain to divorce precision of truth-value from accuracy of truth-value. Each may become a coherent system, a probable explanation, from identical validated facts. The difference between knowledge as precision and our doubt toward truth as accuracy requires our discipline to never stop questioning, verifying, and cross-checking. There is simply too much incentive to truncate and superimpose when an organization gains Information Dominance. The incentive to protect privilege skews perception in favor of self-preservation. Inquiry therefore needs observer disagreement. However self-evident, reliable, and coherent the ideas, we must doubt their legitimacy. No matter how reputable the intellectual ethics of our specialists are, we must nevertheless make room, as John Stuart Mill said, for “eccentricity” in our theories (OL). Especially when serious enquiries may shape, via selection pressure, the truth-ideas that will gain future Information Dominance, we must maintain suspicion.

The contemporary need to produce ethics worthy of methodical naturalism becomes clear: the philosophy of suspicion can no longer be the isolated pessimism or ranting of the hermit that refuses to exit society. However, the “professionalization” of philosophy has fallen short, a diaspora far from our real needs. While Schopenhauer, Marx, Nietzsche, Foucault, and Baudrillard blazed our trail of suspicion, building methodical suspicion equal to the power of science and technology requires an element of process control. If the world is now a simulation, philosophers must undertake semiotic hacking.

Defining Quantum Liberty as a groundwork for Machinic Agency requires more than a simple re-thinking. The digital age is unlike any other: Empire becomes continuous, no longer bound to a territory, ethnic group, or religion. As philosophers and scientists, our practices are shifting from those of tribal spearmen in the forest to those of the space marines of science fiction. Despite whatever intensity of will we may have, we still need re-tooling. One tool brought by quantum thinking is the ability to rely on the unreal symbolically to derive a probable reality without losing our pragmatist footing. Note the distinction between inserting a symbol with probable significance, such as dark matter, and miraculating an abstraction as a first-cause, such as spirit. We will tolerate symbols of significance precisely to the extent they make experiments possible and theoretical enquiry more robust.

Precisely because of the potential conflict of interest that provides a stable recording surface for theoretical, applied, experimental, and commercialized technological progress – namely, socioeconomic exchange that funds the salaries and budgets of individuals and institutions; and precisely because we can unwittingly be the origin of our own bias, indoctrination, and axiomatization due to the marginal relative incentives of Information Dominance, philosophers must play the role of facilitator, counselor, and psychoanalyst.

Philosophers, in the broad sense of anyone who will take a systems-operative view of sociopolitical production, are those who elucidate and criticize; whereas specialists of science and industry, too far removed in their silos of thought to see the potential synthesis and cross-pollination of ideas, lack any hope of objectivity. As the population of information workers continues to grow, we should seek out the specialists who dare to look over the wall of etiquette erected between components of the American Invasive Ideology.

Question Your Assumptions

“Is there any knowledge in the world which is so certain that no reasonable man could doubt it? This question, which at first sight might not seem difficult, is really one of the most difficult that can be asked. When we have realized the obstacles in the way of a straightforward and confident answer, we shall be well launched on the study of philosophy — for philosophy is merely the attempt to answer such ultimate questions, not carelessly and dogmatically, as we do in ordinary life and even in the sciences, but critically after exploring all that makes such questions puzzling, and after realizing all the vagueness and confusion that underlie our ordinary ideas.”

— Bertrand Russell

Rebellion, Bigotry, and Due Process

To build a legacy that will truly last, we need consistency of self-identification in addition to experimentation. This takes balance. An over-reactive system may adapt quickly, but it will fail to scale in complexity.

A complex system too rigorous in its resistance to change, cutting down strange attractors and emergent organic leadership that opposes its orthodoxy, will find itself maladaptive. Such a system, if made of relatively independent actors, will produce schisms, splits, and offshoots due to the excessive fundamentalism of its self-identification process. Instead, we systems builders want the robustness, strength, and adaptiveness that result from the tension between tradition-rooted, cultivated practices and the spontaneous pursuit of fashion and buzz. This tension is healthy so long as it drives the continuous experimentation engine of the organization, allowing signals to emerge and backpropagating message errors. In other words, we should expect “just enough” sibling rivalry at any level of the organization. We should expect triangulation and escalation of signaling.

Experimentation requires tension, but discovery requires due process – the fair treatment of both sides in a conflict resolution. The essential role of the systems builder is the promotion of self-awareness. A system unable to recognize its own constructs and correct them is unable to change. We see all too often that power need not be taken from someone so long as they believe they are powerless. Similarly, an executive order is rarely an effective mechanism for introducing lasting adaptation, but it can be quite effective as a signal for the system to self-organize against.

A cultivator of adaptive systems does not generate unrealistic and unreasonable new rules in an effort to artificially push the system in a new direction; rather, we make the existing rules of interaction and exchange visible and known equally. Where the rules allow differentiation, we make the logic behind such distinctions known, trusting that an adaptive system will correct itself. We do not employ attrition warfare – one ideological information system against another – instead we maneuver against the broken logic of the enemy system. In this way, organic leadership does not pursue the wholesale destruction of an opposing nation, religion, economic institution, or political party. This is folly, as the diminishing returns of attrition warfare deplete energy, resources, and public support. Those are people on the other side of our wars, after all, and our monstrosities in the pursuit of victory easily turn public support against us.

Instead, wherever there is differentiation the systems builder ensures there is equal access to knowledge of the logic behind apparent unfairness. We encourage open rebellion and take even the least realistic signal seriously, as this is preferable to letting the system stagnate while an insurgency is festering behind closed doors.

To maintain our persistence, to balance tradition and stability with experimentation and responsiveness, we must above all ensure due process and the faith of the public that due process provides fair treatment in any conflict at each level of the system. It is decentralization of local process enforcement that allows systems to experiment. Equal access to escalation of justice will resolve critical reinterpretations. We do not need, as systems-builders within an overarching complex adaptive system, to control the rebellion of progressives and early adopters nor the backward quasi-bigotry of instinctually late adopters. We must only ensure the system is healthy enough to re-inform itself based on the outliers, trends, and signals.

Scaling Like Organic Systems

A System

A system – as we will define it – consumes resources and energy to produce something that is more than the sum of its parts. Not only does it produce value, it does so in a way that sustains its own existence. If we consider Henry Ford’s early Model-T production system that assembled automobiles, the raw materials – rubber, coal, plastic, steel – were meaningless as an unformed heap. Along the way, the “intrinsic” economic value of the raw materials was destroyed; they could no longer be sold at their original price as raw materials. At the time, there would have been no resale value for many of the assembly pieces, because Ford created an entirely new value network and disruptive business model to create a market that could properly assess the value of the non-luxury automobile. Yet the assembly line put these pieces together to create value greater than the sum of its parts.

An example of a relatively simple organic system is a single-celled organism like some species of plankton in our oceans. A plankton lacks sophisticated embryogenesis: there is no differentiation of multiple tissue types, no embedded systems, and no coordination mechanism across cells. Nevertheless, the simple biochemical processes and the internal workings that complete them have continued for billions of years, not only producing the organism’s own self-maintenance but also managing its reproduction. There is a surprisingly large amount of DNA for such a simple, small organism – but why did this legacy of code begin amassing in the first place? Whether we venture to call it “divine” or not, there was certainly a spark of some kind that began an explosion that has yet to collapse back into chaos and the dark.

Even with these simple systems, where we can trace each exchange in the value-transformation process, including materials, structures, energy, and ecological context, the sum total of the Model T and the factory that produced it is more than its parts heaped separately in a pile. Our difficulty in understanding such systems is a problem of multi-fractal scaling. For now, let it suffice to say that making a variable in a system better may not result in a linear change in outcome.


A Complex System

We have major issues understanding how (or worse yet, why) a system consumes resources and energy to produce value in excess of the sum total of the elements and energy amassed in the absence of the system that produced it. This problem is only compounded when we begin embedding specialized sub-systems within an organism. In the example of an automobile factory, we could say that every cell of every person is a system, that each person is a system, and that each distinct functional area, separated by distance, is a system. The accounting and finance “system” and the inventory and assembly “system” must interplay as part of Ford Motors, a system in its own right.

So we can define a complex system as one with embedded sub-systems, such that the observer not only sees that the whole is greater than the sum of its parts, but may also slip into a “confusion of levels” when attempting to manipulate a part of a system to shift the outcome of the whole. Worse yet, confusion of levels can have disastrous, non-linear results that are the opposite of the intended change due to confusion of cause and effect. When sub-systems are embedded within each other, their interrelationships may act on differing scales, either in time or place. So we must be careful when attempting to improve a complex system. We must use empirical process control to chart the change in system outcomes rather than simply optimizing subsystems in isolation.
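A toy model can make the confusion of levels concrete. The sketch below (illustrative rates, not from the text) shows a two-stage pipeline in which locally optimizing the first stage leaves total output unchanged while inventory piles up between the stages:

```python
# Sketch: local optimization vs. system outcome (illustrative rates).
# Whole-system throughput is capped by the slowest stage, so speeding up
# stage 1 in isolation only piles up work-in-progress between the stages.

def pipeline(stage1_rate, stage2_rate, hours=100):
    produced = min(stage1_rate, stage2_rate) * hours    # system output
    queued = max(0, stage1_rate - stage2_rate) * hours  # WIP stuck between stages
    return produced, queued

baseline = pipeline(10, 10)           # -> (1000, 0)
locally_optimized = pipeline(20, 10)  # -> (1000, 1000): same output, huge queue
```

Only measuring the whole – the pair of output and queue – reveals that the “improvement” made nothing better.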


Multi-Fractal Scaling

A fractal is a pattern that repeats self-similarly as it scales. One of the most common fractal scaling patterns in nature is branching. From the trunk of a tree, to its major limbs, to twigs, and finally leaf structures, this fractal scaling pattern enables a lifetime of growth cycles. Leaves can bud purely based on opportunism, in a relatively disposable manner. This is because the tree, as a seed, has all the legacy of generations of trees locked inside it. The tree does not aspire to be “the perfect tree” or assume that it will grow in perfect sunlight, humidity, soil pH, and water availability. The tree does not get angry when a major branch is broken off in a storm or struck by lightning. Instead, its fractal scaling pattern is prepared for intense competition for sunlight in the sky and resources from the ground. The tree’s scaling pattern has risk mitigation “built in” because it grows the same in the middle of a field with frequent rain as it does in a dense forest.
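Branching self-similarity can be sketched as a single recursive rule. The ratio and split count below are hypothetical, chosen only to illustrate that the same rule works unchanged at any scale:

```python
# Sketch: a self-similar branching rule (hypothetical parameters).
def branch(length, depth, ratio=0.7, splits=2):
    """Total 'biomass' of a limb whose children shrink by `ratio`
    each generation and split into `splits` sub-limbs."""
    if depth == 0:
        return length
    return length + splits * branch(length * ratio, depth - 1, ratio, splits)

# Scale-invariance: a tenfold-larger trunk yields exactly tenfold
# the structure, because the rule never references absolute size.
assert abs(branch(10.0, 5) - 10 * branch(1.0, 5)) < 1e-9
```

This is the sense in which risk mitigation is “built in”: the rule carries no assumption about how large the tree will get to grow.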

We see this branching strategy throughout nature, from ferns to human blood vessels. However, an even more effective approach to self-similarity comes from multi-fractal scaling. The ability to adaptively select among multiple repeating or differentiated patterns based on scale requires a different kind of fractal: the time-cycle. It is not just the branches of a tree that result in an environment-agnostic strategy for growth; it is the adaptation to cyclical daily growth, scaled to cyclical annual growth, then scaled to multiple generations of trees. This final step is an important one. Multi-fractal scaling is not only the source of novelty and adaptiveness “built in” for a single tree; it repeats at an even larger scale as a species competes for dominance of a forest. Multi-fractal scaling encourages “just enough” opportunism to enable small-scale experiments that can be forgotten without loss at a greater scale, or thrive when conditions change.


Adaptive Multi-Fractal Scaling

The strength of multi-fractal scaling, from branch to tree to forest, is its total reliance on empirical process control. The legacy code is a confusing jumble of competing messages that a human mind, attempting to “engineer a perfect tree,” would try to simplify and beautify. That legacy code, however, wasn’t written with any intention of crafting a perfect tree. That code was written to create a minimally viable reproductive system. It is built for one thing: continuous experimentation.

Continuous experimentation happens at each level of multi-fractal scaling, risking stakes appropriate to its scale to find asymmetric payoffs. An oak tree risks very little per leaf over the entire course of its life. In a dense forest, however, that continuous experimentation of growing leaves higher and more broadly, opportunistically, based on local returns on investment, can suddenly break through the forest canopy or unexpectedly fill the hole left by another tree’s broken limb. An oak tree does not require centralized control of where leaves will grow or which limbs to invest in. Instead, the legacy of continuous experimentation enables multi-fractal scaling that competes locally and opportunistically.
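The economics of those cheap, local bets can be sketched numerically. The cost, payoff, and win probability below are hypothetical; the point is only that one rare, outsized win can pay for thousands of forgettable failures:

```python
import random

random.seed(7)  # deterministic for the example

def run_experiments(n, cost=1.0, payoff=1000.0, p_win=0.005):
    """Each 'leaf' risks a tiny cost; a rare win (a gap in the canopy)
    returns far more than all the failures combined."""
    wins = sum(1 for _ in range(n) if random.random() < p_win)
    return wins * payoff - n * cost

# Expected value per experiment is 0.005 * 1000 - 1 = +4, so the
# aggregate is positive even though ~99.5% of individual bets fail.
result = run_experiments(10_000)
```

No central planner decides which bets to place; the positive expectation emerges from volume and asymmetry alone.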

Again, we do not need to understand what spark set this fire ablaze, we only need to see that it is still spreading and we are a part of it. Over-simplification of superficial outcomes will lead to poor decisions about inputs. Organic leadership relies on context, structure, and enablement of continuous experimentation. Organic leadership is a “pull” system that relies on scaling patterns for decentralized empirical process control. Artificial “push” systems force requirements and attempt to bandage the inevitable inefficiencies of a non-adaptive system.


A Complex Adaptive System

A complex adaptive system does not merely take in resources and energy to produce and reproduce itself as a unified “whole” that is greater than the sum of its parts. It does not merely embed subsystems with multi-fractal scaling and decentralized control. A complex adaptive system also operates with a continuous experimentation system built into its normal framework of activities. When we make the leap from an oak tree to the human body (or any other mammal on Earth), we can truly appreciate, in the interrelationships of various physiological and socioeconomic systems and sub-systems, just how complicated it is to improve the health of an individual or an entire population. Creating lasting change is not only complicated in terms of finding the correct level and understanding the full ramifications across the entire system; each complex adaptive system is also continuously experimenting and will adjust against such changes based on short-run, local, decentralized opportunism.

To care for a complex adaptive system requires not only an understanding of inputs, processes, and outputs, but also of the multi-fractal scaling of continuous experimentation that maintains long-run viability. When short-run economics are working against long-run viability, it is not sufficient to reward “correct” behavior to counteract short-run opportunism. Instead, we must shift the context of local decisions so that short-run opportunism serves long-run viability.

Accidents Will Happen

Accidents may seem to the observer to be unintentional, but continuous experimentation is built to test the boundaries of success, to ensure that precise empirical process data is also accurate for the needs of viability. In other words, if you’ve ever accidentally tripped and fallen, or accidentally loosened your grip on an egg and dropped it on the kitchen floor, this was a natural element of complex adaptive systems quietly running experiments.

Embedded in our own human code, our sub-systems are all built for continuous experimentation as a method of calibrating precision to accuracy, using multi-fractal scaling on short, long-short, long, and distributed cycles. A short cycle is an immediate reference point for an event, using data held in working memory, and is reactive to immediate changes. A long-short cycle compares current data to immediately recognizable patterns of events, drawing on more embedded memories or conditioned responses that have proven useful over time, even if we assume the event is an occasional outlier. More significant, painful events can skew our “normal” for decades and even be passed to the next generation as part of our genetic code. A long cycle has been stored to our genetic hard drive for future generations. A distributed cycle is a socioeconomic artifact that requires a medium of exchange and may last for centuries.

As humans, our multi-fractal scaling of continuous experimentation results in the creation of complex adaptive socioeconomic systems. Our legacy code drives us toward exchange, tooling, building, and reproduction because the experiments that are in motion are far from complete.

Like our occasional fumbles and falls, our social systems produce results that appear to be accidents with no guilty party, pure coincidences of circumstance, which occur due to failed experiments. Organic leadership harnesses this natural propensity for decentralized opportunistic experimentation by encouraging it but setting boundaries for it, feeding it but ensuring checks-and-balances from opposing interpretations, and guiding it by changing context and opportunity rather than directly managing outcomes.

Agile is Not Strategy

“Agile” Has Gone Mainstream

Somewhere in the fog of misapplied buzzwords and enterprise institutionalization, “Agile” has nevertheless gone mainstream. On the one hand, the core differentiating idea, “Respond faster,” is perhaps too easily applied to virtually anything. On the other hand, due to the widespread adoption of communication, productivity, and collaboration tools meant to enable agile product development, it is easy to feel like everyone can “be Agile” now. From modular home architecture to distributed automobile innovation, there has been a massive push to apply the tenets of the Agile Manifesto to virtually everything.

However, although a utopian level of rapid innovation may seem exciting for some of us, it is safe to say that the majority of people would struggle to cope with the chaos of truly ever-changing market signals. In fact, despite my own energetic love of brainstorming and disruptive new technology, a need to “respond quickly” to everything in life sounds exhausting… if not terrifying. Personally, that is the real appeal of someday owning an autonomous car – one less life-critical thing requiring my constant attention and responsiveness.

Luckily, the economics of novelty do not reward chaotic change for its own sake. It does not always seem this way from the outside due to the explosive growth tech startups seem to enjoy “out of nowhere” once they gain market traction. This is because innovation-based markets are driven by payoffs that are asymmetric and such new markets are often winner-take-all. It can appear, to an outsider, that all innovation is lucrative. “If you build it they will come” is a disastrous approach to establishing an expensive new product that the market cannot effectively evaluate. As the Lean Startup community has shown us repeatedly, building something no one will demand is the greatest risk in technological innovation.

Market-Based View of Agility

Instead of a “Respond faster to everything” definition of agility, I propose a Market-Based View of agility, steeped in the economics of competitive strategy and the maneuver imperative suggested by chaos theory: Responsiveness to signals in a market with imperfect information and imperfect competition.

Oddly enough, this economic definition of agility actually spans the entire spectrum of delivery practices from the much-maligned dystopia of multi-year, big-bang “Waterfall” (with a capital-marketing-“W”) to the utopian culture of innovation and spontaneous sing-alongs promised by “Agile” (with a capital-marketing-“A”).

As Little’s Law shows us, utilization is inversely related to responsiveness. This is nearly a truism in any other industry. Tech and “digital” just have more of a sense of mystery, magic, and malarkey to them. Imagine the parallel for an economist or accountant – liquidity is inversely related to opportunity cost, demand elasticity is inversely related to price volatility. It is imperfect information and ambiguous, non-linear relationships that make things “interesting” in digital product development.
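The utilization-responsiveness tradeoff can be made concrete with the textbook single-server (M/M/1) queueing result, a companion to Little’s Law (L = λW). The unit service rate below is a hypothetical normalization:

```python
# Sketch: why high utilization destroys responsiveness.
# For an M/M/1 queue with service rate mu and arrival rate lam,
# the expected time in system is W = 1 / (mu - lam), which explodes
# as utilization rho = lam / mu approaches 100%.

def time_in_system(utilization, service_rate=1.0):
    arrival_rate = utilization * service_rate
    return 1.0 / (service_rate - arrival_rate)

for rho in (0.5, 0.8, 0.95, 0.99):
    print(f"utilization {rho:.0%}: avg response time {time_in_system(rho):.1f}x")
```

Raising utilization from 50% to 99% multiplies average response time from 2x to 100x the bare service time, which is why “slack time” arguments are queueing arguments, not merely cultural ones.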

The same is true of “agility” as responsiveness. Depending on who is responding and to what they must respond, the correct amount of responsiveness will vary. It is easy for an Agile Coach to die on the sword of “20% time” because 100% (or more) capacity utilization is the entrenched norm for most IT (Information Technology) and PMO (Project Management Office) organizations, whereas digital product innovation is obviously not static. In a company where 50% capacity utilization is the norm, that same coach would recommend a rapid backlog creation workshop – to get everyone “engaged” aka busy again. The tension between these two ideals – creating slack in the process versus keeping a well-groomed backlog – is rarely accounted for explicitly, causing the team to be judged by KPIs that the Product Owner should be accountable for.

When we leave the safe waters of “responsiveness to evolving technology,” preserving responsiveness starts to represent far too much opportunity cost.

Agility for its own sake is impossible to justify because there are many industries that, for now, are safe from disruptive innovation (as properly defined). These industries are typically mature, stabilized by well-established contract law, and their markets maintain low-volatility price equilibrium for relatively homogeneous product offerings. Agility, as responsiveness to signals in a market with imperfect information and imperfect competition, is only justifiable to the extent that we are unable to trust the consistency of signals, information, and competitive forces. A stable marketplace such as crude oil can “hover” near equilibrium despite economic rents, distorted signaling, imperfect information, and imperfect competition precisely because expectations remain relatively stable.

Disruptive innovation, in contrast, begs for high levels of agility. Responsiveness, as economic maneuver, is critical if an established competitor wants to survive in an industry or market that is either undergoing disruption or where it is unknown whether disruption is about to occur. Of course, orienting is essential to agility, so responsiveness to signals is the only way to thrive while your industry is in the process of discovering whether a new product offering or business model is a sustaining innovation or a disruptive one.

Agility is Not a Strategy

Strategy is tradeoffs. It is the choices we make in pursuit of a clear goal. Strategy is who we serve, knowing we do it at the expense of someone or something else, because time, money, talent, and brilliance are all zero-sum. No matter where you give your time and inspiration, or gain them from others, they are taken from somewhere else. Strategy means focusing the daily decisions across all activities toward a cohesive company focus. To this end, there are three generic competitive positions – cost leadership, broad differentiation, or focused differentiation. While agile can play a major role in the fit among a company’s activities for any of the three, agile itself is not a strategy any more than “marketing” could be considered a strategy.


The problem is that some companies treat Agile as a strategy, often believing it will lead to both differentiation and cost competitiveness simultaneously. Because some companies are already far behind the competition in productivity and digital presence, middle managers who lack strategy pursue every proven value offering. The result is bloated products, over-sized teams, unclear value propositions, and cyclical reboots. Without a clear end for agility to serve, a company will pursue agility for its own sake – until the money dries up. Strategy is the art of making choices, choices that preserve a company’s identity and strategic position. It ensures that the investments of time, talent, energy, and money flow in a cohesive direction without being spread too thinly across all options.

Responsiveness to market signals – valuation, demand, threats, and fleeting opportunities – can obviously improve strategic position, but only if our “response” has enough alignment to maintain a position with sustainable competitive advantage. Agility can bolster the success of a strategic vision, but a blanket imperative to “respond faster” in the absence of alignment on how to respond and in what way will only exacerbate the wasted energy, time, talent, and money of an organization.

From the market-based view, higher levels of agility benefit economic maneuverability. If there is a lack of clear strategic intent, shared values, established repertoire, or cadence of synchronization, pressure to respond quickly will be disastrous. If Agile (or DevOps, or LeanUX) were your only strategy, your “golden ticket” to innovation, it may actually make your nightmare far worse. You may find you respond to the market more rapidly, but you do it at higher risk due to volatility. A hair trigger is only as good as the ability to rapidly judge when to pull the trigger – and when not to pull it.

Agility is Operational Effectiveness

Clearly, agility itself is not strategy. Instead, agility is operational effectiveness. As “Agile” continues to spread into domains further from rapidly changing technology products, this becomes increasingly clear. Michael Porter’s “productivity frontier” has stood the test of time in this regard. The productivity frontier represents all possible best practices available to a company given its cost or differentiation position. Because the frontier of better practices, better technology, and better management processes is ever-expanding, it is unsustainable for a company to chase every possibility simultaneously. This is especially noticeable in digital products, where large enterprises that have far more capital to invest in the productivity frontier cannot make sense of how to fit together all their activities, processes, and tools. They increase quality and decrease risk, but their improved responsiveness is purely internal. The more money they spend on better tools, the more difficult it becomes to make sense of how to adopt them or keep the value stream flowing uninterrupted.

Although gains in operational effectiveness through the adoption of better processes and tools are necessary to maintain profitability, they are insufficient. Through aggressive efforts to protect against disruptive innovation, established rivals in every industry have already raised the bar for everyone. How did we get here? As the clear winners for a given best practice emerge, the companies that sell the product gain market traction. As the best way to use a set of tools becomes proven, the consultants that train companies on better processes gain traction. This has the effect of raising the bar for everyone, so that consumers capture all of the increased value-add. We saw this clearly with websites and then mobile apps – millions of dollars were spent to bundle an information product with every existing product, rarely with any strategic consideration of who would capture the value.

Strategy is not operational effectiveness.

Operational effectiveness goes viral. Strategy does not. Strategy is the art of being different through consistent decisions. Strategy is measured in the ability of every constituent to trust the core ideology of a socioeconomic superorganism. As with a professional athlete, agility for its own sake is meaningless. It is the coach’s responsibility to fit the time and energy invested in the generalized strength, speed, and agility of an athlete with sufficient specificity to out-perform a competitor. No amount of expert coaching will help the athlete win if they refuse to pick a sport. In the same way, increased operational effectiveness can increase the strength, speed, and quality of our response to market signals, but we must fit these capabilities together with a level of specificity as a competitor in a zero-sum game.

This fit between general responsiveness and specificity of repertoire is the realm of strategy. If companies did not exist in a context of exchange and finite resources, we could all pursue operational utopia for its own sake. Instead, we see that competitive strategy is the imperative to not only be responsive but also exploit responsiveness in order to out-maneuver competition and market forces. We do not need to be infinitely faster or stronger, we need to use strength and speed to make ourselves unique as a competitor.

Agility Increases Economic Maneuverability

As the rate of technology adoption continues to grow, it breeds distrust and uncertainty. As disruptive innovation continues to cause havoc in various industries, companies are less able to trust that the best practices and value propositions of yesterday are still valid. In fact, while information asymmetry between established rivals was once the greatest risk in competitive strategy, it is now matched by the risk that disruptive technology can enable new entrants that no barrier to entry can stop. Today’s competitive strategy must contend with the possibility that established rivals may not even see the new entrant coming.

Competitiveness is defined by our ability to respond quickly, which requires both speed and focus. The greatest risk in any value-add process today is rapid shifts in information asymmetry between demand and supply. By the time you notice the disruptive innovation, it may be too late to respond effectively. By the time you respond, you cannot be certain anyone will still buy it. Worse yet, the assumptions made at the time of capital investment may be completely incorrect by the time you go to market. Disruptive innovation has the effect of compounding the inelasticity of supply for established rivals, then ignoring it entirely to induce new demand that the established rivals cannot contend with.

Thus, we not only have imperfect competition in the market due to economic rents and significant learning curve disadvantages, we have increasingly imperfect information in the market as well – between established rivals, across the five forces, and between supply and demand. Risk, reward, and volatility during our current pace of technological innovation and adoption have made it necessary to not only respond better and faster, but also prepare for the risk that all our information is wrong.

This is why there are clear limits to the marginal return on the marginal investments that increase responsiveness to signals in a market with imperfect information and imperfect competition. The point of diminishing returns is shaped by the market, its competitive forces, and the necessity of economic maneuverability. We cannot afford to refuse adoption or our margins will erode faster than those of our rivals in the rest of our industry. We cannot afford to adopt everything because the pace of change makes information too difficult to trust. A company must have clear strategy, strong self-identity, and narrow focus in order to compete.

Agile is an Ideology, Not Your Core Ideology

How can we tell, pragmatically, how far to take our efforts to “transform” a company’s responsiveness and maneuverability? Once we recognize the distinction between strategy – the market we will compete in, the customers we pursue, the tradeoffs we make – and operational effectiveness, it becomes apparent that the ideological frameworks of the various stages of agility must fit with the core ideology of the company. Most of the legacy of your company is not the code you failed to maintain; it is the history of how you have self-identified as a system that is somehow distinct from other systems.

The principles of operational ideologies may certainly be appropriated by a company’s core ideology, but they can never provide sufficient identity for sustainable competitive advantage.

Despite the exciting novelty of digital product innovation, the ancient maxim “Know Thyself” remains critically important. Choose who you are, what you “live” for, and stay focused on your goal, no matter how responsive you become.

Keener’s 6 Stages of Competitive Agility

Economic Stages of Competitive Agility

If we explore operational excellence “transformation” using our economic definition of agility – responsiveness to signals in a market with imperfect information and imperfect competition – we see six major stages. You will note that some ideological camps – perhaps even the one you are currently involved with – tend to promise Agile Utopia, but frequently fall short of any realistic definition of it.

Stage 1 – Big Bang Waterfall

Big Bang Waterfall is the post-apocalyptic, dystopian, late-adopter, core-competence-turned-mass-layoff nonsense that every marketer, sales person, or consultant uses to sell the need for Agile (with a capital-marketing-“A”) utopia. Although late adopters have committed serious blunders, even contractors have accumulated enough operational best practices to limit the damage done by “Big Bang” – after all, extremely long-term contracts must be extremely defensible or they are poorly enforceable. That would be overhead few development consultancies could afford. Instead, like your average apartment lease in the city, everyone agrees that a commitment horizon of greater than one year with no clause for early cancellation is foolhardy and unrealistic.

  1. Reliance on top-down internal signals
  2. No concept of holding costs for information and knowledge
  3. Unknown economies of scale are assumed infinite
  4. No control over Work-in-Progress

Stage Gate Control

This is the real “waterfall” – in the real world – that you are likely to see. As it turns out, it is socioeconomically defensible: if a company clearly understands its strategic position, understands the necessary tradeoffs of that position, and that position itself limits its digital product presence, it may make sense to fully outsource digital product design and development. Likewise, if and only if that position makes it economically advantageous to take a risk-intolerant, late-adopter position for new technology, it makes sense to invest only in short bursts to keep up with the minimum expectations of the industry. It would even make sense to copy proven market demand and the value offerings of proven winners in digital, because innovation is very often a winner-takes-all competition. Unfortunately, the majority of companies we see behaving this way do not do it out of strong strategic focus. Many of them built their PMO on this foundation and did not notice the industry change around them.

  1. Reliance on top-down internal signals, with additional “resonators” added
  2. Increased overhead makes holding costs for information and knowledge apparent
  3. Unknown economies of scale are assumed infinite even though batch size is limited
  4. Project duration limits provide control over Work-in-Progress

Cost Center Agility (Agile XP, Kanban, Scrum)

This is the agility the sales reps deliver after promising utopia. I will not name any names. The tool, the process, or the framework can be very lucrative when sold at the right price, while a handful of true believers can reinforce the value proposition on behalf of the sales team for a lifetime. From our economic view, the real benefit of all such systems originates with two shifts: (1) the gradual shortening of planning horizons until they are realistic, suited to the volatility of their market; and (2) the visibility and progressive restriction of work-in-progress, controlling previously ignored holding costs for information.

  1. Tempo of internal signaling is increased
  2. Sequence or prioritization can manipulate holding cost for information and knowledge
  3. Stable teams can collect enough data to reveal actual scale economies
  4. Time-boxed incremental effort provides control over Work-in-Progress
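The economics of restricting work-in-progress follow directly from Little’s Law rearranged: average cycle time equals WIP divided by throughput. The team sizes and rates below are hypothetical:

```python
# Sketch: a Kanban-style WIP limit via Little's Law (illustrative numbers).
# Average Cycle Time = Work-in-Progress / Throughput.

def avg_cycle_time(wip, throughput_per_week):
    return wip / throughput_per_week

# A team finishing 5 items/week with 40 items in flight waits ~8 weeks
# per item; capping WIP at 10 cuts that (and the holding cost of stale
# information) to ~2 weeks, with throughput unchanged.
before = avg_cycle_time(40, 5)  # 8.0 weeks
after = avg_cycle_time(10, 5)   # 2.0 weeks
```

The limit does not make the team faster; it makes every item’s information fresher when it finally ships.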

Continuous Delivery (CI/CD, ATDD, DevOps)

Continuous Delivery is focused on the automation of manual activities. These activities were economically appropriate when short-run and long-run optimum batch sizes were equal (i.e., one project building something that lasts a long useful life). In Cost Center Agility, it became apparent that the economics of short-run manual transactions like testing were very different from the long-run economics, which justify automation. Throughout this process, transaction costs for testing internal and external signals of value must be minimized. The most important shift that occurs due to this reduction of transaction costs is an increasingly rigorous definition of quality and an increasingly frequent check for signals. It is the first time in the pursuit of agility that the organization begins to seriously and methodically consider the possibility “we might be wrong.”

  1. Internal signaling is formalized and shifts toward instantaneous
  2. Decentralized control diminishes transaction cost considerations
  3. Holding cost for information and knowledge is primarily within the planning and design portion of the value stream, making product marketing behavior patterns from Big Bang Waterfall unfit
  4. Product operational data is aggregated, allowing multi-fractal pattern analysis
  5. Canary analysis, A/B deployment, and automated rollout remove Work-in-Progress pressures

Hypothesis Driven

Once we move from the assumption that we, as rationalists, can create and deliver against a “perfect” solution plan, and work instead from the assumption that uncertainty makes it necessary to continuously validate whether our assumptions are correct, we can then ramp up our attention and responsiveness to market signals directly. It is not sufficient to listen to complaints and work “very, very hard” to please people. We must be relentlessly scientific and maintain strategic focus at all times.

  1. Internal signaling is consistent, reliable, and part of organization self-identity
  2. External market signaling replaces extensive planning because product marketing cannot maintain the same pace as technical delivery
  3. Holding cost for information and knowledge is diminished through direct market responsiveness
  4. Aggregated multi-fractal pattern analysis now combines marketing and operational data
  5. Distributed control and single-piece flow reveal and remove value stream inefficiencies
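As a sketch of what “relentlessly scientific” can mean in practice, the following Python snippet runs a standard two-proportion z-test on conversion counts from an A/B experiment to decide whether a hypothesis about a product change is validated or invalidated. The function names and the 0.05 significance threshold are my own illustrative choices.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test: did variant B convert
    differently from variant A? Returns (z, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

def validate_hypothesis(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """True if the market signal is strong enough to validate the change."""
    _, p = two_proportion_z(conv_a, n_a, conv_b, n_b)
    return p < alpha
```

For example, 100 conversions in 1,000 visits against 150 in 1,000 is a clear signal; 100 against 105 is noise, and the honest answer is that the hypothesis remains unvalidated.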

Culture of Innovation

The Culture of Innovation, frequently promised as part of “agile utopia,” is really not necessary for most businesses. This is because, unless you have established a strong strategic position that necessitates continuous innovation – that is, unless you are a technology company – the risk of such novelty is unjustified. Most organizations are wise to encourage “innovation” as a benefit to employees while maintaining tight control over administrative context and strategic fit. Especially for a mature publicly-traded company, this typically implies spinning off the new business unit because it no longer fits the historic risk profile of its stock.

The Limits of Agile as Operational Effectiveness

Most organizations end up “stuck” in stages three and four. Without a clear understanding of the economics of operational effectiveness, this becomes the source of years of frustration for the consultants and coaches working diligently to encourage best practices that have diminishing marginal utility.

The standard of Operational Excellence for the majority of companies will likely fall in the Continuous Delivery stage, occasionally flirting with Hypothesis-Driven design-development (also called Lean UX by some). The Culture of Innovation, when looked at closely, is actually quite extreme. As mentioned above, we really would not want this level of agility in most of our lives or in most of the economy. There is an element of controlled gambling, because the economics of this stage rely upon asymmetric payoffs and induced demand – creating or expanding demand for something no one had asked for. It requires such an intimate knowledge of the market that a company can go beyond extreme responsiveness and attempt to predict or even invent non-existent future demand.

None of these phases (or their ideological camps) are intrinsically correct or incorrect for a company’s digital product delivery. Instead, the validity of the philosophies, processes, and tools at each phase depends on the economics of interaction with the market. That said, the firm that pursues a differentiation strategy dependent on superiority in digital product innovation as a competitive advantage will fail without guiding the economics of responsiveness to market signals. If a company chooses to pursue innovation as a competitive strategy, it will go through these phases to get there.

Orienting is Essential to Agility

Responsiveness and disruptive influence are the cornerstones of agility, because change through continuous experimentation is fundamental to life. Healthy and viable systems maintain their complexity far from equilibrium, relentlessly fighting collapse and death. For a system “poised” on the brink of chaos, an obvious business definition of agility emerges:

Responsiveness to signals in a market with imperfect information and imperfect competition.

This context necessitates process control that keeps identity and novelty in constant tension – even against our most brilliant ideas. Thus, our tactical principles for general preparedness, quick orientation, and powerful responsiveness will be rooted in the need to orient faster than the enemy system, our ideological competition. Only the working product of our efforts can provide a pragmatic judgment of the value we have created, so the ultimate measure of our success as a Disruptive Influence is the actual change in behavior we have caused.

Because individuals and interactions are inherently complex, adaptive, and difficult to predict in the reality of socioeconomic competition, we value knowing them directly, studying them and interpreting their position ourselves. We value this over relying on their predictability, likelihood of adherence to an agreed-upon process, or correct use of the best possible tool for any given job. Although we assume processes and tools taken at face value will deceive us into a false sense of stability, we also recognize that individuals and interactions cannot always be taken at face value either.

Because responsiveness, both in decisiveness of action in an unexpected situation and as adaptation over a long-term investment horizon, will consistently be rewarded with asymmetric payoffs, we can only trust a plan to the extent it includes contingencies, delays commitment, and distributes control to the individual with the best understanding of the situation at the time a decision must be made.

Because compromise is the inevitable and unsavory outcome of “contract” negotiation, while creative endeavors in contradistinction rely on the energy of tension, cognitive dissonance, intra-organizational paradoxes, and conflicting interpretations, we invest our time and effort in social exchanges while delaying formalization. A contract relies on an external locus of control for its power and validity, whereas we must prioritize a social and socioeconomic view of the complex system we hope to lead into adaptation.

Because a socioeconomic “factor of production” is defined by its output, evaluated on how much more value “the whole” can add in excess of its “parts,” and because digital products are continuously created and maintained but never mass-produced, we take the tangible product of our endeavors as the only valid measure of its worth.  However good the product design looks on paper, however thoroughly the future state is documented, only exchange in the marketplace can determine the economic value of the product we have actually created.

Drawing the Line Between PO and BA

The Scrum Business Analyst

I have heard more than once “There is no BA in Scrum.” Imagine how your BAs feel when a transformation starts!  At best, they are uncertain what their role ought to be. At worst, it is made clear by everyone else in the process that the BA is no longer needed or wanted.

The irony, for an agile coach viewing this as an outsider, is that numerous individuals throughout the value stream – individuals who are also struggling to cope with the shifting sands of transformation – frequently report that mistakes, lack of prioritization, failure to clear dependencies, and miscommunication are due to “being too busy.”

Obviously, just from this “too busy” problem, there are two important things the BA ought to do as an active member of a Scrum Team in a scaled environment:

  1. Act in a WIP-clearing capacity to the extent their T-shaped skills allow.  To whatever extent they do not have T-shaped skills, the moment they are not clear on how to utilize their time is the perfect opportunity to develop them.
  2. Capture the very broad “reminders of a conversation” about a story that, in a large enterprise, occur across a larger number of individuals, over a longer time period, and in more geographically distributed locations than “core scrum” implies.

Roles and Accountability

Now we can draw the line between the Product Owner and the Business Analyst.

The Product Owner is accountable for decomposing an Epic or expressing a single enhancement as User Stories.  The Product Owner creates a Story card in JIRA for this initial Story list that includes a JIRA Summary and the User Story in classic format:

As a {user persona} I want {action} so that {expected value to the user}.

This is an expression of “Commander’s Intent” and represents why the story is being developed and who cares whether or not it is developed.  Thus, the User Story is an expression of product strategy, and represents trade-off choices and prioritization.  The decision to expend finite and expiring resources – time, energy, money, and talent – on one product change versus another is the most critical accountability of the Product Owner.
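As an illustration, here is a minimal Python helper that assembles the initial Story card content – the JIRA Summary plus the User Story in classic format – into the generic JSON shape JIRA issue creation accepts. The project key, persona, and helper name are hypothetical; adjust the field layout to your own JIRA configuration.

```python
def story_payload(project_key, summary, persona, action, value):
    """Assemble a JIRA Story card body: Summary plus the User Story in
    classic format. Field layout follows JIRA's generic issue schema;
    the project key and persona values are placeholders."""
    user_story = f"As a {persona} I want {action} so that {value}."
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Story"},
            "summary": summary,
            "description": user_story,
        }
    }
```

Keeping this initial card deliberately sparse preserves the division of labor described below: the Product Owner supplies the intent, and the Business Analyst later enriches the Description without overwriting it.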

Although the what and how are negotiable, the intention of the Product Owner serves as a litmus test for all subsequent decisions.  The what and how are the realm of operational effectiveness rather than strategy.  This realm includes the framework of economic decision making and the processes, practices, and tools that streamline communication and align the strategic direction of a distributed control system.

The Business Analyst uses the Description to succinctly express the what and how that has already been determined so that no context is lost in subsequent decisions.  The what and how remain negotiable to the extent these better serve the “Commander’s Intent” of the User Story.

In an analog Scrum board, there is typically an agreement on “front of the card” and “back of the card” content that serves as the “reminder of a conversation” for the team.  In a scaled environment relying on a digital board like JIRA, the Summary and Description fields serve a similar purpose.  As the number of individuals contributing to the value stream increases, the need to detail the conversations that have already occurred increases as well.

In the process of detailing each Story Description, it will often be apparent – due to test data or testing scenario coverage – that a Story ought to be split into two or more stories.  The Business Analyst completes this activity and is accountable for communicating the split to the Product Owner.

Stories may also be further split during Backlog Refinement or Sprint Planning based on additional insights from the team.  Attendees should collaboratively decide who will capture this decomposition within the tool, but the Product Owner remains accountable for prioritization decisions if the split affects priority.

Purpose of the Story Description

So, to meaningfully define the role of the Business Analyst, we need an understanding of what value is created if one individual “owns” capturing the elements of a Story Description as the number of these predetermined elements continues to grow. At scale, where the team cannot economically interact with every other value-add activity in the value stream, the purpose of the Description is a succinct expression of all value-add activities and decisions that have influenced the User Story prior to development. While we want to express these in the fewest words possible, and work toward distributed control of decisions, we do not want previous insights “hidden” unnecessarily from the Scrum Team.

Several important activities have likely occurred prior to our Sprint:

  1. Business decisions fundamental to the economics of our interaction with the customer.
  2. Funding based on an overarching strategic initiative.
  3. Customer research and analysis of product metrics.
  4. User Persona definition and Empathy Mapping.
  5. UX Proofs of Concept and/or A/B Testing.
  6. Stakeholder meetings.
  7. Success Metrics defined.
  8. Technical dependencies fulfilled (such as a new or updated web service API).
  9. User Story decomposition.
  10. Other Stories already developed related to the feature.

Thus, many details needed “downstream” should be easily expressed in advance of the Sprint:

  1. Why are we building this story?
  2. Who is the User?
  3. How is this User unique in our Product (i.e. relate persona to an account type)?
  4. What Test Data will need to be requested to test the story?
  5. What steps does the User follow to obtain the value of the story?
  6. What will the User see when they finish the story?
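These downstream details can be captured in a simple Description template that answers the six questions above in a fixed order. The following Python sketch is illustrative only – the section labels and field names are my own assumptions, and each team should adapt them to its own working agreement.

```python
# Hypothetical Description skeleton covering the six questions above:
# why, who, how the user is unique, test data, steps, and the done-state.
DESCRIPTION_TEMPLATE = """\
Why: {why}
User: {persona} ({account_type})
Test Data: {test_data}
Steps: {steps}
Done-state: {done_state}
"""

def build_description(why, persona, account_type, test_data, steps, done_state):
    """Render a Story Description so no pre-Sprint context is lost."""
    return DESCRIPTION_TEMPLATE.format(
        why=why, persona=persona, account_type=account_type,
        test_data=test_data, steps=" -> ".join(steps), done_state=done_state)
```

A fixed template makes gaps visible: an empty “Test Data” line before Sprint Planning is a prompt for the Business Analyst, not a surprise for the developer mid-Sprint.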

7 Simple Steps to Agile Transformation

I am never sure how to answer someone who says “What is agile?” After all, my mind is racing so fast that my ultimate, simple explanation – “A way to innovate and deliver products more effectively” – leaves me wishing I could kidnap people for a 3-day course on lean-agile and continuous delivery.

What I can simplify (for someone who has a basic understanding of agile) are the steps in a true transformation, so that they can let me know where they are in the process.  Note that I have ordered these quite logically, while the real world is full of resistance, grey area, and co-evolution.

  1. Establish a cadence of synchronization (typically, this is Scrum). Hypothesize the results of every change ahead of making it, test it, and validate or invalidate the hypothesis.  Inspect and adapt.
  2. Change from a human resource allocation mindset to a well-formed team mindset.
  3. Change from a finite project mindset to a living product mindset.
  4. Sell who you are, not what you plan to have on a shelf in X months.
  5. Change from a P&L and ROI mindset to an Economic Value Flow across the organization mindset (including upgrades in equipment, training for knowledge workers, benefits that raise barriers to exit).
  6. Change from centralized (top-down) market research, innovation planning, and risk assessment to distributed control over prudent risks.  This requires a framework for self-validation of discoveries, exploitation of opportunities, and communication of results.
  7. Change from performance tracking and formal leadership to systems optimization and organic leadership.

Hit Contact if you’d like to discuss your scenario or any of these points – I’m always available.