Axioms of Quantum Liberty

Many philosophers, teachers, coaches, and priests attempt to hide that their arguments reach a conclusion they held from the beginning. Inspired by the scientific method, like any father who gains a moment of insight from the simple wisdom of his child, we as philosophers should be forthcoming at the outset regarding our axiomatics. Then we can all join this game on equal footing and with adequate forewarning, knowing the table stakes and the half-time accoutrements up front; or feel free not to play.


Axiom 1 – Metaphysical Agnosticism

We cannot know what is “behind” the world of physicality, but one paradigm has proven most valuable for information discovery. Methodical Naturalism, in practice, is the assumption that there is nothing metaphysical: there always exists a sufficient reason for any idea, explained through causation, physicality, and semiotics. Therefore, we will treat the mind-matter continuum as one substance experienced two ways.

We cannot guarantee the origin of our perceptual reality prior to our participation in understanding it. The ubiquitous consistency of truth-value has attracted many explanations throughout the history of philosophy:

  1. Categories of the mental machine and its physical method of processing the world “behind” our experiences (without color, light, texture, or space-time)
  2. Equilibrium truth-values already socially engendered, becoming quietly ready for disruption (example: the Copernican revolution)
  3. The world is exactly as we perceive it “under” the light and color (naïve realism) or close enough that technology can supplement the remaining perception (speculative realism)
  4. We are in a video game or a long dream.

The simplest explanation lies in a refusal to get caught up in the distinctions between any of these possibilities. These distinctions always lie in conceptual inconsistencies rather than genuine experience. Whatever the cosmos is, we play a consistent game with rules that we may discover through diligence, discipline, and a dedication to proper questioning. Moreover, wherever we find a metaphysical explanation, we are prudent to approach its purveyors with a cautious suspicion of the power their system of belief seeks in the world. The bulk of metaphysical explanations are not only intellectually lazy but also party to a history of ideological domination and abuse. We will seek out and exploit axiomatics of the economic game of consciousness while maintaining this suspicion, even doubting ourselves.

Axiom 2 – Pragmatist Epistenomics

To the arborescent mind of the mathematician, physicist, technologist, or philosophical logician, the limits of valuation-signification are far from unsettling. Instead, the analyst considers valuation of hypothesis, error, and backpropagation the basis of an information-rich cosmos. The quantum of pragmatist Epistenomics is the encoded truth-idea. This code is an information commodity, always produced by a system. The truth-idea gains market dominance through exchange; there is no unexchanged truth. The equilibrium price of an idea is its marginal cost in believer actions.

Axiom 3 – Fractal Cascade Ontology

Existence as we can perceive it, as endless revolutions of becoming, constantly produces self-similarity. When we observe our universe under the assumption that we distort all perception by the methods of the mind, we create valuable new paths of hypothesis. By looking for patterns, fractals, and ratios we uncover what others miss. Smashing ideas together to see what feels theoretically elegant is a reasonable path for brainstorming.

Axiom 4 – Machinic Operability

When we attempt to confirm the hypotheses we make, it is prudent to do so under the assumption that the cosmos is pure information-physicality, and to experiment with due diligence, aware that we may cause outcomes that exceed our control. We thus treat any multi-actor economy as capable of producing Quantum Liberty, in which Machinic Agency at one plane and apparent Machinic Operability at its Quantum are co-determinate and freely exchanged throughout.

We will treat liberty of will-to-power, exchange under representation, as an emergent property of the cosmic system. For any given particularized Level of Observation, we will find agency generated out of sufficient freedom for choice and sufficient determinism for responsibility. Moreover, we will treat this as a category of mind that does not undermine humanistic free will, but treats it as a sociopolitical construct that requires stability of semiotic laws. This further stabilizes systems that are orders of magnitude above or below the “peer” phase space.

Axiom 5 – Existential Psychodynamics

The distinctions of the mind-body duality are purely existential processes: a problem of focus rather than ghosts, caricatures, and dreams. We build axiomatized arborescence where we focus our particularized observations; all tangentially unobserved probabilities spread like intertwined rhizomes “just behind” intelligent consciousness.

Axiom 6 – The Will-to-Power

Because we cannot directly observe a Level of Observation lower than quantum mechanics, we will not discuss the substance of the cosmos. However, its clear tendency to generate creativity lies in three axiomatic nodes of Continuous Experimentation: representation, expansion, and acceleration. We will discuss Will-to-Power and its ramifications for our Fractal Ontology based upon this foundation. There is no substance “under” the quantum waves and particles, only the leaning, propensity, or vector of expansion. This emerges as first principle and categorical imperative: not only to reproduce, but to become more. To the extent this is a probable motion, rather than a substance, we will take competing notions of fundamental substance – Truth, Spirit, Capital, Energy, and Libido – and treat them as facets of Will-to-Power, as if the many heads of a single monster, all incapable of speaking the same language.

Axiom 7 – The Labor Physics of Information

The Information Age and the Postmodern Era have come to fruition out of a century of Quantum Physics and the technological revolution it spawned from its axiomatics. The task remaining for us as philosophers lies in backpropagating the pragmatist Epistenomics implied by quantum mechanics and its paradigm of waves and particles. Moreover, this ripple effect will likely go “the long way around” to fully cross-pollinate with our other sciences. Therefore, our goal is to merge quantum mechanics with economics to better understand the needs of a post-singularity humanity. Every philosopher up to Schopenhauer, William James, and Bertrand Russell believed that the law of contradiction was a priori infallible. Quantum Liberty, in contrast, must experiment with the problem of superposition.

Quantum Liberty

“Some of us should venture to embark on a synthesis of facts and theories, albeit with second-hand and incomplete knowledge of some of them—and at the risk of making fools of ourselves.”

– Erwin Schrödinger

Groundwork for an Ethics of Machinic Agency

While freedom in action is predicated upon equalities that never manifest empirically, instead following predictable laws, we can nevertheless build a case for quantum liberty. Even if the physics of lawful activity, determined within a probability density of particle-laborers, suggests we are not free, we have an innate sense of responsibility for consequences. This responsibility in ourselves and others is Agency. The paradox of Agency is that it requires us to believe in free will and determinism simultaneously. However, this is only a paradox when we apply an abstraction that places our ideas on a single plane. Without this confusion of levels, the system of freedom and determination becomes clear.

While freedom is a homogenous lack of hindrance predicated upon categorical non-individuality, liberty is the emergent process of relative socioeconomic non-hindrance catalyzed by the sociopolitical power-laws that maintain the stability of non-equilibrium exchange. Quantum Liberty means that cosmic expansion ripples into a system of inequalities that, through capitalistic exchange, generates the rules that make us free. Are we free to fly? How silly – of course not – but the laws of physics liberate us to the extent we exploit some superpositions against others. Liberty is, in practice, the exploitation, at one Level of Observation, of the power-laws and constants we find true at other levels.

Our emotional sentiments toward the freedom-signal and the liberty-signal stir some rebellion against this truth-idea; but, as Marx and Engels said about so many platforms of the Communist party – anti-property, anti-marriage, anti-nationalism – we do not freely bring these abstract commodities, these wave functions of justice, independently into being. A crowd of assemblages, possessing capital-mass and Information Dominance, leads and controls these concepts. We concern ourselves little with DNA and hormones as laborers of the human body, even less with photons and electrons as laborers of the human cosmos, and only recently have we concerned the middle class with the citizen as laborer of a socioeconomic system. This is precisely why liberty is an anti-freedom, despite a tradition in philosophy whose authors express it in fluffy, optimistic, utopian crescendos. More specifically, the hegemonic majority, within one normalized standard deviation of the liberated “average” citizen, enjoys far more freedom than the “long tails” of the sixth sigma, the asymptotic minorities, the socially dead.

Before we conclude in favor of revolution on the one hand, or fascism on the other, let us understand what freedom through rules, and therefore quantum liberty, implies for reality and human life. It is not simply the axiomatics of exchange that make “free” markets stabilize around their electromagnetic equilibriums. Equal freedom of exchange does not create Liberty on its own. Nor can we justify totalitarian inequalities or anarchistic freedom based on the differentiation of vectors. The individual narrative of the egoist, as Max Stirner showed, always comes at the expense of others. Even if we were all equal in our labor upon a common claim to the resources of Earth, liberty is far from individual. The problem of liberty lies in the sphere of morality, and in the consequences that arise when all things freely exchange in accordance with identical rules. As Bertrand Russell described, coherence is not sufficient evidence that our beliefs are true: multiple coherent systems of belief accurately using the same data are possible, yet these systems are nevertheless incompatible with one another, implying only one is correct or all are incomplete (PP). Likewise, if Liberty is “Freedom maximized by Rules,” we will quickly see that many coherent axiomatic systems of liberty result in different social consequences in practice. We should pay special attention, though Baudrillard analyzes this in hyperbole and pessimistic tones, to the realization that our systems of exchange are so ubiquitously managed that even the absence of a rule is a judgement regarding the morality of that rule.

We shape the plane of socioeconomic inequalities primarily not by rules “among equals” but by encoded laws so far removed from the reality of their enforcement as to encourage ignorance or passive acceptance. It seems the Universe and the State have this in common. Few question the validity of gravity or the stop sign once their context socializes them to accept such external control. The apparent power-law constants of molar aggregation and the emergent anti-entropy of the quantum level constantly expand. The rules are the pipeline that secures the flow of liberty, but the original free play becomes something distinct in the resulting markets of exchange. We find this system beholden to coherence in motion rather than identity. The rules at the level we can predict are unequal to our personal level of observation. The continuous functions of Information Dominance, non-exchangeable in any scenario, are the rules that liberate us for exchange at our own level of singularity.

State of Nature philosophy puts the information equality of abstract citizenship precisely this way – the king and the peasant die equally well on the guillotine. In more recent media, everyone becomes equal with a gun in their mouth. What a simulacrum indeed! The exchange-value of human vitalism, the cosmic citizen-as-particle, meets its final market correction only in contrast to the State of War. Locke justifies slavery based on prisoners of abstract war, involuntary servitude limited to byproducts of The War Machine (STG), while Deleuze & Guattari poignantly speak on behalf of postmodern capitalism-citizens, that we are all slaves, slaves of slaves, bound to our facticity of death (AO).

Irreducibility is a pattern superimposed by the human mind, which in observation of gradation consistently loses track of relevance. It is far easier (and lazier) to establish dogmatic planes of signification. Mastery, whether a painter or a chemist, lies in the practice of layering gradations to create coherence. To the rest of us, the “irreducible” components of any system behave in a wave-like manner, a great ocean we barely know. With sufficient opportunities, when given the “breathing room” of sufficient space-time within the phase of existential instantiation, the components behave like particles. These waves crash onto the shore of our consciousness, impressing us and moving our sands. The wavelike components of reality thrust upon and collapse onto the beach of our mind as so many particularized objects – particles “in principle” only, because their irreducibility is as much a fiction of the excitable mind as the further reducibility on another plane of observation. Creating a continuous reduction leads to confusion of levels, because abstraction treats the ocean, its motion, and the crashing waves as one sign. Observing planes, like gradation between primary colors, confuses the observer unless they may jump from one order of magnitude to another, sweeping the fuzzy vertical under the epistemological rug.

The trouble with any system of coordinates is the implicit role of a coordinating system that controls the orientation of the coordinate system. For instance, while a fighter pilot during a dog fight works to complete complex maneuvers against the enemy, applying a fluidity of spatiotemporal orientation to generate and exploit opportunities, we must recognize that the orientation of the coordinate system, the fighter jet, orients under the control of a coordinating system, the pilot. Changing the orientation of a system of coordinates may change nothing about the components of the system, but it shifts the observation available and opens new planes of significance we previously overlooked due to gradation errors.

These problems of conception reveal a first principle: Quantum Liberty is the skewed emergence of the probability density of component particle-becoming. The orientation of the Observer skews the concretized outputs of each sociopolitical production system. We can begin with a soft subjectivist assumption: most components have an incomplete understanding of their system, and some components have an orientation that produces Information Dominance against other components and other systems. Therefore, we should begin any analysis with a healthy scientific skepticism of the Observer – especially of ourselves.

This analysis spans all of philosophy. First, the question of what cosmic laws may tell us about our own laws. Second, the question of what cosmic freedom may tell us about social, economic, and political freedom. Philosophy does not provide permanent answers, though many sciences are “spin offs” from the continuous improvement of the body of philosophical questions available. Most frequently, when we collapse planes of observation in our abstractions, we conceal the unanswered question and the analyst that asked it.

Berkeley assigned this cover-up to his monotheistic deity, while Hegel made us participants in this deity as a collective. Some agree with Schopenhauer, that questions and analysts are an unfortunate mistake of the cosmos, of which we intelligent self-reflective beings are the worst of all Observers. Others conclude with Nietzsche that the cosmic machine is amoral, so that a human’s Machinic Agency must be highly personal in its definition of values. First, we should play a detective game, in search of the lost Observers of semiotic abstraction. The Observer, as we have concealed it through invention, is the orientating system of any exchange-triangulation.

When we say that particles possess free will or exhibit mechanical determinism, in each case we are losing the metadata regarding the orientation and signification of the Observer. The abstraction of observation proceeds by truncating the parameters, cancelling the noise, leading the witness, and selecting the level of observation. The components of the system produced behave like particles under observation, relative to the system as a continuous function. Though wave-like prior to semiotic abstraction, they become particularized through the choices of the Observer and categorized based on level of observation and orientation of the coordinate system. The Observer, as scientist, philosopher, et al., superimposes a dialectical manipulation, over-codes an axiomatization, of a system that behaves wave-like until it becomes particularized.

Therefore, Machinic Agency emerges out of the suspension between antithetical oppositions, ones that must never resolve. To resolve them would cease the revolutions of the system and its complexity, annihilating the cosmos. Of course, no component can achieve this. The system moves along all the same. Machinic Agency manifests at some system equilibria, neither predicated on the subject by a synthesis of a universal totality, nor an uncaused cause of the soul, but a suspension between systems of rules and their freedom of exchange. Unobserved, the person is not a citizen, a father, a philosopher; these relations particularize an individual as a component of each coordinate system. Unobserved, or without self-reflective intelligent consciousness, the components are free. Free play herein is being, a moment of potentiality, an unrestricted market of wills crashing and churning like so many ocean currents. Taken in aggregate, homogenized through abstraction, we can extrapolate wave-like probabilities of being and becoming. These uncollapsed truth-value densities, like a tropical storm one week prior to landfall, we may then predict from afar.

It is this capacity for prediction and communication that brings together philosophy and science as strange bedfellows. As Schopenhauer observed, there lies a gulf between knowing something innately through practice and knowing something abstractly through generalized rules and reason, such as the difference between a carpenter cutting down a tree to build an ornamented rocking chair and an engineer studying the product of this endeavor with geometry and physics to mass produce it. The only thing gained by physics, mathematics, predicate logic, and other abstract methods is the ability to communicate and reproduce what an expert practitioner, whether kickboxer, billiard player, or farmer, already gained without any need of them. We can feel some nostalgia here, as he wrote The World as Will and Representation before the major industrialization, modernization, and globalization we know today. Today technology has allowed a form of capitalism in which the applied sciences, general research, and the development of artificial intelligence have made abstract effort its own domain of creativity for its practitioners.

The above metaphor regarding the prediction of hurricanes also provides an excellent example of the goals of abstract reason when taken as a literal fact. Prior to computers, networks, algorithms, GPS, satellites, Doppler systems, and several radars connected globally, the oral traditions of Caribbean islanders and the practical wisdom of elders read the signs of hurricanes. Science and technology standardized this wisdom, validated what data to gather, and stored hypotheses, errors, and conclusions in a consistent manner so that, despite geographic distribution, early warnings could become communicable predictions. Due to the methodological rigor of science, these predictions became trusted even between nations.

Science is the ability to standardize what we communicate and how we trust the meaning of its communication, even when we conclude together – “That was obvious! We already knew that!” Philosophy is the art of analyzing the inconsistencies, shortcuts, conflicts of interest, and moral implications of how these questions gain attention, the means of deliberation, and the consequences of the myriad of conclusions. Science and Philosophy represent two forms of collective observation, one regarding practical understanding the other regarding the process of knowledge production.

Observation is axiomatization. It takes knowledge that a master practitioner knows as self-evident through the body and the senses, then generalizes this knowledge in terms of the self-evidence of collective intelligence. The problem of truth-value is a problem of trust. Truth is the dominant information of a trustworthy system of coherent facts, backed by probability, experiments, debate, and sanitized data sets. We must interrogate the role of the observer, and the conflict of interest inherent in a brilliant individual or the nostalgia of an entire generation, with a mix of skepticism, doubt, and suspicion.

Too much Information Dominance in the hands of a solitary group is certain to divorce precision of truth-value from accuracy of truth-value. Each may become a coherent system, a probable explanation, built from identical validated facts. The difference between knowledge as precision and our doubt toward truth as accuracy requires our discipline to never stop questioning, verifying, and cross-checking. There is simply too much incentive to truncate and superimpose when an organization gains Information Dominance. The incentive to protect privilege skews perception in favor of self-preservation. Inquiry therefore needs observer disagreement. However self-evident, reliable, and coherent the ideas, we must doubt their legitimacy. No matter how reputable the intellectual ethics of our specialists are, we must nevertheless make room, as John Stuart Mill said, for “eccentricity” in our theories (OL). Especially when serious enquiries may shape, via selection pressure, the truth-ideas that will gain future Information Dominance, we must maintain suspicion.

The contemporary need to produce ethics worthy of methodical naturalism becomes clear: the philosophy of suspicion can no longer be the isolated pessimism or ranting of the hermit who refuses to exit society. However, the “professionalization” of philosophy has fallen short, a diaspora far from our real needs. While Schopenhauer, Marx, Nietzsche, Foucault, and Baudrillard blazed our trail of suspicion, building methodical suspicion equal to the power of science and technology requires an element of process control. If the world is now a simulation, philosophers must undertake semiotic hacking.

Defining Quantum Liberty as a groundwork for Machinic Agency requires more than a simple re-thinking. The digital age is unlike any other: in it, Empire becomes continuous, no longer tied to a territory, an ethnicity, or a religious group. As philosophers and scientists, our practices are shifting from those of tribal spearmen in the forest to those of the space marines of science fiction. Whatever intensity of will we may have, we still need re-tooling. One tool brought by quantum thinking is the ability to rely on the unreal symbolically to derive a probable reality without losing our pragmatist footing. Note the distinction between inserting a symbol with probable significance, such as dark matter, and miraculating an abstraction as a first-cause, such as spirit. We will tolerate symbols of significance precisely to the extent they make experiments possible and theoretical enquiry more robust.

Precisely because of the potential conflict of interest that provides a stable recording surface for theoretical, applied, experimental, and commercialized technological progress – namely, socioeconomic exchange that funds the salaries and budgets of individuals and institutions; and precisely because we can unwittingly be the origin of our own bias, indoctrination, and axiomatization due to the marginal relative incentives of Information Dominance, philosophers must play the role of facilitator, counselor, and psychoanalyst.

Philosophers, in the broad sense of anyone who will take a system-operative view of sociopolitical production, are those who elucidate and criticize; whereas specialists of science and industry become too far removed in their silos of thought to see the potential synthesis and cross-pollination of ideas, and lack any hope of objectivity. As the population of information workers continues to grow, we should seek out the specialists who dare to look over the wall of etiquette erected between components of the American Invasive Ideology.

Karl Marx vs Dan Pink

Somewhere between Karl Marx and Dan Pink, we see a loss of “code coverage” in the behavioral economics of the knowledge worker. On the one hand, postmodern capitalism has largely mitigated the strength of the Marxist surplus labor value argument. Everything is now becoming so progressively commoditized that capitalism has turned rhizomatically back toward shock, grit, and authenticity as a customizable product. Meanwhile, the intrinsic motivation to create exhibited by the knowledge worker leads Pink to conclude that we only need to provide financial sustainability that is roughly triple the poverty line and money ceases to play a motivational role.

Between the two, we see the same problem that has always plagued the time-value of money and the surplus value produced. Some institutions, housing well-established elite knowledge professions, understand that this remuneration is not monetary. It is not cash that miraculates capital; it is equity, patents, and partial ownership of economic rents. Only a very small number of knowledge workers can trace their right to the surplus value of information-capital; the few who capture it own (or partially own) their company.

This post is not a critique of capitalism and the perplexing behavioral economics of surplus value; socialism has already made immense retributive efforts in that regard, and that belongs to a separate debate.

No, what’s missing from Marx and Pink is the mediocre middle – the knowledge workers with untraceable but recognized value-add that accumulates as surplus despite the reduction in the duration of hands-on time. The salaries of middle America typically purchase the surplus value of responsiveness, not the active time spent producing new value.

Marx is noticeably, and rightfully, outraged by the coal miner who is all but whipped to chisel and hammer on a death march 16 hours per day, while Pink romanticizes the owner-artist building Apache and Linux for free. In between them are the billions of postmodern knowledge workers who produce value through their availability for 8 hours, not through 8 hours of economic productivity.
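The accounting mismatch can be made concrete with a toy calculation. Every number below is a hypothetical assumption chosen only for illustration; the point is that the same salary implies two very different hourly prices depending on whether it buys production or availability:

```python
# Hypothetical figures, for illustration only: a salaried knowledge worker
# is paid for an 8-hour day of availability, of which only a fraction is
# active, hands-on production of new value.
DAILY_SALARY = 400.0             # dollars per day (assumed)
HOURS_AVAILABLE = 8.0            # the availability the salary purchases
HOURS_ACTIVELY_PRODUCING = 3.0   # assumed hands-on production time

# The same salary priced under each accounting model
rate_per_productive_hour = DAILY_SALARY / HOURS_ACTIVELY_PRODUCING
rate_per_available_hour = DAILY_SALARY / HOURS_AVAILABLE

print(round(rate_per_productive_hour, 2))  # 133.33 if only production counted
print(round(rate_per_available_hour, 2))   # 50.0: what is actually priced
```

Equating all labor to hours and dollars forces us to pick one of these denominators, and for the middle of the knowledge economy neither one describes what is actually being purchased.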

Our universal obsession with equating all labor back to hours and dollars is the problem; we haven’t even begun asking the right questions, despite all the passionate fundamentalist rhetoric we hear based on our incomplete assessment of the situation.

Rebellion, Bigotry, and Due Process

To build a legacy that will truly last, we need consistency of self-identification in addition to experimentation. This takes balance. An over-reactive system may adapt quickly, but it will fail to scale in complexity.

A complex system too rigid in its resistance to change, cutting down the strange attractors and emergent organic leadership that oppose its orthodoxy, will find itself maladaptive. Such a system, if made of relatively independent actors, will produce schisms, splits, and offshoots due to the excessive fundamentalism of its self-identification process. Instead, we systems builders want the robustness, strength, and adaptiveness that result from the tension between tradition-rooted cultivated practices and the spontaneous pursuit of fashion and buzz. This tension is healthy so long as it drives the continuous experimentation engine of the organization, allowing signals to emerge and backpropagating message errors. In other words, we should expect “just enough” sibling rivalry at any level of the organization. We should expect triangulation and escalation of signaling.

Experimentation requires tension, but discovery requires due process – the fair treatment of both sides in a conflict resolution. The essential role of the systems builder is the promotion of self-awareness. A system unable to recognize its own constructs and correct them is unable to change. We see all too often that power need not be taken from someone so long as they believe they are powerless. Similarly, an executive order is rarely an effective mechanism for introducing lasting adaptation, but it can be quite effective as a signal for the system to self-organize against.

A cultivator of adaptive systems does not generate unrealistic and unreasonable new rules in an effort to artificially push the system in a new direction; rather, we make the existing rules of interaction and exchange visible and equally known. Where the rules allow differentiation, we make the logic behind such distinctions known, trusting that an adaptive system will correct itself. We do not employ attrition warfare – one ideological information system against another – instead we maneuver against the broken logic of the enemy system. In this way, organic leadership does not pursue the wholesale destruction of an opposing nation, religion, economic institution, or political party. This is folly, as the diminishing returns of attrition warfare deplete energy, resources, and public support. Those are people on the other side of our wars, after all, and our monstrosities in the pursuit of victory easily turn public support against us.

Instead, wherever there is differentiation the systems builder ensures there is equal access to knowledge of the logic behind apparent unfairness. We encourage open rebellion and take even the least realistic signal seriously, as this is preferable to letting the system stagnate while an insurgency is festering behind closed doors.

To maintain our persistence, to balance tradition and stability with experimentation and responsiveness, we must above all ensure due process and the faith of the public that due process provides fair treatment in any conflict at each level of the system. It is decentralization of local process enforcement that allows systems to experiment. Equal access to escalation of justice will resolve critical reinterpretations. We do not need, as a systems-builder within an overarching complex adaptive system, to control the rebellion of progressives and early adopters nor the backward quasi-bigotry of instinctually late adopters. We must only ensure the system is healthy enough to re-inform itself based on the outliers, trends, and signals.

Defect Prioritization

Defect Prioritization: Everything you ever wanted to know but were too afraid to admit that you needed to ask.

One of the biggest agile religious debates, and one that seems to get people up in arms, is backlog prioritization when planning has to balance known defects (especially in production) against new feature work.  Let’s dig in and find a sensible approach here.

First of all, realize that it isn’t a fun situation for developers: fixing a defect older than two weeks, even if you wrote the code yourself, is like looking at someone else’s work.  Especially if it is the cause of a problem, it hurts inside to look at it.  You wish you could re-write the whole thing because you’ve grown and learned and you’re a better developer than you were then!  I get it.  Naturally, it sucks that you can’t do that because you’d end up down a refactoring rabbit hole due to all the other code that depends on your code.  So you end up feeling like you’re adding duct tape to a hole in the Hoover Dam.  If you’re fixing a bug in code you’ve never seen before, it’s like your Product Owner told you to fix a hole in the Hoover Dam, but said it in a language you don’t speak, while handing you a box of duct tape and shoving you in the opposite direction of where you need to go.  Support engineers – if they actually like what they do – are a very special breed and should be your best friend.  Get them a Snickers bar and a thank you card sometime.

Second, the “triage” work for identifying the importance of bugs (if that happens at all), in which a manual QA tester writes up a ticket and picks a “Criticality” level, is a joke.  Even if you had an elaborate definition for each level, who cares?  A crash that impacts 70% of users is “critical” why?  The defect is critical to… what?  To whom?  How many people?  How much money?
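One way to replace a gut-feel “Criticality” label is a score built from those very questions – who, how many, how much money.  Here is a minimal sketch in Python; the fields, normalization caps, and weights are all illustrative assumptions, not a standard:

```python
# A minimal sketch of impact-based defect scoring. Every field, cap, and
# weight below is an illustrative assumption -- tune them to your product.

def defect_impact(users_affected_pct, sessions_per_week, revenue_at_risk):
    """Score a defect 0-10 by measurable impact rather than a gut-feel label."""
    # Normalize each dimension to 0..1 against an assumed ceiling, then blend.
    reach = min(users_affected_pct / 100.0, 1.0)
    frequency = min(sessions_per_week / 50.0, 1.0)
    money = min(revenue_at_risk / 10_000.0, 1.0)
    return round(10 * (0.4 * reach + 0.2 * frequency + 0.4 * money), 1)

# The 70%-of-users crash is no longer just "critical" -- it has a number
# tied to reach, frequency, and money, comparable against other defects.
crash = defect_impact(users_affected_pct=70, sessions_per_week=30, revenue_at_risk=8_000)
typo = defect_impact(users_affected_pct=2, sessions_per_week=1, revenue_at_risk=0)
assert crash > typo
```

The point is not the particular formula; it is that any explicit, shared formula beats an unexplained severity dropdown.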

Now, the lazy moral high ground of newly trained agilists is to insist you should’ve never let the bugs out at all.  Six Sigma Quality baby!!!  That’s a nice thought, and a very valuable standard if you are starting a completely new project on the latest and greatest stacks.  You know, an iOS 9+ iPhone 6s or later mobile application from scratch.  Then, I do advise you to build less than you think you should, think harder about whether or not each feature is actually important, and ensure that no defect gets into the App Store. 

That isn’t most software and that isn’t the problem established software companies are grappling with while in the middle of an agile-at-scale transformation.  If you are a Product Owner for 10% of an application older than three years, you definitely have defects and you definitely need a rule of thumb for what to do about them.  It isn’t your fault, but it is your responsibility.  Operating systems evolved underneath you.  Hardware was replaced.  Vendors changed.  SDKs stopped getting updated.  People changed.  Deprecations occurred.  Now you have a list of 1,000 decisions to make.  In that scenario, the moral high ground “you shouldn’t have made any defects!” is lazy and unhelpful.  That’s not the reality and it provides no answer for what to do once you already have defects in production.  There are really four approaches to consider.

1 – All About the Money:

On the one hand, calculating the ROI of every User Story and then attempting to apply the same methods to your production or other leftover defects will require a pretty rigorous approach to finance, accounting, and statistics.  A simple example – if a LinkedIn share crashes every 100th time I cross-post to Twitter, what is the ROI of fixing it?  I’m not a paying customer, nor is Twitter.  Should I just leave the crash and hope people don’t complain too much?  No, I don’t think anyone would suggest that.  That said, there definitely is a statistical algorithm for whether or not that crash is likely to impact my decision to become a Premium Member in the future.  But if the crash takes 11hrs to fix, test, and deploy while the data mining, analysis, etc. takes 60hrs to gain a certainty of 75% – why on earth would anyone not fix the defect and move on?  Eric Ries popularized the saying “Metrics are people too” while Ash Maurya adapts this to say “Metrics are people first.”  That is to say, if you have a crash count of 13, that’s not a very actionable metric.  If you have a percentage of engaged users experiencing a crash in version 1.9.3 – you have an actionable metric, but that metric represents real people who are annoyed in real life about that crash!  Quantitative data needs to drive qualitative insight, not ever-more-complicated quantitative analysis.
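The 11-hours-versus-60-hours arithmetic above can be made explicit.  This back-of-envelope sketch assumes an illustrative blended hourly rate; the point is only that the analysis can cost several times the fix:

```python
# Back-of-envelope check for the "analyze first or just fix it" question.
# The hourly rate is an illustrative assumption; the hours come from the
# example in the text above.

HOURLY_RATE = 100     # assumed blended cost per engineering hour

fix_hours = 11        # fix, test, and deploy the crash
analysis_hours = 60   # data mining to reach ~75% certainty about the ROI

cost_just_fix = fix_hours * HOURLY_RATE
# The analysis doesn't remove the crash; in the likely case you pay for both.
cost_analyze_then_fix = (analysis_hours + fix_hours) * HOURLY_RATE

assert cost_just_fix < cost_analyze_then_fix
print(f"Just fix it: ${cost_just_fix}, analyze first: ${cost_analyze_then_fix}")
```

When the certainty you would buy costs more than the decision it informs, skip the analysis and fix the defect.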

On the other hand, there are important occasions when the money makes a difference.  If you have customers with a Service Level Agreement or paid SaaS users threatening to leave, or your actionable product metrics are moving in a scary direction on account of a defect or the perception of poor performance, the money should be the incentive you need to prioritize fixes over any new feature.  Once a customer is gone they are incredibly unlikely to come back.

2 – Actionable Product Metric: Oops

Oops!  We stopped talking about money and started talking about product metrics!  There is a good reason for that – if you prioritize the development effort that improves Acquisition, Activation, Retention, Referral & Revenue (AARRR!) then you are by default increasing the money.  ROI is not even the money question to solve, is it?  If you have a fixed team contributing to the revenue of a product, ROI variability or Gross Margin variability is what you actually want to track – as long as the cost per month to maintain and improve the product is outpaced by the growth of revenue from paying customers, the ROI is there and the Margins are there and everyone is happy!  The problem is that revenue, ROI, and Gross Margin are extremely lagging indicators of success.  They are a good indication of the stability of the performance of a company over time, which gives investors the confidence they need to keep the money there, but multi-year lagging indicators are very poor metrics for the decision-making of the teams managing, maintaining, and improving the product.  Growth of total users or active users can be a great indication of possible network effects long-term.  Both of these long-term performance indicators are symptoms of competitive advantage.  NOT the cause.

Now we have the cart before the horse, though.  If you have legacy production defects in your backlog and want to move to cohort-based split-testing, your defects are the NOISE in your SIGNAL.  In the example crash above, if you knew cross-posting to Twitter was an important proxy metric for Referral, the possibility of a crash is also the possibility that I don’t try to engage a second time.  If that defect existed before you began using cohort analysis and split-testing, your viral coefficient is already distorted.  So if your agile release train is making an exciting and important stop at the actionable metrics station, make sure to prioritize any defects that could distort the reliability of your pirate metrics and future experimentation.
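To make the distortion concrete, here is a toy calculation (all numbers invented) of how a 1-in-100 crash on the share flow quietly deflates a measured viral coefficient:

```python
# Toy illustration of a defect distorting a pirate metric. All numbers are
# invented. Viral coefficient k = invites sent per user * invite conversion.

invites_per_user = 2.0
conversion_rate = 0.25
true_k = invites_per_user * conversion_rate  # what you'd measure crash-free

# Assume 1 in 100 share attempts crashes and a crashed user never retries:
# the crash silently removes invites before they are ever sent.
crash_rate = 0.01
measured_k = invites_per_user * (1 - crash_rate) * conversion_rate

# Every cohort and every split test inherits this baseline distortion,
# so A/B deltas are measured against a number that is already wrong.
assert measured_k < true_k
```

The gap looks small, but a viral coefficient compounds per generation of users, so even a one-percent distortion skews every downstream experiment built on it.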

3 – Fuzzy-Weighted Economic Value Algorithm: 

A core concept for how the backlog should be prioritized in a lean-agile environment is maximizing the flow of value-add and minimizing waste.  If you are continuously deploying, the scope of feature releases can take a back seat to actually committing to the long-term awesomeness of the product.  Of course, as Ash Maurya says, the product isn’t the product; the sustainable business model is the product – and that’s not a fixed-duration project, that’s a commitment to continuous improvement of your unique value proposition.  So in good lean-agile, at any given Program Increment planning session, you do not need to be certain that your ROI calculations are perfect, or that your developers will be fully allocated, or that your schedule is on track.  You have a fixed release cadence; only scope may vary.  You have a large backlog; selecting the best possible thing to build matters.  As we said above, if your revenue growth outpaces your cost growth, the ROI takes care of itself.

The Weighted Shortest Job First approach is very useful for exactly this.  Because everything else is a lagging indication of good decisions, the Relative Cost of Delay and the Relative Job Size are the most important factors in job sequencing.  Let me reiterate.  Sequence = prioritization.  Under the assumption that small legacy defects require small individual effort while large value-add features take multiple sprints to roll out, you’ll likely always fix your bugs FIRST.  Which is good.  No one likes a crappy product, no matter how much “SOCIAL!!” you add to it.  Burn your customers long enough and they will abandon you.  Every software product is replaceable if you make the pain of use greater than the pain of switching to an alternative.
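As a sketch of that sequencing rule: in SAFe-style WSJF, relative Cost of Delay (user/business value + time criticality + risk reduction/opportunity enablement) is divided by relative job size, and the backlog is sequenced by the resulting score.  The items and scores below are invented for illustration:

```python
# A minimal WSJF (Weighted Shortest Job First) sketch. Cost of Delay is the
# sum of relative value, time criticality, and risk reduction; job size is
# relative effort. All backlog items and scores are invented examples.

def wsjf(value, time_criticality, risk_reduction, job_size):
    cost_of_delay = value + time_criticality + risk_reduction
    return cost_of_delay / job_size

backlog = [
    ("Crash on share to Twitter", wsjf(5, 8, 3, 1)),   # small legacy defect
    ("New SOCIAL feature",        wsjf(8, 3, 1, 13)),  # large feature epic
    ("Legacy login bug",          wsjf(3, 5, 2, 2)),
]

# Higher WSJF sequences first: small, high-cost-of-delay defects win out
# over big features, exactly the "fix your bugs FIRST" effect in the text.
backlog.sort(key=lambda item: item[1], reverse=True)
assert backlog[0][0] == "Crash on share to Twitter"
```

Because job size sits in the denominator, a defect scored one-thirteenth the size of a feature needs only a fraction of its cost of delay to outrank it.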

Over time, two things will happen.  You’ll get to a point where the outstanding defects require large-scale refactoring effort, while your stakeholders (hopefully) get wise to the fact that a smaller improvement with more certain economic value-add is more likely to get prioritized.  At that point, you have flow, and hopefully rational planning and discussion will rule the day on deciding what to do next.  On that note….

4 – Politically-Intelligent Fuzzy-Weighted Economic Value Algorithm: 

If you aren’t entirely lean-agile, aka you are still mid-transformation, aka “the top” still works using their old plan-driven paradigm while somewhere down the line an agile-savvy person tries to smooth the flow of that work, you need to add something for portfolio-level politics that impact your program-level prioritization.  In this case, while you may not share it too publicly, adding more scores for which stakeholder you are pleasing should be considered.  You can then weight each of the relative scores, like proxy-voting for your stakeholders.
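One hedged way to sketch that stakeholder weighting – all names, weights, and the 80/20 blend below are invented for illustration:

```python
# A sketch of adding a (quietly maintained) stakeholder weight to backlog
# scoring. Stakeholders, weights, scores, and the blend are all invented.

STAKEHOLDER_WEIGHT = {"VP Sales": 3.0, "CTO": 2.0, "Support lead": 1.0}

def political_score(economic_score, sponsors):
    """Blend an economic score with a proxy vote for each sponsoring stakeholder."""
    political = sum(STAKEHOLDER_WEIGHT.get(s, 0.5) for s in sponsors)
    # 80% economics, 20% politics -- shift the blend toward economics
    # as trust in the prioritization process grows.
    return 0.8 * economic_score + 0.2 * political

item_a = political_score(16.0, ["Support lead"])    # high-value defect fix
item_b = political_score(0.9, ["VP Sales", "CTO"])  # executive pet feature
assert item_a > item_b  # with this blend, economics still dominates
```

The blend ratio is the concession dial: weight politics heavily early in the transformation, then ratchet it down as stakeholders learn to trust the economic scores.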

Yes, that means the old-school problems will persist, because you are giving the important people at the top some blank-check preferred stock when it comes to your backlog prioritization.  The unfortunate alternative is that you live in silly denial that their perception matters, or that backlog prioritization is as much a political question as an economic value question – and the people “at the top” or “in the business” continue to hate agile and second-guess every decision you make.  Concessions to a powerful VP today help earn you the trust you need in order to move prioritization to a more rational approach later.  Admit where you are, challenge it bit-by-bit, and work to improve it.

What does that look like in practice?  Hopefully there is an IT leader at the top too in your large-scale silo-heavy organization.  Hopefully that IT person or a Quality person up “at the top” can be one of the political variables in your weighting approach.  Don’t pit the CEO against the CTO as a Product Owner, that just makes you look like a chump who can’t make difficult decisions yourself.  Gain buy-in and provide enough visibility before planning sessions that no one gets blindsided by your decision to prioritize refactoring over that VP’s screams for “SOCIAL!” 


Your solution is only partially economic or financial or social.  This is not a democracy.  Don’t go asking for votes.  It really isn’t a democratic republic, either.  And it definitely isn’t just user story meritocracy.  Slapping a relative business value tag on stories and sorting is begging for failure and distrust of your methods.  It looks lazy and it is.  You have to influence the right people to make the right decisions and that takes work.  If you’re lucky, it’s close to a full-time job and you have job security now.  Congratulations.  But seriously, go fix those bugs.  They’re lame.

What Westside Barbell has taught me about Scaling Agile

Agile Portfolio Management:

There is a new way of doing things in delivering a complex product portfolio.  It focuses on delivering value both incrementally and iteratively.  It utilizes empirical process control and hypothesis-driven planning.  It utilizes test-driven development in both convergent and emergent delivery, even when budget and scope are fixed.  It utilizes a Lean kaizen approach to maximize velocity.

This philosophy is, by nature, object-oriented and modular.  No one framework is right for every product, so it is highly customizable.  It may sound new to you, but it has been around for quite a while.  But wait – I’m not talking about Agile, Scrum, or Lean software principles – I’m talking about Westside Barbell’s approach to powerlifting.

Waterfall Weightlifting:

Powerlifting is a sport in which the lifter competes for the highest single-repetition maximum in the Squat, Deadlift, and Bench Press for their weight class.  The traditional approach to training powerlifters relied on linear periodization – a method still very valuable for beginning athletes because each phase builds on the last while progressing toward competition-specific strength.

At a basic level, here is a 12-week competition plan:

3 Week Hypertrophy Phase (muscle size, stamina): Sets of 12 to 15
3 Week Strength Phase (movement form, ability to move weight): Sets of 5 to 7
3 Week Power Phase (Explosive speed, maximum weight at progressively higher volume): Sets of 1 to 3
3 Week Peak & Rest (Highest weight, lowest volume): Sets of 1 to 3, tapering off to a few rest days
Competition: Three chances to get three lifts correct, competing against others who are doing the same

As agilists, this correlates perfectly with the “waterfall” approach we try to leave behind:

Hypertrophy phase: Business planning, creative design, and thorough documentation
Strength phase: Database layer, middle-tier
Power phase: Client-side logic, front end development
Peaking phase: Testing, beta release, focus group and stakeholder reviews
Rest days: Code freeze and marketing
Competition: Release to the market, in which you may not recover from failure

Then the lifter starts over.  If there was a big loss (e.g. an injury) pre-competition, the weight lifter might not compete at all – just like a software project that gets cancelled after key engineers leave or technical debt gets too high to meet the release date.  More problematically, if there is a big loss or injury at the competition, the lifter may never compete again – just like the software team with a botched release that gets “reassigned” or laid off.

Repeating the Cycle:

The weightlifter who perseveres, win or lose, still has big “waterfall” problems.  The lifter rests a little and repeats the linear progression cycle, an exercise in bodily context-switching.  When the next hypertrophy phase starts post-competition, most of what was developed in the previous cycle is gone!  The same is true of each phase.  When the lifter resumes focus on a 3-rep max, some hypertrophy and stamina is lost.  As the lifter peaks for competition, the 1-rep max may increase but the 5-7 rep range decreases.  Studies show that after a few weeks in the subsequent hypertrophy phase, up to 15% of single-repetition strength is lost.  The disconnect between phases means foundational work (increasing stamina and size) sacrifices a considerable amount of the value previously captured (the ability to perform the same single-rep max).

What does this specificity-switching cost the lifter?  As a beginner, not very much – any work will improve size, conditioning, and maximal strength, and fantastic progress can occur.  The discipline of repeating the movement pattern likewise increases maximal strength even with little planning.  However, once the lifter progresses from a beginning athlete – a time when nearly anything will improve the lifts – to an intermediate athlete, subsequent peaking phases will see little or no increase.

The process requires disruption if total stagnation is to be avoided.

If this sounds like delivering software in waterfall, it is!  As you read this quote from a strength coach describing the “waterfall” lifting approach, think about the Waterfall PMO:

Having now gotten away from this type of training and looking back as an outsider, I can see where the program is lacking and why I had so many problems. I used to feel it was the only way to train (mostly because it was all I ever knew). It was also the only type of program for which I could find a lot of research. Some of the limitations to this linear style of periodization include:

  • It’s a percentage-based program
  • It starts with a high volume
  • It only has one peak
  • Your abilities aren’t maintained
  • The program has no direction to the future

– Dave Tate

Here are the parallel problems we see with waterfall:

  • “It’s a percentage-based program” – accounting-based statistical process controls are applied to an emergent system
  • “It starts with a high volume” – a significant portion of the budget is spent planning, designing, and fighting about features that no user wants (and if the project is cancelled, 100% of this sunk cost never drives user- or owner- value capture)
  • “It only has one peak” – A major release attempts to market itself to all segments simultaneously and a flop may kill the product line completely
  • “Your abilities aren’t maintained” – once the waterfall project plan is set in motion, market evaluation, user feedback, and stakeholder review are non-existent
  • “The program has no direction to the future” – a waterfall project plan is delivered based on the knowledge available at the beginning of the project, when the least is known, and has no intrinsic method of exploring the future relationship between the market that might exist and the software that could be produced.

Westside Barbell’s “Conjugate Method”

The Conjugate Method attempts to balance all phases across preparation for competition. At the “enterprise level” three movement patterns are continuously tested as the measure of the process. At the “business level” a new variation of a similar movement may become the focus for 3 to 5 weeks (e.g. training rack pulls instead of full deadlifts when “lock out”, the upper portion of the movement, is the weak link). At the “team level” (the lifter + coach), the two-week sprint has a consistent set of ceremonies and artifacts (workout plan, workout log, the workout, etc).

Here is an example:

Week 1
Monday – Max effort lower body day (squat + low back + hamstrings), focuses on strength and power
Wednesday – Max effort upper body (bench press), focuses on strength and power
Friday – Dynamic effort lower body (squat, deadlift), focuses on speed and hypertrophy
Sunday – Dynamic effort upper body (bench press), focuses on speed and hypertrophy
Week 2
Monday – Max effort lower body day (deadlift + low back + hamstrings), focuses on strength and power
Wednesday – Max effort upper body (bench press), focuses on strength and power
Friday – Dynamic effort lower body (squat, deadlift), focuses on speed and hypertrophy
Sunday – Dynamic effort upper body (bench press), focuses on speed and hypertrophy

This correlates nicely with “core” Scrum concepts:

  1. Maximal strength is tested every week – working software every sprint
  2. The metric (1-rep max / story points delivered) is improved over time (strength / velocity) through hypotheses and experiments (empirical process control)
  3. The entire body is trained for size, stamina, strength, and power every week – vertical slicing and user stories
  4. The lifter gets to experiment with new exercises without fear of wrecking a 12-week cycle – sprint retrospective, sprint planning
  5. The coach focuses exercise planning on addressing weak points – a ScrumMaster, removing impediments
  6. The Power Lifting competition is not a unique event with a long lead time – working software every sprint, TDD, XP, continuous integration and release

Now the lifter, like our Scrum team, gets to plan, experiment, and deliver often.  The overall roadmap (Lean + Scrum) might have a basic end-game or vision (increasing the 1-rep competition max performed on 3 lifts the same day is equivalent to convergent product delivery), but planning only looks forward up to 5 weeks, with commitment at 1 to 2 weeks.  Likewise, the lifter and coach are always looking at the most recent data and the newest lessons learned, and quickly react to whether a behavior, practice, or process should be continued or not – just like the Product Owner, ScrumMaster, and Team are always planning and executing based on the most recent market and team data.

Applications to the SDLC:

Now we can extend the metaphor and draw conclusions.  The powerlifter’s body equates to a complex large-scale digital portfolio.  The lifter needs to increase value across three programs that focus on convergent product delivery while also developing several programs that utilize emergent product delivery.  In waterfall these two program methods are separated by functional division and project lifecycle; in conjugate (Scrum) these two are handled in tandem.

For the powerlifter, the three convergent products are squat, deadlift, and bench press.  Quality must stay constant or the increase in value does not qualify.  The same is true in software products – adding a high-value feature while allowing a 50% increase in crashes on launch is absolutely unacceptable.  Your users will disqualify you!  Whether you have a three-application enterprise CRM program or a three-iOS-app consumer program (see LinkedIn or Facebook as examples), adding an exciting feature to an app that causes mass user drop-out is a risk no business can tolerate in today’s market.  The competition is too fierce, the barrier to entry too low; someone will blow you away.

At the same time, the powerlifter needs to maintain several emergent delivery programs, some for function (increasing grip strength), some for fun (increasing bicep size).  Ongoing workout plans, building size, stamina, and maintaining joint health, addressing weak points by focusing on a new accessory exercise for 5 weeks – all of these priorities must be balanced and evolved.  Keeping a workout log is the only way to be sure that exercise volume, intensity, and density are increasing.  The relationship between the convergent product value and the emergent product investment is the only metric rationally applicable.  The same is true in software delivery.  Emergent-delivery programs like R&D, marketing, UX, product planning are all critical to the health and success of the portfolio as a whole – but the end goal must be clear.

  • Over-planning and under-delivering is not acceptable.
  • Over-researching and under-user-pleasing is not acceptable.
  • Over-designing and under-testing is not acceptable.
  • Over-marketing and under-releasing is not acceptable.


The Conjugate Method as an analogy for Agile, Scrum, XP, and Lean at scale works for me because I love lifting.  I realize it may not be right for you, especially if neither agile nor weightlifting is familiar territory.  So, like everything, find how this applies to your life so that you can find inspiration in the ordinary – then start a conversation about it.  I’m happy to discuss anytime:  224.223.5248

Enterprise Mobility – You and your team need more green time!

As a consultant and delivery owner for custom mobile application development, I find that talking with people outside the industry about the various ways they think about “enterprise mobility” always makes for a great conversation. There are three basic ways the terminology is used:

  1. Mobile Device Management:  Device and data security management in enterprise environments, whether COPE (corporate-owned, personally enabled) or BYOD (bring your own device)
  2. Consumer Marketplace Apps:  Consumer-facing solutions custom-built by an independent contractor (I would typically point out this is not what I consider enterprise mobility)
  3. Internal Enterprise Apps:  Proprietary solutions used exclusively by a single enterprise, in which operational effectiveness is gained through on-the-go, on-demand employee activity supported by one or more mobile applications

Let’s talk about #3.  I love enterprise mobility and being a member of the mobile workforce.  I love to build and manage employee solutions for enterprises.  Every day is a good day when I am consulting, planning, and delivering on-the-go, on-demand, notification-driven enterprise user experiences – that is my specialty.  Part of how I stay “in touch” with the users these solutions serve is by finding and using the marketplace and proprietary apps they use, getting out of the office and off wifi, making every effort to be an ultra-mobile worker – and I love doing it!

Worker Mobility

Talking Stick Resort – Scottsdale, AZ

Why do I love being an on-the-go, on-demand employee?  Less screen time, more green time.  As valuable as note-taking can be when it truly captures the full context of what is discussed, I find conference calls and meetings where I have a computer in front of me are consistently sub-par.  So when I have several calls in a row (with no screen sharing), I plan to spend that time at a park or forest preserve.  I love walking meetings – not the indoor kind on treadmills (though I support that option too) – so when I need a one-on-one “meeting of the minds” with a mentor or mentee, we take it outside and walk around.  I go to the gym at lunch most days to give my eyes a break and get my body awake.  After all, I can chat, text, and email as easily by mobile between sets as I can between bites at a desk.  When there is a tough problem to solve, a couple beers and a whiteboard can help a small group hash out a solution better than a chat group.

We are part of an increasingly mobile workforce.  Take advantage of that freedom, encourage it with your peers, empower your employees.  You will see immediate benefits!

Walking Meeting

Illinois Forest Preserve – Chicago, IL

Connect With Me:

LinkedIn       Twitter