Our Time in the Desert

Spending “our time in the desert” carries a long-running history in Western religious and philosophical literature. The desert offers the Observer clarity of analysis by escaping the subjectivity of densely populated areas. Whether prophet or philologist, escaping the world of privileged life to find an alien world without our feelings, fears, and troubles has long been a clarifying moment. However, as we will find, even the “desert of the real” no longer holds the same significance. We have experienced too many living deserts, too many virtualizations of false lifelessness, smiling at us and walking around out of habit.

In Western philosophy, the desert represents a partial answer to what the world might be when it is absent of life. Many of the problems of philosophy emerge out of linguistic or stylistic flaws, existential particularized instances that thought transforms into generalization prematurely, or abstractions that take on a “life of their own” and run amok in the civilized mind. The spectacle of human society is too full of symbols and signs, leaving the philosopher in search of “bare life” in the wilderness, to at last secure a hold on the sublime. There is immediately a textual question, were one to note it: why the desert? Nietzsche, echoing his own retreats into the Swiss mountains, has Zarathustra retire to the mountains. Henry David Thoreau escapes to Walden Pond, painting a scene of a small cabin among American pines, praising self-sufficiency. In similar fashion, we may try our own hermitage to mountains or forests to escape the confused misrepresentations of society and fashion. The desert, in contrast, represents an alien reality, one that does not welcome us or praise us, a physicality that humbles the consciousness that believes reality manifests for life.

In the process of enduring the desert, we see an escape from the noise, light, and concerns of Others. Yet this escape requires there be something to escape into as well. The desert holds the appeal of an absence of signs, representation, and symbolic exchange. The comforts of the mountain or the forest still let us believe we can make a home, then construct a metaphysics that justifies our selfish human privilege. The alien forms of the desert, self-sufficient without the presence of human mechanization and machination, reveal the Observer’s alienation. The unintended consequences of society become clear in the desert of the real. The alien landscape of human lifelessness reveals the alienation of human society. Then we see that enclosure within the social machine encroaches upon individual moral systems of valuation and signification.

For this, we must strip representation down to bare life, then even forget life itself. At the extremes, the cosmos is either a lush paradise, a phenomenon created by and for the human mind, or it is an enormous desert, a system of objects that entraps us, an enormous machine in which matter is more real than our lives ever might be.

The inescapable social machine creates the need to distance thought from its comfortable privilege, opening the individual value system to the experiments of alien reality. The long-running contemplation of inhuman reality as a desert represents a stance on metaphysics. The weight of our decisions in the desert is the moral responsibility of bare life; every metaphysics carries extreme implications for moral systems.

Plato told us that there is a perfect and sublime realm of pure forms, triangles, circles, concepts, and virtues, all complete and wholesome in the full light of the sun. Meanwhile, human existence is a sad misrepresentation of the true reality, like shadows cast on the wall of a cave, created by puppets and trifles before a flickering fire. The allegory of the cave inspires a long lineage of mathematicians, astronomers, and rationalists, all trying to wake up from the dream of this world so that they may see the true world in all its sublime glory. This effort to deny the significance of bodily life makes its way to the Rationalists, like Descartes and Spinoza, who insisted that a perfect reality lay outside the material reach of humanity, except through total conceptualization and pure reason.

Aristotle takes a more encyclopedic approach (an apt description of the method by William James). Describing the attributes of human experience, cataloging the ideas found in agreement, and attempting to summarize the most probable and consistent explanation for the full sum of human belief, Aristotle established the framework for the division of our major sciences. The lineage of Aristotle, ending with the British Empiricists, insists that the material perception of humanity is the only reality upon which we can base our judgments. Anything abstract is either self-evident, the result of a system of abstract machines like 1+1=2, or a generalization of experience, a hypothesis that must undergo continuous experimentation for validity.

Insisting exclusively on a priori grounds, Descartes builds out a moral system based on the perfection of axiomatization, aspiring to find God-given precepts as pure as mathematics. Descartes wants an ontologically self-evident deity, with a moral code as self-contained – in the absence of any believer – as Euclidean geometry. Insisting exclusively on a posteriori grounds, Hume argues that human nature and justice must arise from probability, experiments, and patterns.

As good literary critics, we must look to the context of these arguments and read between the lines. The foundations of metaphysics and physics, with their implications for ontology and epistemology, were the formal concerns of their arguments. Between the lines, the first modern philosophers were finding that the “pagans” of Rome and Greece were not so different from Europeans, and that the divine right of kings ought not trump the sovereignty of individuals. The rationalist denial of the validity of human life, joined to the Christian attitude toward worldly pain and desire, had resulted – whatever the intended consequences – in abuses of despotism, outlandish inequality, the disposability of slaves and peasants, and a long series of wars, killing and torturing lives in the name of the Kingdom of Heaven.

Hume’s skepticism laid the groundwork for a methodical naturalism with terrible implications for personal beliefs about the burden of moral responsibility humanity bears. By what means do we justify enslavement, castration, starvation, domestication, or carnism? There are no grounds for any of these injustices without a social machine producing them. Empirical logic dictated that the ontological argument for a deity only gave the cosmos itself the name of God. All the injustices of human life, and many abuses against nature, originate in human prejudices, perpetuated by justifications provided by organized religion.

Hume awoke Kant from his “dogmatic slumber” and likewise startled into action all Western philosophy that followed. Hume stated, “All knowledge degenerates into probability.” Indeed, centuries of improvement in stochastic econometrics prove above all that the average human keeps economics and statistics as far away from their domesticated habits as they can. The probability of two united representations of the senses provides us with increasing certainty, but the generalization of correlation into causality can only be an optimism bias imposed by the mind itself. Necessity, power, force, and causal agency are thus projections of the mind superimposed on the consistent union of representations in the constant conjunction. Just as heat, color, weight, sound, taste, and smell gain signification relative to the context of the Observer, Hume closes the book on generalization from probability into certainty. There is no cause and effect, no causality or causal agency at all, only a probability we forecast and trust based on consistency of experience; “Anything may produce anything,” and by implication, any king, master, government, or religion who tells you otherwise is deluding you for the purposes of undeserved access to resources, labor, and moral hypocrisy.

Kant takes the extremes of the two approaches and attempts a “Copernican Revolution” by embracing both sides wholesale. Kant argues that the mind produces causality, not as a forecasted probability, but as a category of the mind itself. The representations of the senses, cause and effect, are all produced by the mind, as are space and time, but the mechanical determinism we see outside the mind tells us nothing about the freedom of the will “inside” the mind. The machine may look predetermined and predictable from the outside, reactive within a chain of causes and effects, but the ghost within this shell is free and moral. While causality is consistent beyond a reasonable doubt, the feeling of freedom of the will and moral valuation is likewise consistent beyond a reasonable doubt. Thus, he argues, it must be the mind itself that adds everything other than freedom of will and pure reason to our representations of space, time, appearance, and causality. This lensing applies to the perception of other rational agents, and any of our interactions among intelligent beings, so their determinism and our freedom cannot contradict one another.

Based on this approach to bridging the gap between free will and determinism, Kant builds causal agency upon the synthesis of internally true freedom and externally apparent determinism. Without insisting on the rationalist freedom necessary for moral choices or on the naturalist determinism necessary for moral consequences, Kant breaks the world in two. On one side of life we find the phenomena that the mind generates; on the other, the mind builds these upon the noumenon of metaphysics, the thing-in-itself about which we can reach no conclusions. This separation is essential to the moral agency we take for granted anyway, because in a purely deterministic world we would have no ability to make choices, and therefore bear no burden of responsibility; while in a purely free world we would have no control over the outcome of our choices, and therefore bear no burden of responsibility. Within the axiomatics of Western philosophy, it is only if we are both free to make choices and the world contains enough determinism to link our choices to consequences that we bear any moral responsibility for actions.

Kant short-circuits the arguments for either extreme by separating human reality from actual reality. This allows for the belief that each choice is its own causa prima without undermining our responsibility for the consequences in deterministic perception. However, this separation, and the postulated noumenon as a thing-in-itself devoid of human perceptions, built a wall between humanity and the metaphysical realm. The intended consequence of this mechanization lay in finding a logically necessary system of morals. The unintended consequence of this machination is precisely where philosophy finds its desert: a world of noumena in which the mind refuses to live.

While Kant placed a wall in the individual mind, separating the senses and intellect from the metaphysical reality of the thing-in-itself, Hegel takes this license into senseless material abstraction, on the premise that any narrow view of the material whole may, through its self-reflection, arrive at a complete understanding of the whole.

Schopenhauer criticizes the entirety of Kant’s approach, saying that it is recycled Platonism. Ironically, it was only Kant’s popularity that drew so much attention to Hume’s methodological naturalist skepticism. Schopenhauer surveyed the full history available from multiple cultures for the first time since the fall of Rome, finding new insights in Buddhism, Hinduism, Confucianism, and Taoism. In practice, Kant’s method was too convenient for the morality that submits to the prevailing ideology. If the creation of phenomena occurs in the mind of every self-conscious rational observer, and moral imperatives only apply to self-conscious intelligence, then Kant’s prioritization of human valuation over the will expressed in all forms-of-life violated the principle of sufficient reason. Instead, Schopenhauer argued, our physical experience itself alienates us: the world of representation separates itself from the metaphysical will as a lonely expression of selfish altruism among the collective desire for consciousness.

The will was Schopenhauer’s thing-in-itself, and the will-to-live extended far beyond humans or civilization. In the world of will and representation, we experience a thorough determinism of signs, and even the choices we believe we make are representative interpretations of the movement of the one will, the generator of the forces driving all representational things. Finally, we arrive at the desert of Western philosophy. Stripping away the layers of representation, removing the system of values, in both concept and precept, and anything specific to the strategic goals of the human species, he lands upon the will by wandering into the desert, realizing the will cannot stop willing. Simply, being cannot stop becoming, even throughout infinite revolutions and recurrence:

But let us suppose such a scene, stripped also of vegetation, and showing only naked rocks; then from the entire absence of that organic life which is necessary for existence, the will at once becomes uneasy, the desert assumes a terrible aspect, our mood becomes more tragic; the elevation to the sphere of pure knowing takes place with a more decided tearing of ourselves away from the interests of the will; and because we persist in continuing in the state of pure knowing, the sense of the sublime distinctly appears.

– Schopenhauer, The World as Will and Idea, Vol. 1

The inescapable desert of pure knowing led him to immense pessimism, and he believed even the honesty of systems like Stoicism and Buddhism was insufficient for this desert. At one point he articulates this as a conversation between two friends, one wishing to be certain of the eternity of the soul, the other explaining the foolishness of wanting such assurance. In the end, the two call each other childish and part ways with no resolution; this may have been the underlying insight of all his philosophy, that all representation is childish non-sense. The will-to-live expressed in any one life was helplessly biased, and only self-conscious intelligent humanity was fully aware of the terrible burden of moral responsibility implicit in the recurrence.

Suppose someone who accepts the groundwork of the pessimistic view reacts in the negative, treating its conclusions with any level of anger, indignation, or indolence: where might such a warrior take his passion? For this we find Friedrich Nietzsche, ready to reject the asceticism of any collective religion. He paves the way for a new method of nihilist existentialism that requires individualist positivism. While religious systems had long founded their origins on the ideas of prophets spending their time in the desert, seeking the truth-in-itself, Nietzsche rejected the notion that anyone may meaningfully appropriate these insights from another.

Going even further than Feuerbach or Schopenhauer, Nietzsche deploys his powers of literary criticism to show how the organization of religions around the insights of prophets provides us with the opposite of the guidance exemplified by their embrace of the desert. We ought to echo these prophets as free spirits, creating our own systems of values, not blindly follow the dogma of institutionalized complacency. Within the mechanization of an ideological, dogmatic, axiomatized belief system, built in the shadow of these warrior-philosophers, we find the machination of the priests and clerics who, too weak to spend their own time in the desert, prevent all others from doing so as well.

The only answer for Nietzsche is to run into the desert, like a camel that has escaped with its burden, shrug it off, become a lion, and battle the enormous dragon “Thou Shalt” so that one may become a child, making new games and values:

“In that the NEW psychologist is about to put an end to the superstitions which have hitherto flourished with almost tropical luxuriance around the idea of the soul, he is really, as it were, thrusting himself into a new desert and a new distrust […] he finds that precisely thereby he is also condemned to INVENT—and, who knows? perhaps to DISCOVER the new.”

– Nietzsche, Beyond Good & Evil

Nietzsche sets the tone for the personal responsibility to become our own prophet in the desert, a warrior-philosopher far removed from the falsehoods of entrapment in the social machine. Albert Camus, who fought as a rebel during the Nazi occupation of France in WWII, took this moral responsibility as the essential meaning of human existence.

In the face of immense human suffering and depravity, surrounded by casualties of war and hopelessness actualized through countless suicides, Camus likewise found a desert in which we must fight for meaning and purpose. He called this desert the “absurd” – the self-conscious, speculative reality we experience, which is neither material objects nor pure representation of mind. Representation distances us from the simple possibility that consciousness can distrust itself for some strategic reason, or that humanity repeatedly utilizes abstractions to justify murder. Therefore, we must revolt against the absurd and continuously fight for meaning.

It is here that the full history of philosophers rejecting naïve realism, with comprehensive skepticism that we may ever attain objectivity, finally reaches its absurd conclusion from the phenomenologists, that nothing is certain, “evoking after many others those waterless deserts where thought reaches its confines. After many others, yes indeed, but how eager they were to get out of them!” The desert of the real is the end of the power of thought, a limitation few philosophers were willing to accept.

This inability to find justification in knowledge of reality forces the burden of responsibility for our actions onto our own shoulders. Thought will not attain certainty of material determinism or spiritual unity. We can only look to other humans for the depravity of the absurd. The mechanization of institutionalized values, which machinates unintended consequences, should not win our complacent acceptance.

“At that last crossroad where thought hesitates, many men have arrived and even some of the humblest. They then abdicated what was most precious to them, their life. Others, princes of the mind, abdicated likewise, but they initiated the suicide of their thought in its purest revolt. The real effort is to stay there […] to examine closely the odd vegetation of those distant regions. Tenacity and acumen are privileged spectators of this inhuman show in which absurdity, hope, and death carry on their dialogue. The mind can then analyze the figures of that elementary yet subtle dance before illustrating them and reliving them itself.”

– Albert Camus, The Myth of Sisyphus

When we reach this realization, that nothing human can be certain, that nothing behind or under perception justifies our life, pleasure, suffering, or death; this is where all the interesting and dramatic intricacies of systems of living representations occur.

The absurd is a desert of the mind, the distance or distortion that lies between what the material cosmos might be without consciousness and what it becomes through representation in consciousness and signification by intelligence. The absurd is everything that painfully fails to make sense, such that we reject the validity of our senses, or even put an end to sensory experience. The revolt against this denial and delusion described by Camus, as well as the reality of our moral systems within the social machine, reflects the prophetic independence of Nietzsche’s warrior-philosopher.

Camus concludes that if the absurd is the quintessential defining attribute of human life, he must maintain the discipline of methodological naturalism in his authentic appraisal of the system: “I must sacrifice everything to these certainties and I must see them squarely to be able to maintain them. Above all, I must adapt my behavior to them and pursue them in all their consequences” (Ibid).

He likewise takes stock of the problem of the re-valuation of all values and the cowardice that resists it. While Nietzsche treats this fear with disgust, Camus treats it with empathy. The desert of the real, the fact that we and all those we love will die, that the world will forget us and everything we ever hoped or desired; to fear the reality of this supposition is only natural:

“But I want to know beforehand if thought can live in those deserts. I already know that thought has at least entered those deserts. There it found its bread. There it realized that it had previously been feeding on phantoms. It justified some of the most urgent themes of human reflection.” Ibid.

For Camus, there is no doubt of how difficult and terrifying it may be to reconsider everything once held valuable, meaningful, and true. An individual re-valuation of all values must proceed when we finally strip away the mechanization and machination that filter our reality. Our time in the desert reveals the alienation and denial the machine has brought us: that we are party to it, and that it prevents us from prioritizing with any lucidity or acumen.

Bertrand Russell summarizes the long-running battle for objectivity similarly in The Problems of Philosophy, and the alienation it represents, saying, “If we cannot be sure of the independent existence of objects, we shall be left alone in a desert — it may be that the whole outer world is nothing but a dream, and that we alone exist.”

Unfortunately, we have a new problem today. The same mechanization of general intellect implicit in capitalism is a machination that undermines virtuosity and moral responsibility. The interlinked supercomputers in our pockets free us to access more information than ever, but too much information too fast leaves us unable to find any significance in it. This is the decisive step in the process of alienation humanity has pursued through the successive objects placed between us: tools, weapons, religion, governments, enclosure, property, currency, contracts, machinery, corporations, computers, the spectacle. In the “war of all against all” described by Hobbes, the social machine can finally reduce our natural state of civil war to isolated individuals, so long as each carries their own chains of self-enslavement in their pocket.

We no longer find enclosure in the social machine’s mechanization of labor; we enclose the machination of alienation within our personal machine. The spectacle and virtualization prevent us from reaching any desert of thought and any authentic life. In Simulacra & Simulation, Jean Baudrillard calls this problem hyperreality: “Abstraction today is no longer that of the map, the double, the mirror, or the concept. Simulation is no longer that of a territory, a referential being or a substance. It is the generation by models of a real without origin or reality: a hyperreal.” When social engineering precedes our understanding of rational normative valuation, when the full globalization of economic Oedipalization leaves us with no unaltered experience, we are only able to recognize patterns that Others created ahead of time for us to recognize.

Hyperreality is the universally unauthenticated life. It represents a loss of significance by managing all mystery ahead of time. We do not experience any event authentically because genuine physical experience is no longer the anchor; a virtual experience anchors us ahead of time. If we go camping, virtualization has already anchored us to what camping is and who campers are through movies, commercials, and social media. To be certain, this is not a new and unexpected result of technology; it is the very essence of technology. Where we once spent time in the desert to escape the representations of the social machine, now we recognize its total inescapability.

Philosophers once inspected the distinction between the world of the mind and the world the mind perceives, some claiming everything was virtual, others claiming everything was machines. Repeatedly, some dualism became established, so that even though machines developed and enclosed our virtualizations, we could feel confident of escaping them. Today our understanding of either loses its innocence, precisely because we finally know how to engineer the patterns. It is no longer a few power-hungry men and the herd instinct of the masses that develop the unintended consequences of our morality; we can no longer claim ignorance or escape. Today we are all party to the data, the algorithms are intentional, and intelligent people fight to manage or mismanage the collateral damage.

“The territory no longer precedes the map, nor survives it. Henceforth, it is the map that precedes the territory — precession of simulacra — it is the map that engenders the territory […] It is the real, and not the map, whose vestiges subsist here and there, in the deserts which are no longer those of the Empire, but our own. The desert of the real itself.”

– Baudrillard, Simulacra & Simulation

Just as the chains of hyperreality prevent us from knowing the distinction between the real and the virtual, between our mechanization and our machination, the desert of the real is no longer a problem between us and material physicality, nor between us and the social machine. Now the absurd reality is within us. As we trace this lineage of the desert, we come full circle to the machines and automata from which self-consciousness attempted to distance us. The remainder of our philosophy will face the ethical and political dilemma in which we awake, to understand the moral weight of decisions, even if we pursue these in a dream within a dream, even if our awakening is only to another dream. We must establish what moral values ought to carry significance regardless of how deep in Plato’s cave we might be. Any mechanization that prevents this personal responsibility to life and existence is a machination.

Regardless of its original evolution, the intended consequence of formalization in written language was to bring humanity together. Abstraction became a powerful tool, trading on the currency of truth-values. Generalization allowed anchored, consistent existential instances to become probable patterns that we could exchange and test against reality. Once language became typography, the rules of grammar formalized and analyzed, and the lexicon of significations networked into a matrix of signs, we realized the tool meant to bring us together resulted in our separation. The signs of language are simulacra, words that have definitions prior to our experience of an object. Taken together, full literacy creates a simulation of the world that we project upon it, distorting its significance. The signs of images in media do the same, so that instead of recognizing an object as a particularized word, we have experienced the name, the image, and the normative reactions of others in advance. Finally, we take all these simulations and place them on our own body, first in the pocket, then as wearables, with a goal of achieving further integration. Virtualization consumes us prior to any experience of reality.

Our time in the desert of the real means that we cannot look to a higher or lower plane of existence, or base our morality on the significance of rules outside ourselves. Now there are no rules outside us, only the axiomatization of our simulations, rules which we either manage or mismanage. For Schopenhauer, the desert was our capacity to resist the will and engage in pure simulation. For Nietzsche, the desert was the struggle to create new systems of significance and new patterns of understanding. For Camus, the desert was the absurd distance that alienates us from objectivity. In Baudrillard, we finally face our desert of the real, that the loss of any objectivity leaves everyone equally speculative, in a simulation we create and cannot escape. We are party to all the unintended consequences of the system and must build a better machine.

Rhizomatic Unconscious

Rhizomes lie behind our selfish, despotic, machinic, consistent, conscious analysis; we should explore what good such an idea does for us in practice. If we are hard agnostics of metaphysics, we must assess what we gain if we assume the abstract potential presence of other alien observers, applying logic and connections within. Even methodical naturalism gains creativity if we add, to our stubborn certainty of objective focus, a suspicion of what may loom outside our frame of reference.

Vitalism interprets the individual person according to the continuous irreducibility of Machinic Agency that bears a name. Each vitality plays on the stage, costumed as member of a socioeconomic ecopolitical system, masked observations of this homogenous collection of woman-particles and man-particles. From a distance, as a population, how uniform it all appears in abstraction, how easy for the simplistic to reduce billions of particularized lives into no more than two engendered masks!

Conceptual abstraction could be left to the morons and bigots, were it not for their tendency to backpropagate bad conclusions as causa prima. These power-law dynamic vectors of identification appear, under observation, to follow their causal becoming under unwavering mechanical determination. Becoming-woman, becoming-man, reproduction, death. So also the Spectacle thrives on the Circus of Values when simpletons debate their palettes of predeterminism: gender, race, orientation, class, sanity… often in that order, according to mass media.

Stochastic analysis provides pragmatic predictions in terms of probability densities, answering only where one ought to look; one is already certain the outcome is possible. The opposite, to treat an emergent standard normal distribution as a caste system, has been the justification of every cruelty imagined by collections of political economy.
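The contrast can be made concrete in a few lines. A minimal sketch, assuming a standard normal population; the helper `normal_pdf` is our own illustrative name, not anything from the text:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of a normal distribution at x: where one ought to look."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

# The density ranks regions by probability, not individuals by worth:
# mass concentrates near the mean, yet no value is ever ruled out.
print(f"at the mean:     {normal_pdf(0.0):.4f}")
print(f"one sigma out:   {normal_pdf(1.0):.4f}")
print(f"three sigma out: {normal_pdf(3.0):.4f}")

# Even a five-sigma outlier retains nonzero density: the forecast says
# "look elsewhere first," never "this life is impossible."
assert normal_pdf(5.0) > 0.0
```

Reading the density as a map of where to look, rather than as a ladder of rank, is exactly the distinction the paragraph draws.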

Appearance as particle is deceptive when we cease observation of the totality of the population or experience it from the inside – the experience that is most intimate to us! Only then do we find that free will experiences itself as a continuum of power, of many forces in dynamic relation and opposition. Causal Agency is an uncollapsed wave of indeterminate probabilities. Observation collapses these, with an accompanying sentiment of mental empowerment, as teleonomic leadership of the body, conducting the dynamics of thought, memory, emotion, and drive like an orchestra. If we rush the orchestra, the music becomes disjointed; if we stop conducting, many well-practiced melodies might be played without additional effort.

Applying logic to an entire system of truth-ideas is an effort in projecting consistency and unity onto our understanding. First, we must forecast many particularized hypotheses and assert their abstraction as a universal value. Next, and most fundamental to the entire history of philosophy, we force upon the systems of abstract signs a single axiomatic of all logic, the law of non-contradiction. The law of non-contradiction, first espoused by Aristotle, states that a proposition and its negation cannot be simultaneously true. If I say, “The cat is black; the cat is not-black,” the good logician immediately asks whether I am being poetic, lack logical intelligence, or need to provide more details. For instance, “The cat seemed perfectly and consistently black from afar, but now that it is in my arms I see white and grey hairs spread about sporadically. Thus, even a black cat may be imperfectly black-haired.” In every philosophical debate, sifting through the technical and formal meanings of statements and applying the law of non-contradiction accounts for most of the legwork.
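The cat example can be sketched in code. A minimal illustration, using two hypothetical predicates of our own invention, `seems_black` and `perfectly_black`; the point is that the apparent contradiction dissolves once the single word “black” is revealed to conceal two distinct propositions:

```python
# Hypothetical model of the cat: mostly black hairs, a few white and grey.
cat_hairs = ["black"] * 97 + ["white", "grey", "white"]

def seems_black(hairs):
    """Coarse predicate: the cat as seen from afar (majority of hairs)."""
    return hairs.count("black") / len(hairs) > 0.9

def perfectly_black(hairs):
    """Fine predicate: every single hair is black."""
    return all(h == "black" for h in hairs)

p = seems_black(cat_hairs)
q = perfectly_black(cat_hairs)

# The law of non-contradiction holds for each proposition taken singly...
assert not (p and not p)
assert not (q and not q)

# ...while "black" and "not-black" only clashed because one word
# stood for two different predicates at two different grains.
print(p, q)
```

Most of the logician’s legwork, as the paragraph says, is this act of splitting one ambiguous statement into the distinct propositions it conceals.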

This law of non-contradiction, however, is precisely the a priori argument we must now question. The Uncertainty Principle provides a complex function that may at last span the wave-like properties of the rhizomes. This is the purpose of Quantum Liberty, to find Machinic Agency in the rhizomes. The remainder of our exploration applies quantum physics as an improved tool where we once applied the emergent power-law of non-contradiction.

Logic, capitalism, paternalism: these all thrive on forced non-contradiction. Deleuze & Guattari went to great lengths exposing that, while many philosophers and scientists take care to apply the law of non-contradiction with as little prejudice as they can manage, society has no patience for unanswered questions, doubt, minority values, or “deviant” opinions. We will thus take up a more strategic approach built upon their work exposing the rhizomes, admitting two flaws in the system in an effort of critical leadership. First, as shown by Stiglitz and others in behavioral economics, being watched, money, contracts, and cues of social role can shift individuals toward rational self-interest, logical positivism, and objectification. Second, the system as a whole acts upon truncated data, leaving without record any content that cannot be expressed according to currency, typography, mathematics, and the law of non-contradiction.

The exceptions to the rules, the number of unexplained complications and complexities, do begin to pile up! No wonder so many knowledge workers prefer the safety of specialization, hoping that enough trees of knowledge, branching selfishly, will somehow force the environment into a healthy ecological system. Equally true of forests and our own minds, pure arborescence as a categorical imperative leaves the health of the system unmanaged, certain to degrade and collapse.

Our physicists look beneath the superficial flux of perception only to find a socioeconomic and ecopolitical system of molecules made of atoms. Particles seem to follow rules. Then we look deeper, subterranean as it were, and lose ourselves in quantum uncertainty. We ought to applaud the virility, obstinacy, and confidence it took to produce the first Higgs boson after a century of elaboration. The role of the Observer throughout makes the cosmos participatory, capitalistic, though we mean two kinds of observation. This raises the question of whether we can logically treat the two as one. Machinic Agency would treat as false the faith that human observation and material observation deserve any distinction.

Social systems throughout universal history, with effects of space-time projected onto each point, make human vitalism mere particles in the cosmic body-system, co-determinant with all possible subjective universes. We should conclude there is mind and free will, not only all the way up, but all the way down as well. That is to say, there is no difference between these perceptual machines in operative fact, only in our strategic commitment to one form over all others.

In the post-Marxist methods of Deleuze and Foucault, they shout loudly to us that exceptions and deviations will expand our axiomatization; that rebellion and social progress keep us all locked in place as part of the machine. They call this subjectivation, because the subject-object relationship is given to the winners and losers as if implicitly true. We are made cogs in a machine that produces terrible unintended moral consequences. How much more, as Schopenhauer felt, the cosmos or the body! Metaphysical agnosticism leaves no escape. Each of us bears the burden of moral responsibility, although not at fault and without confidence in our wisdom.

We are in need of an uncertainty principle in philosophy. Anything we perceive as an individual, a vitalism, a component in the system, a particle, implies with elaborate efficiency an entire class of particularized objects, behaving on a long enough time scale to have 50/50 uncorrelated probability for any binary outcome. The stochastic philosopher may then play a game, treating all such particles as free agents that act to exchange at their level. Not only do we not know whether their freedom or randomness is intrinsically different from what we feel, it also seems to make trivial difference in practice.

Another way to express the tendencies of our Rhizomatic Unconscious is by contrast against statistics themselves – the pinnacle of arborescent consciousness. The “law of large numbers” that we apply to population dynamics is predicated upon a simple trait, a dichotomy that manifests itself as axiomatically true. This only works when we define the population we wish to observe in advance. Observation is first a teleonomic prejudice of constraints. Science succeeds best when it is double-blind and relies upon uncertainty to produce probability! To succeed, we need a memoryless queue of opportunities, and an agent that acts with uncorrelated probability at each opportunity. With enough opportunities, we find the risk of error diffuses into obsolescence. If we want to predict with confidence, we must first break assertions into tiny homogeneous slices for which our incorrectness about one does not affect the outcome of the next.
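The diffusion of error described above is the law of large numbers at work. A small simulation sketches it, assuming nothing more than fair, uncorrelated 50/50 opportunities (the seed is only for reproducibility of the illustration):

```python
import random

random.seed(42)  # reproducible illustration only

def fraction_heads(n: int) -> float:
    # n independent opportunities, each an uncorrelated 50/50 outcome.
    return sum(random.random() < 0.5 for _ in range(n)) / n

# As opportunities accumulate, the observed fraction converges on the
# probability defined in advance; the risk of error diffuses away.
for n in (10, 1_000, 100_000):
    print(n, round(fraction_heads(n), 3))
```

Note that the population and the trait were defined before a single observation was made; the convergence is only meaningful relative to that prior constraint, which is the passage's point about observation as teleonomic prejudice.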

The arborescent conscious builds a hegemony of the majority around which all exceptions are related, within the logic of the observer, as normalized standard deviations from the average. The law of non-contradiction is not a priori knowledge, it is strategic axiomatization.

In contrast, the rhizomatic unconscious is the cumulative deviation that grows in a series of observations within a universe of thought. While uncorrelated probability allows us to wait until enough opportunities pass, waiting for the long-run probability to minimize risk of tiny components secured within the huge system, cumulative deviation is like placing a bet on that same coin toss repeatedly – we can predict with the same certainty what the probability of the next conscious event will be, but we cannot predict how many opportunities we would need to restore our winnings or our debt to zero. This restoration is the realm of morality, the critical leadership in pursuit of the cultivated universe.
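The asymmetry between next-toss certainty and break-even uncertainty is a standard property of the fair random walk, and it can be made concrete. A sketch (seed and horizon are illustrative assumptions):

```python
import random

random.seed(7)  # reproducible illustration only

def steps_to_break_even(max_steps: int = 1_000_000) -> int:
    # Bet one unit per toss of a fair coin and track cumulative winnings.
    balance = 0
    for step in range(1, max_steps + 1):
        balance += 1 if random.random() < 0.5 else -1
        if balance == 0:
            return step  # first return to zero
    return -1  # no return to zero within the horizon

# Each toss is predictable in probability (always 50/50); the waiting
# time to restore the balance to zero is wildly variable run to run.
print([steps_to_break_even() for _ in range(5)])
```

The walk is guaranteed to return to zero eventually, but the waiting time has no finite expectation, which is why no horizon, however long, lets us predict when the debt is restored.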

The unconscious of our social systems, easily expressed in narrative form, is every opinion and method of living that privileged agents leave unrecorded and untold; liberalism pursues the shouting of uniqueness and the failure of false conformity. We will return later to the impact of a personal unconscious: the feelings, impulses, ideas, and memories left for later, uncategorized, outside our narrow focus.

Truth-Value in Pragmatic Epistenomics

“For eternally and always there is only now, one and the same now; the present is the only thing that has no end.” – Erwin Schrödinger

Pragmatist Epistemology, developed initially by American psychologist and philosopher William James, explores the value of an idea based on the difference it makes in practice. Two or more people can observe the same events with wildly different facts. Large groups can produce data that bias skews in favor of an incorrect conclusion. We might discuss truth as the extent to which a cohesive network of ideas relates to verifiable facts, but philosophers have invested heavily in casting doubt. Perception can become distorted. Large groups appear to have believed intricate systems of ideas to which no one would now subscribe. Philosophers in epistemology took great pleasure in showing how often additional information forces a paradigmatic shift.

Some conclusions regarding perceptible events continue to push the curtain of physicality further into generation by our own minds. Psychological dysfunction exacerbates this issue. Therefore, William James needed to help patients question what is “real” when validity may be impossible to confirm – opinions, feelings, superstitions, and so on. Our brains skew the interpretations of data in the production process of reproductive-survival information. Much of human history across segregated populations produced incommensurable ideological systems that only gain temporary resolution through war. If truth is a form of militarization, then the validity of beliefs must play some darker role than we often hope.

We can see why James, working toward an understanding of human psychology in the decades following the Civil War, would have such a concern. When we carefully listen to people whose ideas do not agree with our own, exploring their explanations, empathizing with their biases, the pains of their past, and their hopes for the future, we find that the distinctions of observable reality can be intricate. In quantum terms, the more we look at one particle the less likely we feel it could be participating in a rational system of laws; in economic terms, the more we observe one person’s financial decisions the less we would feel anyone is acting rationally or in accordance with long-run self-interest.

When James developed the underpinnings of functional psychology, we see a nation of immigrants participating in an industrial revolution together. They came together in that proverbial melting pot of the American Dream. The hectic life, adventure, and opportunity made it clear that in an environment of constant change, crowded together as a population, needing to get along despite opposing views, placing truth-value outside humanity was dangerous. Entrusting reality to the alien dark matter of Kant’s metaphysics places it outside the moral responsibility of humanity. Any “truth” as aspired to by Hegel, whom few could claim to understand (the knowledge spirit gains about itself through its associations), justifies all forms of terrible actions to the extent that they inspire their own antithesis.

In his treatment of patients, it must have been painfully clear that the ancient Greek challenge of the skeptics had little relevance to daily life. From then to now, we deny that any component of a rock possesses hardness in itself; we only know our own painful experience in kicking it, one network smashing against the resistance of another. Something remained unanswered for James, and the debates of the empiricists and rationalists of Europe helped little.

James made an argument based on what we may now call a functional system. As we explore Fractal Ontology while maintaining Metaphysical Agnosticism, this functional resilience will be the measure of an idea’s value. Instead of Epistemology, we do better in calling this Epistenomics, the rules by which knowledge expands.

An idea is a commodity that valorizes only in continuous exchange. It is only through exchange that an idea becomes true. Rather than extensively defining truth-in-itself, we will methodologically apply three variations of truth-in-practice. First, Truth-Value is a semiotic system representation that intelligent beings exchange in accordance with axiomatized socioeconomic rules. Second, Information Dominance is the prevailing system of cohesive ideas regarding a topic that becomes “insured” by Political Economy; in other words, the truth-in-practice that is currently winning in each population. Third, we will update the old treatment of truth-in-itself as a goal that humanity has been willing to commit great violence to attain, Hegemonic Truth. Although Information Dominance achieves its victory because the system of ideas aims to attain Hegemonic Truth, the final answer, the causa prima of all other valid ideas, we will treat each with suspicion, and prepare ourselves for moral and philosophical wars of our own.

Because we are taking this initial proposition to an extreme logical conclusion, we will supplant what James accomplished while ascribing much to his legacy. A discerning reader will see where we are applying theories that arose only after James’ pragmatism. We will apply quantum mechanics, postmodern criticism, behavioral economics, and evolutionary biology to this “free market” of truth-value ideas. As a disclaimer, what follows has little to do with what James argued (or any other author we reference). As frequently happens in philosophy, we make arguments predicated entirely on the significance of an originator’s legacy, typically with little regard for their words or intentions.

Systemic Liberty & Component Freedom

Components act as units of teleonomic reproduction. They warrant attention as objects insofar as they are reliable decision nodes. Complex networks of nodes may develop based on simple decision rules despite the pressures of entropy. If the nodes are not stable enough in the reproduction of decision rules, network complexity will not arise. Node reproduction relies on the maintenance of an equilibrium-stable identity. Free play can generate networks, but continuous irreducibility and consistent rule identities are prerequisites of system regeneration.

Freedom of unrestricted components in any system of exchange manifests as the unmanaged, uncoordinated interaction of market tacticians. These desiring-machines are not only homogeneous but equally rational, equally information-bearing, and equally demand-producing. Lumping homogeneous tactical components together creates unstable networks but fails to produce stable complex adaptive systems. Complex adaptive systems require objectification of nodes as homogeneous resources. Therefore, freedom without accumulation of inequalities produces an enormous, boring aggregate of bacteria, heaped in a pile.

The pressure to submit to rules of objectification arises out of power-law dynamics; will-to-power is this final power-law trajectory of the cosmos. These competing systemic forces shape the success of free play at many levels of continuous irreducibility. Without gravity, magnetism, sexuality, pain, discontent, axiomatic drives so ingrained in our sense of mastery that very few recognize their enslavement, the system breaks out of the cadence of synchronization. Homogeneous freedom, perfect equality of all components, results in a non-system.

Morality in practice develops consistent rules of ethics. Politics reproduces these ethical systems. Freedom and equality cannot form a complex system of teleonomic reproduction. The State (as with the body, the cosmos) cannot empower its citizens beyond a certain limit of relative freedoms, disseminated through its system of inequalities, justified by the morality of its axiomatics. There can be no anarchy-state. Any freedom of socioeconomic exchange and any individualization of sociopolitical force is thereby post-paternalistic; all liberation is post-despotic.

We find this easy to accept so long as the law is orders of magnitude away from our daily exchange relations. Systems of relations and variables are always systems of inequalities, not only in the mathematical but also the social phase space. Without these inequalities, there is no differentiation of variables, no separation of space-time, no mass or gravity, no weak or strong nuclear force. It is only through an anti-equality, heterogeneity in four or more dimensions, that the system can intelligibly produce homogeneity of one category versus another.

The supply curve emerges unequal to but continuously relating in juxtaposition with the demand curve. The exponential decay of economies of scale emerges unequal but continuously relating to the exponential growth of inventory holding costs. It is precisely when we chain together the abstraction of any given variable into a continuous function that probable inequality is meaningful despite the complete absence of a single representative for the inequalities of the system.
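The juxtaposition of decaying ordering costs against growing holding costs is the classic economic order quantity trade-off, and it shows how a minimum is meaningful without any single representative of the system's inequalities. A sketch with hypothetical figures:

```python
import math

def total_cost(q: float, demand: float, order_cost: float, holding_cost: float) -> float:
    # Annual ordering cost decays with batch size q (economies of scale);
    # annual holding cost grows with q; total cost is their continuous sum.
    return (demand / q) * order_cost + (q / 2) * holding_cost

def eoq(demand: float, order_cost: float, holding_cost: float) -> float:
    # Classical economic order quantity: the batch size minimizing total_cost.
    return math.sqrt(2 * demand * order_cost / holding_cost)

# Hypothetical figures: 1,200 units/year demand, $100 per order,
# $6 per unit per year to hold.
q_star = eoq(1200, 100, 6)
print(round(q_star))  # → 200
```

No particular unit of inventory "is" the economy of scale or the holding cost; the optimum emerges only from the continuous relation between the two unequal functions.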

How silly of us to say, as it were, that this particularized man is a “demand variable” or that particularized woman is a “supply variable” of these socioeconomic networks! This was always the fantastic insight of the economics and physics of the early Modern Era – when we take an economic view of a sociopolitical system, there is no need to find any particularized representative of inequalities. The gradation of ranks emerges without management, though this is frequently a mismanagement. Emergent systems only need continuous sampling to keep the function unequal, and thereby meaningful, in comparison to other functions.

We can see that an increasing comfort with abstraction, probability, and tolerance limits has driven human progress. For example, the ability to symbolize (representation) not actual inventory, nor actual intrinsic worth, nor actual persons who exchange, but the logical set of all items held, all wealth accumulated, all persons who may exchange. Only then does the human symbolization of inventory, particles, and citizens plot into waves of normalized, probable, continuous possibilities.

To the quantum system, truth-value is a particle; but it is only a particle in principle. The statistician’s real and indoctrinated citizen must exist somewhere in this probability density of the population. Is she economic supply or demand? Is he political inventory holding cost or political transaction cost? These systems of inequalities are vectors, but only in aggregated calculation, in sufficient populations, driving us toward that ultimate capitalist conclusion – s/he is a superposition vector.

Superposition citizenship, a moving-forward concretization, a superposition of a multitude of vectors, functions, calculi, and inequalities. The citizen is a particle of the social fabric, if only in principle. The atomic particle and the quantum particle are likewise citizens in principle. These are all inventions of the human mind, “real” functions, mathematically, but non-actual existentially. How lucky for us that such schizophrenia and bipolarity, such increasing alienation of probability from singularity, turned out to be pragmatically essential to a digital revolution that will self-perpetuate, remembering our imaginary simulacra on our behalf!

However, we come to an uncomfortable relation in singularity. Who am I, in a phase of recursive reflexivity, when my most reliable identity is the one defined by standard deviations from a particularized citizen that is true “in principle” beyond a reasonable doubt, but is axiomatized upon the acceptance that existential instantiation is merely relative? The failure of unquestioned personal morality lies in unquestioning submission to the ethics of the political system. More often, this results in displacement into some invasive ideology as a system of denial, because the system of values has become morally bankrupt. This is a pornographization of the soul (even in the absence of gods) that would make the most despotic of the medieval popes jealous. The consumer-citizen is no longer compelled to produce or consume commodities; that is a foregone conclusion. The weak, cowardly, distracted soul must now produce and consume itself. This production is relative to the gravitational pull of normalized continuous citizenship waves, again, in principle.

Sublime Simplicity

O sancta simplicitas! In what strange simplification and falsification man lives! One can never cease wondering when once one has got eyes for beholding this marvel! How we have made everything around us clear and free and easy and simple! how we have been able to give our senses a passport to everything superficial, our thoughts a godlike desire for wanton pranks and wrong inferences!–how from the beginning, we have contrived to retain our ignorance in order to enjoy an almost inconceivable freedom, thoughtlessness, imprudence, heartiness, and gaiety–in order to enjoy life! And only on this solidified, granitelike foundation of ignorance could knowledge rear itself hitherto, the will to knowledge on the foundation of a far more powerful will, the will to ignorance, to the uncertain, to the untrue! Not as its opposite, but–as its refinement! – Nietzsche, Beyond Good and Evil

Rebellion, Bigotry, and Due Process

To build a legacy that will truly last, we need consistency of self-identification in addition to experimentation. This takes balance. An over-reactive system may adapt quickly, but it will fail to scale in complexity.

A complex system too rigorous in its resistance to change, cutting down strange attractors and emergent organic leadership that opposes its orthodoxy, will find itself maladaptive. Such a system, if made of relatively independent actors, will produce schisms, splits, and offshoots due to the excessive fundamentalism of its self-identification process. Instead, we systems builders want the robustness, strength, and adaptiveness that results from the tension between tradition-rooted cultivated practices and the spontaneous pursuit of fashion and buzz. This tension is healthy so long as it drives the continuous experimentation engine of the organization, allowing signals to emerge and backpropagating message errors. In other words, we should expect “just enough” sibling rivalry at any level of the organization. We should expect triangulation and escalation of signaling.

Experimentation requires tension, but discovery requires due process – the fair treatment of both sides in a conflict resolution.  The essential role of the systems builder is the promotion of self-awareness. A system unable to recognize its own constructs and correct them is unable to change. We see all too often that power need not be taken from someone so long as they believe they are powerless. Similarly, an executive order is rarely an effective mechanism for introducing lasting adaptation, but it can be quite effective as a signal for the system to self-organize against.

A cultivator of adaptive systems does not generate unrealistic and unreasonable new rules in an effort to artificially push the system in a new direction; rather, we make the existing rules of interaction and exchange visible and known equally. Where the rules allow differentiation, we make the logic behind such distinctions known, trusting that an adaptive system will correct itself. We do not employ attrition warfare – one ideological information system against another – instead we maneuver against the broken logic of the enemy system. In this way, organic leadership does not pursue the wholesale destruction of an opposing nation, religion, economic institution, or political party. This is folly, as the diminishing returns of attrition warfare deplete energy, resources, and public support. Those are people on the other side of our wars, after all, and our monstrosities in the pursuit of victory easily turn public support against us.

Instead, wherever there is differentiation the systems builder ensures there is equal access to knowledge of the logic behind apparent unfairness. We encourage open rebellion and take even the least realistic signal seriously, as this is preferable to letting the system stagnate while an insurgency is festering behind closed doors.

To maintain our persistence, to balance tradition and stability with experimentation and responsiveness, we must above all ensure due process and the faith of the public that due process provides fair treatment in any conflict at each level of the system. It is decentralization of local process enforcement that allows systems to experiment. Equal access to escalation of justice will resolve critical reinterpretations. We do not need, as systems builders within an overarching complex adaptive system, to control the rebellion of progressives and early adopters, nor the backward quasi-bigotry of instinctually late adopters. We must only ensure the system is healthy enough to re-inform itself based on the outliers, trends, and signals.

Why? Why? Why? Why? Why? (5 of them)

As I described this weekend on Snapchat using the example of my house, Root Cause analysis – or asking the 5 Why’s – is essential to lean scalability and a thriving culture of relentless improvement. In complex systems thinking, you must see problems (lack of quality, decreasing sales) as symptoms of the system as a whole.

I bought my first home in November in a north suburb of Chicago. Naturally, that means finding little issues here and there as I go. It was originally built in the 1950s, and I knew it was in a neighborhood that had flooded a bit a few years back. I was excited from the first tour to see a fantastic dual sump pump system in the finished basement.

Unfortunately: The previous homeowner had treated the symptom, not the problem.

A house (like a software product or tool in its context) is part of a complex adaptive system. It is inserted into a biological ecosystem, and integrated with multiple networks (cable, electrical, plumbing, roads). What the previous homeowner did is a mistake many of us make when it comes to eCommerce, marketing campaigns, enterprise software, you name it – the symptom was treated in the context of a system in homeostasis without changing the ability of the system to adapt to deal with a chaotic event.

SO – my basement has flooded, just a little, three times this spring.

Enter the “5 Why’s” Analysis:

1- Why is the carpet wet in the basement?  The sump pump didn’t pump out the water quickly enough. If I were to continue to treat the symptom, I might upgrade the sump pump, which is expensive and might not work (and what we tend to do in the workplace).

2- Why didn’t the sump pump handle it?  There was too much water around the house, building up hydrostatic pressure. The second time we had flooding, I noticed that the water appeared to have come in from all sides, not from the sump pump reservoir overflowing. (i.e. without “going to the place” I might have continued to blame the sump pump)

3- Why was there too much water around the foundation? I have a negative grade, meaning my lawn on one side slopes slightly toward the house. Again, easy to blame that and spend a fortune on a re-grading (legacy system migration anyone?) but I had the joy of really, really “going to the place” and spent a 1-hour flash-flood storm OUTSIDE, managing the flow of water in non-normal conditions. After all, the yard may slope slightly, but there are 4 basement egresses with drains in the bottom that run to the sump pump…

4- Why did so much water flow to the basement window wells that the drains couldn’t get the water to the sump pump quickly enough? (notice that we are finally getting somewhere in our root cause analysis!) Once I was out in the storm, it was clear that the rain on its own was not the issue: despite having cleaned out my gutters hours before the storm, the winds that blew the storm in kicked lots of new leaves onto my roof, blocking the gutter, and a waterfall came off the gutter onto the negative grade instead of going down the downspout system that drains the water in a safer direction. What I also noticed was that the sidewalk gradually filled with water from the downspout – meaning there was a certain amount of in-yard flooding that could occur before the water would pour unchecked into my window wells. (note, I could invest in LeafGuard or something as part of a total replacement of my gutters, but have we really found the root cause?)

5- Why doesn’t the system (my house in its context) handle the flow of water in that quantity? Now we’re down to business. The soil has a high clay content and hasn’t been aerated recently. The previous homeowner removed bushes on that side of the house but not the roots and stumps. The downspouts eject water 3 feet from the house, but into an area of the lawn that can be easily filled with water that will then flow back to the egresses.

Root cause – The system is not prepared to handle the flow of unwanted inputs under non-normal conditions.
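The chain above is easy to capture as data, so the path from symptom to root cause stays reviewable and shareable. A minimal sketch (the wording is condensed from the story above):

```python
# Each entry pairs a "why" question with the answer that prompts the next one.
five_whys = [
    ("Why is the carpet wet?", "The sump pump didn't pump the water out fast enough."),
    ("Why didn't the pump keep up?", "Hydrostatic pressure from water all around the foundation."),
    ("Why so much water at the foundation?", "A negative grade plus overwhelmed window-well drains."),
    ("Why were the window-well drains overwhelmed?", "Storm-blown leaves blocked the gutters, bypassing the downspouts."),
    ("Why can't the system absorb that flow?", "Clay soil, leftover stumps, and downspouts discharging into a low spot."),
]
root_cause = "The system is not prepared to handle unwanted inputs under non-normal conditions."

# Print the chain from symptom to root cause.
for i, (why, answer) in enumerate(five_whys, start=1):
    print(f"{i}. {why} -> {answer}")
print("Root cause:", root_cause)
```

The structure makes the discipline visible: each answer must motivate the next question, and the chain stops at a cause you can actually act on.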

Oops, I slipped into discussing emergent leadership in complex adaptive systems.  What I meant was, nobody had bothered to look at what happens to the flow of excess water in flash-flood conditions.  Just like I frequently see no one planning for “storms” in their agile or devops culture, their social media presence, or omnichannel efforts.

To round out the story, now that we have a ROOT CAUSE, I can come up with a…

Solution – Create a sub-system that encourages adaptation to non-normal systemic conditions.

Sorry, I did it again.  But you really can’t tack on a new tool or process if you have underlying cultural factors that need to be addressed.  For my house, the answer is simple: add a French drain system that will handle excess water during a flash flood.

Now, with my years in custom app development consulting, the parallel is really quite striking. Investment in a bigger pump, a total re-grading, or new and improved gutters would have been an expensive way to deal with emergent properties of the system without helping it adapt properly to non-normal stress. The French drain and dry well implementation I have started will require some hard work (I’m digging it by hand!) but potentially no cash (I already have more river stones than I know what to do with).

  • I’ve discussed how this applies to agile or DevOps transformations that don’t address cultural problems.
  • I’ve shown how bad investments in software happen due to a lack of understanding of the root cause.
  • Look for more on how this applies to eCommerce and Marketing on the way!

5 Reasons I Would Fire You

Originally posted April 2016. 

Disclaimer: I currently work solo on this blog and could only fire myself – so this isn’t a veiled threat.  I have done my best to mentor individuals and lead teams away from these dysfunctions, and to disrupt processes that perpetuate them.  These are also part of my personal introspection process.  This is not an accusation of anyone in particular.  Instead, these are traits we can all continuously work to improve.  On the other hand – “You’re so vain, I bet you think this song is about you.”

The Top 5 Reasons I Would Fire You

Tech professionals on teams trying to innovate:  Speaking on behalf of managers, your peers, and individual contributors everywhere, these are the top five reasons you aren’t just a poor performer, you’re bringing down the people around you as well.


Reason #1 – You Default to One-Way Communication

Collaborative problem solving cannot happen without meaningful and timely feedback.  There is a time for group chat and a time for well-argued prose (email).  To avoid death-by-chat and long CYA email chains, you need to set clear expectations about when you need to focus and when you can discuss issues – and respect that prerogative in others. 

Whining about documentation, instructions, or a process document brings you no closer to a better workplace experience for yourself, improved team health, or a product that gives you lasting pride, prestige, or a sense of legacy.  Bring a solution to the table, own your responsibility for following up, and escalate to a scheduled meeting if needed.  Folding your arms and leaving work unfinished is childish.  You know you can do better – do it.

Mantra – There are no documentation problems, only communication problems.


Reason #2 – You Repeat the Same Words When I Say I Don’t Understand 

Speaking of childish, self-advocacy is an important milestone.  It requires enough vocabulary, understanding of abstract concepts, and recognition of similarities and differences to allow a child to not only imagine a future state that is desirable, but also solve the most likely path to attain it, and make a rational statement to an adult who can permit, empower, or provide.  My three-year-old daughter, forgivably, needs an enormous amount of assistance, and patience, when she attempts this.  As an adult, you should not.

As a leader, I will do my best to bridge the gap between your words and my words.  I will cue you when I am unable to build that bridge, repeat back to you what I understood you to say, and ask you to demonstrate or show me where and what you mean so that I have the context I need for a deliberate and logical decision.  I will do all of this without patronizing you, even when it is mentally exhausting for me.

Not everyone has learned to lead this way, and I admit I can be imperfect at it as well, so you absolutely need to learn to self-advocate.

That said, I cannot heroically be an adult on your behalf.  The real dysfunction that brings down team performance through your own sub-par performance is the continued repetition of the same words when I (or others) explicitly ask you to re-word the request, argument, or question.  You are obstinately anti-try-something-else.  You refuse to paraphrase, assist my incorrect understanding, or demonstrate the meaning of your words.  It is only through my strong personality and insistence that I convince you to show me exactly what the problem is so that I can solve it rather than answering a question that sounds like utter nonsense out of context.  Unfortunately, even that is not always effective.  I can carry my pre-school daughter to the cabinet and let her pick the exact afternoon snack she wants.  I cannot “carry” you as an engineer into a realm of creative solutions where emerging technology and emerging market segments meet.

Mantra – Communication is the responsibility of the communicator.


Reason #3 – You Feel No Pride of Ownership Over Your Work

Having coached, worked with, or heard the complaints of hundreds of tech-focused professionals in various industries, I have found this can often be more a symptom of organizational dysfunction than the root cause of poor performance.  The tech industry today is too mentally demanding and excitingly disruptive to attract genuinely lazy people looking for a free ride.  So when you start giving in to distraction, procrastination, or laziness, my leadership spidey-sense goes off.  I will tell you the secret to motivating innovation-based technical teams – empower them to know the impact a line of code will have on an end user. 

Karl Marx’s philosophy describes this exact phenomenon in its examination of the individual worker’s separation and alienation from the product.  Superficially, the question seems quite simple: which is more rewarding, being a carpenter who makes custom-installed wooden shutters, getting to know the customer, their home, and their tastes in the process, or running a factory machine that produces millions of shutters for a big-box store’s generic one-size-fits-all product line?

If you have lost pride of ownership over your work as a software professional, though, shame on you.  You have no excuse for complacence, apathy, or becoming disengaged.  Your skills are a premium product in a seller’s market.  Companies of every size will fight to win you to their side. With one idea and a few colleagues, you could start a company of your own in a heartbeat.

Now, let’s be adults here.  We all have to collaborate and negotiate.  When the majority or a manager makes a call that goes against your individual dissenting opinion, don’t stomp away and pout.  Losing pride of ownership over work and settling into a free-rider paradigm brings down the team, the product, the end user, and your career.  You better woman-up or man-up and either do a great job that you can be proud of, work to change the organization that is stifling you and your peers, or move on.

Change takes courage, but our virtue is the outcome of our habits.  Accepting and justifying your childish, dysfunctional, lazy, sub-par effort and excusing yourself through an external locus of control hurts no one more than you.

Mantra – Anything worth doing is worth doing well.


Reason #4 – You Hide Behind Uncertainty

Deconstructionism is a dangerous game, especially when you are part of a team that is teetering on the edge of a cliff overlooking the seas of chaos, moments from falling into market risk or technical risk that could engulf you.  Since I coach teams on how to become a room full of adults solving the pains of a real person through a collaborative, unified, inspired collective brilliance and sheer power of will, I have a radar for someone  who is hiding. 

You are playing a dangerous game.  You signed up for this, after all.  You wanted to be brilliant, in the thick of it, defining emergent market segments using emerging technologies – but the minute you lost faith in the cause, lost hope for your job security, or lost belief in yourself as a builder and creator of new tech that can change the user’s world… that was the moment the inherent uncertainty of our goals became apparent.  You shut down.  You got stuck.  You became intolerant of technical risk AND market risk and looked to your leaders to spoon-feed you.

At first, a good leader can give a big speech, host a team-building event, or roll up the proverbial sleeves to help.  When the team as a whole needs some slack but still has its eye on the prize, I have a long list of tools and tricks to re-energize the whole team.  When an individual begins the process of deconstructionism, and moves every conversation into an infinite regress in which the certainty of any word or any intention or any risk is now more important than the product discovery process, that’s when a tough-love heart-to-heart happens.  Agile demands small increments.  Innovation requires trial and error.  You must remain infinitely curious.  You must self-advocate for the size of the risks you take.  Escalate when time-to-feedback is hurting you.  Steady yourself and your tenacious attitude about the “failure” intrinsic to empirical discovery – otherwise you don’t belong in this work space.

Mantra – Fail fast to succeed sooner.


Reason #5 – You Give Up Before Attempting to Solve a Problem 

This issue often comes hand-in-hand with insecurity toward uncertainty.  When it comes to coaching a product visionary in agile, this means drilling in the importance of setting goals for the product, an end user to empathize with, and a pain to solve in the target user’s particular context.  Once that is in place, a team – as a whole – may need some encouragement that a 100% success rate is not the goal.  Innovative, defect-free software that fits the user’s needs is the goal.  As it turns out, some people fear failure too much to risk it.  If that’s you, make sure you are in the least innovative technical space possible.  Sink your teeth into a legacy system and never complain about the spaghetti code you manage again.  That slow-moving space is perfect if you prefer to play it safe.

Innovation may not be important to SOME people, but it is VERY important to the REST of US.  The courage to risk failure is essential to experimentation. 

The real issue, of course, is not the fear of failure.  It is a lack of proper perspective that puts your short-term ego ahead of long-term viability.  It is a base rate fallacy in which you are ignoring the most important variables.  Pretend for a moment that we have a product for which any given User Story – which we’ll restrict to less than two weeks of effort from planning to production – has a 70% chance of success (completion in two weeks) despite technical uncertainty and a 20% chance of success despite market uncertainty (i.e. “is it really what the end users need?”) – taken together, roughly a 14% chance the increment both ships on time and is wanted by the market.  If you take the risk of a false positive – succeeding in releasing a working product increment that the market doesn’t demand – as the only indication of your own failure, you are sure to be unhappy. 

Now, imagine a breathalyzer has a 5% probability of a false positive.  A police officer pulls over drivers truly at random, at a random time of day.  What is the probability that a driver who tests positive is actually drunk?  Guess what!  A dreadful 2% chance.  Luckily, officers are trained not to play the odds like that.  The time of day, the day of the week, the location selected, and driving behavior all weed out the risk of a truly random selection.  Then recognition of symptoms through human interaction must give probable cause. 
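The arithmetic behind that dreadful 2% is a straightforward Bayes' theorem calculation. As a sketch (the post does not state the prior, so I'm assuming roughly 1 in 1,000 randomly stopped drivers is actually drunk, and a test that always catches a truly drunk driver):

```python
def p_drunk_given_positive(prior, false_positive_rate, true_positive_rate=1.0):
    """Bayes' theorem: P(drunk | positive test result)."""
    p_positive = true_positive_rate * prior + false_positive_rate * (1 - prior)
    return true_positive_rate * prior / p_positive

# Assumed prior: 1 in 1,000 truly random stops catches a drunk driver.
print(round(p_drunk_given_positive(prior=0.001, false_positive_rate=0.05), 3))
# A 5% false-positive rate swamps the tiny prior, leaving roughly a 2% chance.
```

Change the prior to something an experienced officer effectively applies (say, 1 in 10 at 2 a.m. outside a bar) and the same test becomes far more trustworthy, which is exactly why officers are trained to select before testing.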

When you stop trying to overcome technical risk or market uncertainty before even attempting to solve a problem, you’re like a cop who stops pulling anyone over due to the statistical uncertainty of a false positive.  If you attempt to solve 0% of the problems you face, you’ll come away with a 100% lack of solved problems. 

Tackle 100% of the tough challenges tenaciously and courageously, and look for an assist as needed.  Anything else makes success incredibly unlikely.  Market risk makes success hard enough.  Don’t ruin the odds further by quitting in the face of technical risk.

Mantra – You miss 100% of the shots you don’t take.


Grow Up or Move On

If these sound like you, work to grow as an individual or you are likely already on your way out the door.  If you, your peers, and even your manager exhibit these traits and the organization seems unlikely to change despite a heroic group effort – it’s time to move on.  Complacence, apathy, and passive aggression are terrible for your career.

I’ve taken to saying, “Some people just want to watch the world burn – the rest of us build it anyway.”  If you aren’t a builder, at least stop burning down what the rest of us will happily accomplish with you.

Measuring What Matters to Innovation

Competing on Innovation

The rate of technological innovation and adoption has accelerated to a level that would have been impossible to imagine centuries ago. As a result, an amazing number of startups have been able to succeed from a strategic position based on niche-market differentiation and a culture of passionate innovators.

Established – and well-funded – large enterprises have taken note. Adoption, disruption, and even disappointment and abandonment of products are forcing every company to tackle what cloud computing, mobile, and IoT mean for their categories.

Every company is trying to become innovative, because we are all competing based on technological innovation.

Unfortunately, building a sustainable competitive advantage around a culture of innovation is far more complex than previous strategic positioning was. Nash equilibria a few decades ago could be easily resolved with a few long-term commitments that ensured indirect competition and margins for all major players. Innovation could actually be stifled – purposefully – by industry oligarchs to minimize the risk of new entrants. The rate of innovation today has made this control impossible, to the benefit of the consumer.

However, building a culture of innovation requires an entirely new way to structure the organization and reinforce its behaviors. This is made even more challenging by the abundance of data that was previously inaccessible. Choosing which metrics to measure formalizes which decisions are worthy of notice. Thus, to understand what to measure, a leader must know not only how to measure a metric but also why the metric matters to the achievement of long-term goals.

To explore this, I’ve been discussing the connection between individual motivation and the company’s goals as a Minimum Viable Superorganism:

‘Selfish individuals pursuing shared goals (arising from shared underlying incentives), held together by a Prestige Economy which consists of two activities: (1) seeking status by attempting to advance the superorganism’s goals, and (2) celebrating (i.e., sucking up to) those who deserve it.’

The executive and visionary of the company must be the purposeful “mind” leading the activity of the superorganism. The relationship between employee motivation and company outcomes relies on this Prestige Economy, so leading a company means guiding this economy.

Naturally, it is easiest for me to discuss this in terms of scaling agile or scrum, or technological innovation in startups versus large established enterprises. Discussing each, if you are hoping for very specific metrics for your company, would take the rest of my life, so I will leave those conversations to a per-company basis. However, the fundamental issue confusing the connection between innovative teams and enterprise-level accounting metrics is the near-willful forgetting of what has truly changed in our new and rapidly accelerating tech-adoptive society.


 

Market Risk vs Technical Risk

If you recall the example of the terrible metric (for innovation-based companies) “allocable resource utilization”, we saw the well-established, consistent distribution – i.e. strategic trade-off – between utilization and responsiveness.

If you imagine one very capable engineer you’ll find:

  • Demands arrive to the employee at a variable rate.
  • Work is accomplished at a variable rate.
  • There is one worker.
  • The possible queue of demands is potentially infinite.

This type of queue is an M/M/1/∞ queue.

Whether the “1” here is a server, an interstate, a mobile developer, or a 1990’s movie hacker’s CPU, increased utilization of total potential results in rapidly-accelerating diminished returns as utilization reaches 100%. Likewise, you can occasionally “overclock” with appropriate support and recovery, but constant over-use results in chronically poor responsiveness to demands, a pile up of unmet requests, and finally, a CRASH.
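The trade-off above can be sketched with the standard M/M/1 result that average time in the system (waiting plus service) is 1/(service rate − arrival rate). The service rate of 10 requests per day below is an assumed figure purely for illustration:

```python
def mm1_avg_time_in_system(arrival_rate, service_rate):
    """Average time a request spends in an M/M/1 queue (waiting + service).
    Only defined for utilization < 1; beyond that, the backlog grows forever."""
    if arrival_rate >= service_rate:
        raise ValueError("utilization >= 100%: the queue never stabilizes")
    return 1.0 / (service_rate - arrival_rate)

service_rate = 10.0  # assumed: requests one engineer can handle per day
for utilization in (0.5, 0.8, 0.9, 0.95, 0.99):
    t = mm1_avg_time_in_system(utilization * service_rate, service_rate)
    print(f"{utilization:.0%} utilized -> {t:.2f} days per request")
```

Running from 50% to 99% utilization, average responsiveness degrades from a fifth of a day to ten full days per request, and at 100% it diverges entirely, which is the "CRASH" described above.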

HOWEVER, competing on innovation requires additional variables for our M/M/1/∞ queue, because it treats “requests” as discrete abstract entities. It does account for requests not being the same size or difficulty – technical risk – by assuming work is accomplished at a variable rate.

What the M/M/1/∞ queue fails to account for is whether or not the request was the correct request. In the case of a server handling API requests, we could assume the probability of an incorrect initial request is zero if we treat retrieving unwanted data as the failure of an unintuitive front end or simple user error (the API and the server are not the guilty party). If we imagine a highway over an extended period of time, drivers who intended to take a different route and got on the highway by mistake are outliers, and an even smaller percentage behave in a way that would impact the flow of traffic.

In technological innovation and the development of any given software product, the risk that the request was not the correct request at the time it was made, or is no longer the correct request by the time it has been fulfilled, is EXTREMELY HIGH. The variable that prioritizes responsiveness over utilization in technological innovation is Market Risk.

After all, when we say “competing on innovation” what we really mean is “responding the fastest to disruptive market shifts while also creating market disruption or new demand and adapting quickly enough to capture value profitably”.

We don’t say the latter, of course, because it isn’t as sexy.

The reality is that a culture of innovation requires a few things that run counter to the leadership methods of old-school consulting or manufacturing organizations. Don’t go asking for quantifiable metrics on these, but an innovation culture requires things like:

  • Excess allocable brilliance
  • Willingness and aptitude for adaptation
  • Vigilant feedback-seeking
  • Permanent restlessness

The greatest risk of any innovation-based company is not the technical risk of learning to implement what was promised or the project risk of time-to-market or delayed advertising campaigns. The primary risk when competing on innovation is the market risk of whether, for any new product:

  1. The market knew what to demand
  2. Supply correctly met that demand
  3. The product still met a need by the time it went to market

In the App Store alone there are thousands of new apps per day. The market risk for a software product in this one market is unprecedentedly high, and it shifts immense market risk to every feature added and every update released to your company’s increasingly less loyal consumer base. That is why Scrum works in sprints, with increments that should always require less than a single sprint (two weeks, ideally) of development team effort – not to mitigate project or technical risk, but to mitigate the market risk that the end users or stakeholders knew what they actually wanted, knew the impact on the overall product, properly communicated it, and still want it by the time it is delivered.


 

Fail Faster to Succeed Sooner

We can see that when competing on a culture of innovation, receptiveness and relevance are the necessary complements to responsiveness. This is the most important way in which agile delivers higher-quality software. Code quality, user testing, and market fit are all checked as often as possible. A great Scrum team fixes cosmetic, logic, and intuitive-experience problems as they go, looks for immediate feedback about demand for the feature, then enhances each tiny increment of the product prior to each release. In agile we call this “failing fast” so that we can ensure we succeed sooner. The tight feedback loop means creating the right thing very well, based on the newest information available.

Time-to-irrelevance is the greatest risk to every innovation-based project. Not only the market risk of irrelevance, but also the loss of relevant context – both code and product vision – when ensuring the quality of the software and resolving defects or maximizing the return on a feature by improving it before moving on to the next feature.


 

 

Metrics that Matter

If we are driving a superorganism comprised of teams focused on product innovation – not only in software – we can see there are plenty of metrics that will reinforce a Prestige Economy built for succeeding in innovation. I’ve described a handful below. One last note of caution: while these metrics are powerful and valuable, it will still be essential to clearly express who is accountable for each metric and to empower that person or team so that they are in control of it. These are also defined rather philosophically. If you have a documented Work In Process flow to share, I can give you specifics.

Measurement Goal #1 – Receptiveness

Feedback-to-Answer cycle time – the total process time from the market making a demand to the market receiving an indication of response. In classic “core” Scrum, this may simply be the time from a customer making a request to the Product Owner telling that customer a valid expected release date. In a large-scale environment, this may be the time from a Tweet received by Marketing to the time Marketing announces the planned features in a new update that contains the feature requested on Twitter. To the extent your large enterprise is attempting to compete with small startups, this is the crux of your challenge. An entrepreneur leading a small team only needs to manage her or his reputation for accuracy of promises and find a reliable way to ensure that single-minded heroic vision for the product becomes a reality. The cycle time from neuron to neuron is infinitesimally smaller than any scaled cycle time that includes multiple business units, functional teams, vendors, and a PMO.

The Feedback-to-Answer cycle can also be measured at the Work-In-Process level – when a card in a latter step is kicked back to an earlier step, how long does it take for that feedback to receive an answer? If it takes a long time and there is very little work-in-progress, this is a sign that receptiveness is poor. Maybe the Scrum board isn’t visible enough or the daily stand up is not as effective as it should be. On the other hand, if there is a huge amount of work-in-progress, capacity is over-utilized and responsiveness is suffering – setting WIP limits may be necessary (even if only for a short experimental period).
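At the board level, this measurement only needs a timestamped event log. A minimal sketch, where the card IDs, event names, and timestamps are hypothetical placeholders for whatever your tracker actually records:

```python
from datetime import datetime

# Hypothetical card-event log: (card_id, event, timestamp).
events = [
    ("A-101", "kicked_back", datetime(2017, 3, 1, 9, 0)),
    ("A-101", "answered",    datetime(2017, 3, 1, 15, 30)),
    ("A-102", "kicked_back", datetime(2017, 3, 2, 10, 0)),
    ("A-102", "answered",    datetime(2017, 3, 3, 10, 0)),
]

def feedback_to_answer_hours(events):
    """Hours from a card being kicked back to its feedback receiving an answer."""
    opened = {}
    cycle_times = []
    for card, event, ts in events:
        if event == "kicked_back":
            opened[card] = ts
        elif event == "answered" and card in opened:
            hours = (ts - opened.pop(card)).total_seconds() / 3600
            cycle_times.append(hours)
    return cycle_times

print(feedback_to_answer_hours(events))  # -> [6.5, 24.0]
```

Trend the resulting list per sprint rather than per card; a single slow kick-back is noise, but a rising median is the receptiveness signal described above.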

Feedback-to-Answer Quality – This is likely to be a qualitative measurement if used on an ongoing basis, and is likely a tertiary metric looked at only occasionally. The most relevant use for this metric is electronically documented support tickets that receive a rating from the requestor after they are closed. The problem with qualitative responses, of course, is that only the most positive or most negative reviewers tend to surface. This makes it a poor metric for individual or team performance, but it should be summarized more broadly as an indication of the process. Don’t set a target; just learn from the insights.

Supply-to-Demand Receptiveness – From a buzzword standpoint, this is your “Social Listening” as an organization, both internally and externally. From the time you meet a demand, how long does it take to discover that you met the right demand? How long does it take to know if you met the right demand correctly? For software, don’t leave this purely in the hands of social network listening – build into your software trailing indicators like usage analytics, product-wide ratings, and (sometimes) per-feature ratings and feedback soliciting.

In a large-scale product environment, pay attention to listening from all sources. A few new “ceremonies” are going to be needed to encourage collaboration from Epic Owners, program/division-level alignment of a shared backlog across products, and Stakeholder Gathering to solicit additional feedback.

Receptiveness is the precursor to responsiveness. If you aren’t “listening” to your market, you will never respond to demands correctly.

 

Measurement Goal #2 – Responsiveness

Demand-to-Supply cycle time – This is the traditional definition of cycle time and the best metric that carries over from Lean Manufacturing to Lean Startup principles. From the moment a market demand is made, assuming receptiveness is held constant, what is the total process time until supply can meet that demand? Anecdotally, the highest performance with this metric in a Scrum team I led as Product Owner was on a large enterprise tool. We released to stakeholders twice a week and released to production weekly. A feedback feature was created that gave the users direct input into our product backlog. We were able to respond to improvement requests made on Monday in a fully-tested Production Release that Wednesday. Statistically, these wonderfully short feedback cycles were outliers and relied on circumstances more than team performance. I share that anecdote as a challenge to whatever complacency you may have. That said, if average Demand-to-Supply cycle time is greater than 90 days, I would challenge you to consider whether you are really “listening”.

Demand-to-Supply lead time – This is the traditional definition of lead time and is the time from initiation to completion of a production process. In classic Scrum, that’s the time from commitment at Sprint Planning to the time it is called Potentially Shippable by the Product Owner. This is a once-in-a-while metric that should be checked as an indication of whether teams are sizing stories and committing properly. Whether a team is new or old, they will need extra reinforcement from a manager when average lead time is consistently greater than the sprint length. This is a sign of over-commitment and sprint carry-over. Too much WIP, over-utilization, and poor story sizing will leave a team hamstrung. Quality will suffer, context-switching will decelerate velocity, and burnout will occur. This is at the heart of Little’s Law. When lead time is consistently greater than sprint length, this isn’t a performance metric to track – it’s a trailing indication that management needs to set clear expectations around the trade-off between velocity and quality (including market fit). Set WIP limits and enforce true swarming activities. Because WIP is a leading indicator for lead time, reducing WIP should bring lead time back below sprint length.
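Little's Law makes the WIP-to-lead-time relationship concrete: average WIP = throughput × average lead time, so average lead time = WIP / throughput. A minimal sketch, where the throughput figure is an assumed value for illustration:

```python
def average_lead_time(avg_wip, throughput_per_day):
    """Little's Law: L = lambda * W, rearranged to W = L / lambda."""
    return avg_wip / throughput_per_day

sprint_length_days = 10  # two-week sprint of working days
throughput = 1.2         # assumed: stories finished per working day
for wip in (6, 12, 18):
    lead = average_lead_time(wip, throughput)
    verdict = "OK" if lead <= sprint_length_days else "carry-over likely"
    print(f"WIP={wip:2d} -> average lead time {lead:.1f} days ({verdict})")
```

Note the direction of the lever: at fixed throughput, the only way to pull lead time back under the sprint length is to cut WIP, which is why the remedy above is a WIP limit rather than exhortations to "go faster".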

There is an important call-out here. While experienced Scrum teams know all too well the relationship between team WIP and lead time, this only covers the process states for that team. In a scaled implementation, where the stakeholders have an internal proxy voice (imagine a product division large enough to include a Social Listening Analyst on the marketing team), WIP limits at the Epic, Feature, Theme/Campaign, and Product levels may be necessary. Putting too many items on the Work In Process flow of the Product Marketing Managers who must A) add Epics to the Product Backlogs and B) give feedback that the right thing was built and properly fits market demand will create terrible inefficiency in the overall innovative delivery process. The same is true when new Business Development or long-term relationship-owning Account Managers are part of the input and output. No amount of efficiency gained by the teams building the products will EVER MATTER without tightening every other feedback cycle. Restrict stakeholder WIP to ensure they pay attention, provide meaningful feedback, properly communicate new features, and gather end user and customer feedback of their own. If average lead time per story is 7 business days but it takes an entire quarter for a minor enhancement worth millions in revenue to “circle back” through the organization to the development team – YOU WILL LOSE. Pack your bags; a startup is about to become a category killer.

 

Measurement Goal #3 – Relevance

Cycle-Time Feedback conversion – when an end user or stakeholder makes a request, the WIP cycle ought to have a traceable “funnel” for requests making it through to market. This is not a performance metric but a continuous improvement metric. If a product is in its infancy and market-share growth is on the rise, but product innovation is stagnant, ask for more requests to go through. If a product is mature and market-share accumulation has plateaued, but the conversion rate is extremely high, the process may be building for the sake of staying busy and it is time to innovate a new product.
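A hedged sketch of such a funnel; the stage names and counts below are hypothetical placeholders for whatever your Work In Process flow actually tracks:

```python
# Hypothetical request counts at each stage of the intake "funnel",
# listed in flow order from raw request to market release.
funnel = {
    "requested": 120,
    "accepted_to_backlog": 60,
    "committed_to_sprint": 36,
    "released_to_market": 30,
}

def conversion_rates(funnel):
    """Stage-to-stage and end-to-end conversion through the request funnel."""
    stages = list(funnel.items())
    rates = {}
    for (prev, p_count), (curr, c_count) in zip(stages, stages[1:]):
        rates[f"{prev} -> {curr}"] = c_count / p_count
    rates["end-to-end"] = stages[-1][1] / stages[0][1]
    return rates

for step, rate in conversion_rates(funnel).items():
    print(f"{step}: {rate:.0%}")
```

Read it as a continuous improvement signal, as described above: an immature product with a low end-to-end rate may need more requests let through, while a plateaued product with a very high rate may be building for the sake of staying busy.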

Lead-time Feedback Quality – This is another qualitative metric that may be useful in a 360 review process. From the time a team starts working on a product increment to the time it is delivered, each time an opportunity for feedback occurs, what is the relevance and value of the feedback that is given? Putting metrics around this can be very valuable for a short period of time if approval processes and quality assurance are failing. If it is right for your scaled environment and will not cause unnecessary inefficiencies, this can even be formalized and automatically enforced (e.g. make a Resolution Type and Comment Field mandatory when the Product Owner moves a Story card to Closed, with the expectation that the quality of the feedback will be a topic of review by a manager for purposes of mentoring and career development). At scale, think very hard about the inefficiency this may create and its fairness across the organization prior to roll out.


Conclusion

These are a handful of metrics that actually matter for innovation and success. For specifics that apply to your organization, feel free to reach out to me for free advice anytime at andrewthomaskeenermba@gmail.com or Tweet me @keenerstrategy

This is part of a series!

Part 1 – Metrics: The Good, The Bad, and The Ugly

Part 2 – How to Fail at Performance Metrics

Part 3 – Rules For Measuring Success

Part 4 – Measuring What Matters to Innovation

Throughout the series I tie together ideas from two great resources:

Kevin Simler’s Minimum Viable Superorganism

Steven Borg’s From Vanity to Value, Metrics That Matter: Improving Lean and Agile, Kanban, and Scrum

How to Fail at Performance Metrics

In my last post we reviewed the Hawthorne Effect and other exciting topics.  Check it out!

Throughput Metrics:

So how do we find statistical process metrics that lead to better empirical process output (without dire consequences)?  The ramifications of an “ugly” metric cannot be overstated.  The goal of implementing agile is to reap the benefits of higher team velocity, better fit-to-market, better-quality products, and faster time-to-market, all while establishing culture and innovation as a competitive advantage.  These are lofty goals. The engineers and other functions you have gathered together likely joined with a desire for meaningful software creation. The natural, undisturbed, unmeasured systemic state of such a group should be a collaborative effort to create products envisioned by executive leadership. Introducing an ugly metric will near-instantaneously disrupt whatever was gained through agile by driving symptoms of codependency in the organization. It will be a betrayal and undermine the creative process.

“Managers who don’t know how to measure what they want settle for wanting what they can measure.” – Russell Ackoff

 

First of all, “managers who don’t know how to measure what they want” need to try harder and ask for help from thought leaders, a Google search, or fellow leaders. There is no excuse for allowing a company to hum along without any guidance from its visionary executive leader(s). There are an enormous number of possible metrics.  An experienced statistician could produce probability distributions showing the likelihood of correlation between any number of variables and an expected outcome. That does not make them valuable to an executive or appropriate for an organization. A metric must be easy (enough) to understand. Although a fair number of humans (especially engineers) can compute two-variable “fuzzy weighted logic” in their heads, I defy you to find an entire for-profit organization where every person can compute and make informed decisions based on complex multivariate calculus and probability distributions.


 

Vanity Metrics:

We have seen so far that the right reason to have a metric is as a purposeful tool for implementing executive vision while the wrong reason to introduce a metric is to correct the insecurity of executives when they feel “out of touch”. The latter are vanity metrics. They make the executive feel better at the risk of redirecting energy toward behaviors that run counter to success. One example is utilization.  It may feel good to track as a manager, because companies that pay people have taken a risk and want an appropriate return on the social contract known as “salary”.

Unlike some metrics, it is unlikely that utilization gets tracked with a purposeful tradeoff against lead time or cycle time. In other words, to the extent a company adopts agile and prioritizes “responding to change” – or responsiveness in general – maximizing utilization is mathematically counter to agile because it is detrimental to responsiveness.

This has been thoroughly analyzed in queuing theory. If you imagine any one engineer:

  • Demands arrive to the employee at a variable rate.
  • Work is accomplished at a variable rate.
  • There is one worker.
  • The possible queue of demands is potentially infinite.

This type of queue is an M/M/1/∞ queue. Now, you may have heard Google offers 20% time as a benefit, but when looking at an M/M/1 queue – applied to highway flow, server traffic, or people – the point at which the trade-off between capacity utilization and responsiveness becomes unacceptable is not solved statistically. All that is known is that handling additional requests will eventually need additional capacity.

“As the freeway approaches 100% capacity, it ceases being a freeway. It becomes a parking lot.”

Jim Benson, Personal Kanban: Mapping Work | Navigating Life

 

This is the problem with tracking utilization. What is the “right” utilization number? Executive strategy defines acceptable trade-offs. Unless you clearly articulate a benchmark and its importance, your employees will assume utilization is tracked against 100% of a 40-hour week, shifting their behavior toward an inability to quickly respond to new requests. The Hawthorne Effect of tracking utilization purposelessly is over-commitment and burnout.

However, as a leader of an organization, you must establish expectations for managers. When are additional resources hired to ensure the desired level of responsiveness? As a rule of thumb, how much work – assuming there is significant work to do – should be assigned to any given employee? Is it okay to keep utilization at 50% for some employees? When is overtime acceptable? Acceptable management practices must be defined based on goals for responsiveness.

This is the difference between “utilizing” an hourly-wage warehouse employee by having them sweep the floor an extra time during a slow day and cutting a salaried ambulance and firefighting team because of low “utilization”. The hourly employee typically does not want reduced wages for lack of work, and there is always a floor to sweep while they wait – the manager knows they are supposed to keep the employee busy. In contrast, responsiveness to a major fire or someone going into cardiac arrest is prioritized through “excess” capacity, mitigating the risk that utilization of the capacity to respond to fires or medical emergencies ever exceeds 100%.

We can see now that tracking capacity and utilization is far less important than tracking responsiveness. In agile software delivery there are two types of metrics that ought to be meaningfully tracked and compared against the achievement of company financial goals:

  1. Responsiveness to Change – In aggregate, from the time it is known that market demand has changed, how long does it take to “pivot” and address the shifting conditions?
  2. Feedback Timeliness – For any given point in the process, how long does it take to validate that the intended change was implemented in response?
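Both metrics reduce to simple elapsed-time calculations once the bounding events are timestamped. The sketch below is my own illustration – the function names and dates are hypothetical, not from the sources in this series – but it shows how little machinery the two measurements actually require:

```python
from datetime import datetime

SECONDS_PER_DAY = 86400

def responsiveness_days(demand_changed: datetime, pivot_shipped: datetime) -> float:
    """Responsiveness to Change: days from learning demand shifted to shipping the pivot."""
    return (pivot_shipped - demand_changed).total_seconds() / SECONDS_PER_DAY

def feedback_timeliness_days(change_made: datetime, change_validated: datetime) -> float:
    """Feedback Timeliness: days from making a change to validating it had the intended effect."""
    return (change_validated - change_made).total_seconds() / SECONDS_PER_DAY

# Hypothetical example: demand shift noticed March 1, pivot shipped March 15.
print(responsiveness_days(datetime(2024, 3, 1), datetime(2024, 3, 15)))  # 14.0 days
```

The hard part is organizational, not computational: agreeing on which events mark “we knew demand changed” and “we validated the change”, and recording them consistently.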


Proxy Metrics:

If the metric you want is nearly impossible to compute reliably, or cannot gain sufficient organization-wide understanding and traction around your vision, you need proxy metrics: indirect leading or trailing indicators everyone can agree show the organization is taking the small daily steps that result in annual success. While a good expression of executive vision likely expresses strategic commitment and trade-offs at a broad level, employees need an indication of how to make the hard daily decisions that directly impact their status and prestige within the superorganism.

Without this sense of “blessing” surrounding the commitment of time and resources, employees are powerless: expect diffusion of responsibility and self-protective over-documentation of decisions. In contradistinction, an executive seeking “good” metrics needs a sharp eye on how a metric will positively reinforce decisions that fit the long-term position toward which the company is moving. If a metric does not reinforce the empowerment and authority you have blessed employees with – so that they make the decisions you expect them to make – it is a dreadful metric.

This is part of a series!

Part 1 – Metrics: The Good, The Bad, and The Ugly

Part 2 – How to Fail at Performance Metrics

Part 3 – Rules For Measuring Success

Part 4 – Measuring What Matters to Innovation

Throughout the series I tie together ideas from two great resources:

Kevin Simler’s Minimum Viable Superorganism

Steven Borg’s From Vanity to Value, Metrics That Matter: Improving Lean and Agile, Kanban, and Scrum