How to Fail at Performance Metrics

In my last post we reviewed the Hawthorne Effect and other exciting topics. Check it out!

Throughput Metrics:

So how do we find statistical process metrics that lead to better empirical process output (without dire consequences)? The ramifications of an “ugly” metric cannot be overstated. The goal of implementing agile is to reap the benefits of higher team velocity, better fit-to-market, better-quality products, and faster time-to-market, while establishing culture and innovation as a competitive advantage. These are lofty goals. The engineers and other functions you have gathered together likely joined with a desire for meaningful software creation. The natural, undisturbed, and unmeasured systemic state of such a group should be a collaborative effort to create products envisioned by executive leadership. Introducing an ugly metric will near-instantaneously disrupt whatever was gained through agile by driving symptoms of codependency in the organization. It will be a betrayal, and it will undermine the creative process.

“Managers who don’t know how to measure what they want settle for wanting what they can measure.” – Russell Ackoff


First of all, “managers who don’t know how to measure what they want” need to try harder and ask for help from thought leaders, a Google search, or fellow leaders. There is no excuse for allowing a company to hum along without any guidance from its visionary executive leader(s). An enormous number of metrics are possible. An experienced statistician could produce probability distributions showing the likelihood of correlation between any number of variables and an expected outcome. That does not make them valuable to an executive or appropriate for an organization. A metric must be easy (enough) to understand. Although a fair number of humans (especially engineers) can compute two-variable “fuzzy weighted logic” in their heads, I defy you to find an entire for-profit organization where every person can compute and make informed decisions based on complex multivariate calculus and probability distributions.


Vanity Metrics:

We have seen so far that the right reason to have a metric is as a purposeful tool for implementing executive vision, while the wrong reason to introduce a metric is to correct the insecurity of executives when they feel “out of touch”. The latter are vanity metrics. They make the executive feel better at the risk of redirecting energy toward behaviors that run counter to success. One example is utilization. It may feel good to track as a manager, because companies that pay people have taken a risk and want an appropriate return on the social contract known as “salary”.

Unlike some metrics, utilization is rarely tracked with a purposeful trade-off against lead time or cycle time in mind. In other words, to the extent a company adopts agile and prioritizes “responding to change” – or responsiveness in general – maximizing utilization is mathematically counter to agile because it is detrimental to responsiveness.

This has been thoroughly analyzed in queuing theory. If you imagine any one engineer:

  • Demands arrive to the employee at a variable rate.
  • Work is accomplished at a variable rate.
  • There is one worker.
  • The possible queue of demands is potentially infinite.

This type of queue is an M/M/1/∞ queue. Now, you may have heard that Google offers 20% time as a benefit, but when looking at an M/M/1 queue – applied to highway flow, server traffic, or people – the point at which the trade-off between capacity utilization and responsiveness becomes unacceptable is not solved statistically. All that is known is that handling additional requests will eventually require additional capacity.

“As the freeway approaches 100% capacity, it ceases being a freeway. It becomes a parking lot.”

Jim Benson, Personal Kanban: Mapping Work | Navigating Life
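
The steepness of that trade-off can be seen from the standard M/M/1 result: the average time a task spends in the system is 1 / (service rate − arrival rate), which blows up as utilization approaches 100%. A minimal sketch (the rates below are illustrative, not from the original post):

```python
# Average time a task spends in an M/M/1 queue (waiting + being worked on),
# using the standard formula W = 1 / (mu - lambda), valid while rho < 1.
def avg_time_in_system(arrival_rate, service_rate):
    if arrival_rate >= service_rate:
        return float("inf")  # at or beyond 100% utilization, the queue grows without bound
    return 1.0 / (service_rate - arrival_rate)

service_rate = 10.0  # hypothetical: tasks one engineer can finish per week
for utilization in (0.5, 0.8, 0.9, 0.95, 0.99):
    arrival_rate = utilization * service_rate
    w = avg_time_in_system(arrival_rate, service_rate)
    print(f"{utilization:.0%} utilized -> {w:.2f} weeks per task")
```

Doubling utilization from 50% to 99% does not double the delay; it multiplies it fifty-fold (0.20 weeks becomes 10 weeks). The freeway really does become a parking lot.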


This is the problem with tracking utilization. What is the “right” utilization number? Executive strategy defines acceptable trade-offs. Unless you clearly articulate a benchmark and its importance, your employees will assume utilization is tracked against 100% of a 40-hour week, shifting their behavior toward an inability to respond quickly to new requests. The Hawthorne Effect of tracking utilization purposelessly is over-commitment and burnout.

However, as the leader of an organization, you must establish expectations for managers. When are additional resources hired to ensure the desired level of responsiveness? As a rule of thumb, how much work – assuming there is significant work to do – should be assigned to any given employee? Is it okay to keep utilization at 50% for some employees? When is overtime acceptable? Acceptable management practices must be defined based on goals for responsiveness.

This is the difference between “utilizing” an hourly-wage warehouse employee by having them sweep the floor an extra time on a given day due to downtime, versus cutting a salaried ambulance and firefighting team due to low “utilization”. The hourly employee typically would not want reduced wages because of a lack of work, and there is always a floor to sweep while they wait – the manager knows they are supposed to keep the employee busy. In contrast, responsiveness to a major fire or someone going into cardiac arrest is prioritized through “excess” capacity, mitigating the risk that utilization of the capacity to respond to fires or medical emergencies ever exceeds 100%.

We can see now that tracking capacity and utilization is far less important than tracking responsiveness. In agile software delivery there are two types of metrics that ought to be meaningfully tracked and compared to achievement of company financial goals:

  1. Responsiveness to Change – In aggregate, the length of time from when it is known a market demand has changed until the organization “pivots” and addresses the shifting market conditions.
  2. Feedback Timeliness – For any given point in the process, the length of time it takes to validate that the intended change was implemented in response to that demand.
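
Both metrics reduce to simple timestamp differences once the relevant events are recorded. A sketch, using a hypothetical event log (the event names and dates are made up for illustration):

```python
from datetime import datetime

# Hypothetical event log: when a market shift was recognized, when the
# pivot shipped, and when its intended effect was validated.
events = {
    "demand_change_detected": datetime(2024, 3, 1),
    "pivot_released":         datetime(2024, 3, 22),
    "change_validated":       datetime(2024, 3, 29),
}

# 1. Responsiveness to Change: detecting the shift -> pivot in production.
responsiveness = events["pivot_released"] - events["demand_change_detected"]

# 2. Feedback Timeliness: pivot in production -> validation that it worked.
feedback = events["change_validated"] - events["pivot_released"]

print(f"Responsiveness to change: {responsiveness.days} days")
print(f"Feedback timeliness: {feedback.days} days")
```

The hard part is not the arithmetic; it is agreeing, organization-wide, on which events mark “demand changed”, “pivoted”, and “validated”.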



Proxy Metrics:

If the metric you want is nearly impossible to compute reliably, or cannot gain sufficient organization-wide understanding and traction around your vision, you need to find proxy metrics that everyone can agree are indirect leading or trailing indicators that the organization is taking the small daily steps that result in annual success. While a good expression of executive vision likely expresses strategic commitment and trade-offs at a broad level, employees need an indication of how to make the hard daily decisions that directly impact their status and prestige within the superorganism.

Without this sense of “blessing” surrounding the commitment of time and resources, employees are powerless. Expect diffusion of responsibility and self-protective over-documenting of the decisions that are made. In contradistinction, an executive seeking “good” metrics needs a sharp eye on how a metric will positively reinforce decisions that fit the long-term direction in which the company is moving. If a metric does not reinforce the empowerment and authority with which you have blessed employees, so that they make the decisions you expect of them, it is a dreadful metric.

This is part of a series!

Part 1 – Metrics: The Good, The Bad, and The Ugly

Part 2 – How to Fail at Performance Metrics

Part 3 – Rules For Measuring Success

Part 4 – Measuring What Matters to Innovation

Throughout the series I tie together ideas from two great resources:

Kevin Simler’s Minimum Viable Superorganism

Steven Borg’s From Vanity to Value, Metrics That Matter: Improving Lean and Agile, Kanban, and Scrum
