Thinkers: Stop Metrics Mania
As companies increasingly turn to machine learning to make sense of vast amounts of data, they want to be sure they feed their algorithms with measures of the right things, taken for the right reasons. They may want to pay attention to Jerry Z. Muller’s critique of the use of metric performance indicators.
Muller, a recently retired professor of history and former history department chair at the Catholic University of America, experienced overzealous measurement firsthand when his university introduced new metrics for measuring student achievement. The metrics added nothing to the existing form of evaluating student performance: grades. His realization that these metrics were a waste of time and effort led him to look into the broader patterns of performance measurement in contemporary organizations.
Muller, who studies the history of capitalism and public policy, wrote his book The Tyranny of Metrics to counter the assumption that quantifying performance and making the results public will always lead to greater accountability, better decision-making, and improved results. In fact, he argues, forcing people to tailor their work to fit standardized measurements helps organizations feel good about monitoring themselves but discourages innovation.
We asked Muller what business leaders should do to fight what he calls “metrics fixation.”
Q: You say we measure too many of the wrong things and use the results inappropriately. What should business leaders do to course-correct?
Jerry Z. Muller: Companies have eliminated a lot of middle managers—the people with firsthand experience of the parts of the company that produce products and deal with people. So the C-level has resorted to standardized measurements to understand what’s going on. Measuring something isn’t the same as understanding it, however.
To recapture understanding, we could reverse the trend of doing away with middle managers who are “in the weeds” about how the company operates. We could also remember that even though we have more access to data than ever, gathering and sharing it isn’t a cure-all. Essentially, we have to reject the belief that metrics are always objective and superior to human judgment.
Back in 1986, management guru Tom Peters wrote, “What gets measured gets done,” which helped create the idea that quantifying things is the necessary first step to improving them. But when you focus only on metrics, you end up like Wells Fargo, which incurred millions of dollars in fines because some frontline employees decided they could meet their cross-selling quotas by signing customers up for new services without their knowledge.
Q: What makes a metric meaningful?
Muller: You have to know not just whether the data is accurate but whether it’s significant. Companies measure profits because they’re concrete and easy to understand as a metric of success. But the long-term flourishing of a company is affected by things like customer relations, innovation, collaboration, and mentoring—all things that are critical even though they’re harder to generalize and quantify. You need a seasoned executive to assess them, not an algorithm.
Another example of a metric fallacy is evaluating a nonprofit by how much it spends on administration as opposed to programs. Low overhead may be an accurate metric, but it isn’t a significant one. We assume that a high ratio of overhead to program costs indicates fraud or poor management, but most of the time low overhead doesn’t equate to high productivity. Just the opposite: low overhead deprives the nonprofit of the trained staff, functional offices, and program management tools it needs to get the job done.
Finally, metrics lose their meaning when they are used to reward or punish performance rather than to diagnose and analyze. If there are rewards and punishments attached to a metric, people can and will game it. That can be a matter of life and death. In New York State, the mortality rates for heart surgery declined after the state began publishing metrics about individual doctors’ success rates—because the doctors became less willing to treat high-risk patients who might lower their performance scores by dying.
Q: Recent research suggests that artificial intelligence (AI) trained on photographs can actually diagnose skin cancer more accurately than experienced dermatologists can. Is the line getting thinner between using data to inform or augment human decision-making and letting data make the decisions for us?
Muller: This example is just the tip of the spear in medical diagnosis as more data gets aggregated. Using data to diagnose skin cancer seems to be genuinely useful. What AI is best at is uncovering clearly measurable but overlooked variables whose significance even someone with expertise and experience cannot recognize, or handling cases in which any one person’s experience is too limited to develop an intuitive judgment.
What algorithms can’t do is provide vision or ideas or build up the sense of common purpose within a company. Entrepreneurship and innovation involve vision and risk. Metrics can only quantify what already exists. If you rely on them to tell you what to do, unmediated by human judgment, you’ll create a risk-averse business—which is ultimately a dead business.