Avoiding Likierman's Five Performance Measurement Traps

After mastering financial case construction and practical decision tools, you're now ready to tackle one of the most insidious challenges in management: the performance measurement traps that can lead even successful companies astray. Andrew Likierman's research reveals five common traps that organizations fall into when measuring performance, and understanding these pitfalls is essential for avoiding the fate of companies that looked great on paper right before disaster struck. Consider that Merrill Lynch reported a 30.1% pretax profit margin in 2006—its best performance ever—only to lose $8.6 billion the following year. Similarly, BP celebrated strong financial results in early 2010, just weeks before the Deepwater Horizon explosion cost the company $40 billion and irreparable reputation damage.

These catastrophes weren't random accidents but predictable outcomes of measurement systems that created blind spots. When you measure only against yourself, focus backward instead of forward, put excessive faith in numbers, allow metrics to be gamed, or stick to outdated measures, you create conditions where disaster can hide behind impressive-looking performance data. The companies that avoid these traps demonstrate that sophisticated measurement isn't about having more metrics but about having the right ones that reveal true performance. Enterprise Rent-A-Car with its customer repeat intentions tracking, Humana with its early intervention metrics, and Clifford Chance with its diversified performance criteria all show how thoughtful measurement design prevents the dangerous delusions that precede corporate disasters.

Benchmark Externally Using Enterprise's Customer Repeat Intentions Method

The first and perhaps most dangerous trap is measuring against yourself—comparing this year to last year, actual results to budget, or your division to other internal divisions. While these comparisons feel natural and data is readily available, they create a fatal blind spot: you might be winning internally while losing in the marketplace. Enterprise Rent-A-Car recognized this danger and developed a brilliantly simple solution called the Enterprise Service Quality Index (ESQi), which measures whether customers intend to use the company again. By calling random customers and asking "Would you rent from Enterprise again?", they discovered that when this metric rises, market share gains follow within months, while declining scores predict customer defection even when internal metrics look healthy.
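To make the mechanics concrete, here is a minimal Python sketch of a repeat-intention index in the spirit of Enterprise's approach. The function names, sample response counts, and the decline threshold are all illustrative assumptions, not Enterprise's actual ESQi system.

```python
# Minimal sketch of a repeat-intention index in the spirit of ESQi.
# All names, figures, and thresholds are illustrative, not Enterprise's actual system.

def repeat_intention_index(responses: list[bool]) -> float:
    """Share of surveyed customers who say they would rent again."""
    if not responses:
        raise ValueError("Need at least one survey response")
    return sum(responses) / len(responses)

def flag_decline(previous: float, current: float, threshold: float = 0.03) -> bool:
    """Flag a drop large enough to warrant investigation (threshold is an assumption)."""
    return (previous - current) >= threshold

last_quarter = repeat_intention_index([True] * 86 + [False] * 14)   # 0.86
this_quarter = repeat_intention_index([True] * 81 + [False] * 19)   # 0.81

if flag_decline(last_quarter, this_quarter):
    print(f"Repeat intention fell from {last_quarter:.0%} to {this_quarter:.0%}: "
          "expect market-share pressure even if internal metrics look healthy")
```

The point of tracking the trend rather than the absolute score is that a drop signals customer defection months before it shows up in revenue or internal targets.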

Let's see how this trap plays out in a conversation between two managers:

  • Victoria: Great news! My division beat budget by 8% this quarter, and we're up 12% over last year. The board is thrilled with our performance.
  • Jake: That's impressive, but how are you doing compared to competitors? I just saw that TechForward grew 35% in your market segment.
  • Victoria: Well, we focus on our internal targets. As long as we're beating budget and improving year-over-year, we're successful.
  • Jake: But what if your customers are starting to prefer TechForward? Have you asked them if they'd choose you again?
  • Victoria: We track customer satisfaction scores, and they're at 4.2 out of 5, same as last year.
  • Jake: That's exactly the trap Enterprise avoided. They don't just measure satisfaction—they ask customers directly if they'll come back. When that metric drops, they know competitors are winning, even if internal numbers look good.
  • Victoria: I hadn't thought about it that way. We could be celebrating while actually losing market share...
  • Jake: Exactly. Your 8% growth means nothing if the market is growing at 20% and competitors at 35%. You need external benchmarks to know if you're really winning.

This dialogue illustrates how easily managers fall into the trap of internal-only measurement, celebrating budget victories while missing competitive threats. Victoria's initial confidence in beating internal targets blinds her to the market reality that competitors are growing three times faster. Only when Jake introduces the concept of external benchmarking does she realize that her apparent success might actually be failure in disguise.
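Jake's arithmetic is worth making explicit. The sketch below, using the growth rates from the dialogue and hypothetical revenue and market-size figures, shows how 8% internal growth can coexist with a shrinking market share when the overall market grows 20%.

```python
# Illustrative check of Jake's point: internal growth can hide a shrinking market share.
# Growth rates come from the dialogue above; revenue and market size are assumptions.

def market_share_change(own_revenue: float, own_growth: float,
                        market_size: float, market_growth: float) -> tuple[float, float]:
    """Return (share before, share after), with growth rates given as decimals."""
    share_before = own_revenue / market_size
    share_after = (own_revenue * (1 + own_growth)) / (market_size * (1 + market_growth))
    return share_before, share_after

before, after = market_share_change(own_revenue=50, own_growth=0.08,
                                    market_size=500, market_growth=0.20)
print(f"Market share: {before:.1%} -> {after:.1%}")   # 10.0% -> 9.0%
```

Victoria's division grows in absolute terms yet loses a full point of share, which is exactly the loss that internal-only comparisons never reveal.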

Focus on Leading Indicators Like Humana's Early Screening Metrics

The second trap involves looking backward rather than forward, relying on lagging indicators like last quarter's revenue or year-over-year profit growth to assess current performance. These metrics tell you what happened, not what's about to happen, creating a dangerous time lag between problems emerging and management awareness. Humana, the health insurance giant, discovered that 80% of its costs came from the sickest 10% of members, and that early health interventions cost 40% less than treating advanced conditions. This insight led them to develop leading indicators around preventive care engagement, tracking members getting annual screenings, participating in wellness programs, and managing chronic conditions proactively. When these forward-looking metrics rise, Humana knows that costs will fall 12-18 months later, even if current expenses are high due to prevention investments.

Leading indicators require understanding the causal chains in your business—identifying what happens today that determines tomorrow's results. For a software company, customer support ticket patterns predict renewal rates six months forward, with rising complexity and frequency of issues signaling upcoming churn. Manufacturing firms find that supplier quality metrics and on-time delivery rates predict customer satisfaction three months hence. The key is identifying measurements that provide enough advance warning to enable meaningful intervention. Humana's screening metrics give them time to engage at-risk members before conditions deteriorate, while a technology company tracking developer satisfaction can address retention risks before key talent departs. These forward-looking measures transform management from reactive firefighting to proactive problem prevention.

Consider how a professional services firm might build a comprehensive leading indicator system. Rather than waiting for project profitability reports, which are inherently lagging, they could track project inception quality—how thoroughly scoped and resourced are new engagements? Poor inception quality reliably predicts margin erosion months later as projects require additional unplanned resources. Similarly, measuring client engagement frequency and depth provides six-month advance warning of renewal probability, as a client whose senior executives stop attending quarterly reviews is signaling dissatisfaction long before they formally decline to renew. Team capability assessments predict delivery delays because when skill gaps emerge between project requirements and team competencies, delays become nearly inevitable. Each leading indicator includes clear intervention triggers—specific actions to take when metrics cross threshold values—transforming measurement from passive observation to active management.
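One way to operationalize this is a simple register of leading indicators, each paired with its intervention trigger. The sketch below follows the professional-services example above; the metric names, threshold values, and actions are hypothetical assumptions for illustration.

```python
# Sketch of a leading-indicator register with intervention triggers, following the
# professional-services example above. Metrics, thresholds, and actions are hypothetical.
from dataclasses import dataclass

@dataclass
class LeadingIndicator:
    name: str
    value: float          # current reading on a 0-1 scale
    threshold: float      # falling below this triggers the intervention
    intervention: str     # specific action to take when the trigger fires

    def needs_action(self) -> bool:
        return self.value < self.threshold

indicators = [
    LeadingIndicator("Project inception quality", 0.72, 0.80,
                     "Re-scope and re-resource before kickoff"),
    LeadingIndicator("Client executive engagement", 0.55, 0.60,
                     "Schedule a senior-partner review within two weeks"),
    LeadingIndicator("Team capability coverage", 0.90, 0.75,
                     "Plan targeted hiring or training"),
]

for ind in indicators:
    if ind.needs_action():
        print(f"{ind.name}: {ind.value:.0%} is below {ind.threshold:.0%} -> {ind.intervention}")
```

Pairing every indicator with a named action is what turns measurement from passive observation into the active management the paragraph describes.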

The challenge with leading indicators is that they often measure activities and behaviors rather than outcomes, making them feel less concrete than traditional financial metrics. A board might question why you're reporting activity metrics like screening participation instead of revenue, not understanding that one predicts the other with remarkable reliability. This requires education about the predictive relationships you've identified and discipline to maintain focus on leading indicators even when they conflict with short-term financial pressures. Humana faces constant pressure to reduce prevention spending to boost quarterly earnings, but their commitment to leading indicators has produced sustainable competitive advantage through lower medical costs and healthier members over time. The companies that master leading indicators gain the ability to shape their future rather than simply report their past.

Diversify Metrics Using Clifford Chance's Seven-Criteria Approach

The fourth and fifth traps, addressed here as a single defense, involve gaming your metrics and sticking to the same measures for too long; relying on a single, simple metric invites both. Clifford Chance, one of the world's largest law firms, discovered this danger when their billable hours metric created perverse incentives throughout the organization. Lawyers padded hours to meet targets, senior associates hoarded work that juniors could handle more efficiently, and client relationships suffered as every interaction became a billing opportunity. The firm's solution was radical: replace the single billable hours metric with seven diverse criteria encompassing respect and mentoring, quality of work, excellence in client service, integrity, contribution to the community, commitment to diversity, and contribution to the firm as an institution.

This diversification made gaming nearly impossible because while you might manipulate one or two metrics, optimizing across seven different dimensions requires genuine performance improvement. More importantly, the balanced criteria aligned with the firm's actual success factors in ways that billable hours never could. Client service quality matters more than hours billed if you want long-term relationships, mentoring junior staff ensures future capability, and community contribution builds reputation and recruitment advantages. By measuring what truly mattered rather than what was easily countable, Clifford Chance reduced gaming behaviors by 60% while improving actual performance across multiple dimensions. Partners could no longer succeed simply by billing massive hours while being terrible colleagues and mentors—success now required excellence across multiple stakeholder relationships.

The power of metric diversification extends beyond preventing gaming to encouraging holistic thinking about performance. When you evaluate employees on financial results alone, they'll sacrifice everything else to hit their numbers—customer relationships, employee development, innovation, even ethical standards fall by the wayside. However, when performance includes customer satisfaction weighted at perhaps 15%, innovation index at 10%, talent development at 10%, operational excellence at 15%, market position at 10%, strategic initiative progress at 10% alongside financial performance at 30% weight, managers must balance competing priorities just as successful businesses do. This complexity mirrors real organizational life better than any single metric could, forcing decisions that optimize for sustainable success rather than quarterly numbers.
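A small sketch shows why this weighting scheme resists gaming. The weights below are the illustrative percentages from the paragraph above; the candidate scores are made up for demonstration.

```python
# Composite performance score using the illustrative weights from the paragraph above.
# The individual dimension scores (0-100) are hypothetical examples.

WEIGHTS = {
    "financial performance": 0.30,
    "customer satisfaction": 0.15,
    "operational excellence": 0.15,
    "innovation index": 0.10,
    "talent development": 0.10,
    "market position": 0.10,
    "strategic initiatives": 0.10,
}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9   # weights must cover 100%

def composite_score(scores: dict[str, float]) -> float:
    """Weighted average across all dimensions; missing dimensions score zero."""
    return sum(WEIGHTS[dim] * scores.get(dim, 0.0) for dim in WEIGHTS)

# A manager who maxes out the financial number but neglects everything else
# still scores poorly overall, which is the anti-gaming property of diversification.
one_dimensional = composite_score({"financial performance": 100})
balanced = composite_score({dim: 75 for dim in WEIGHTS})
print(f"Financials only: {one_dimensional:.0f}  |  Balanced 75s: {balanced:.0f}")
```

Perfect financial results alone yield a composite of 30, while solid but unspectacular performance across every dimension yields 75, so the only way to score well is genuine, broad-based improvement.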

Implementing diversified metrics requires careful thought about weights, measurement methods, and trade-offs between simplicity and completeness. Software company Autodesk uses a scorecard measuring financial performance, customer value creation, and employee engagement with equal weighting, forcing leaders to balance stakeholder interests rather than maximize any single dimension. Consumer goods giant Unilever evaluates brand managers on current sales, brand health metrics that predict future sales, and sustainability measures that ensure long-term viability, recognizing that today's profits can come at tomorrow's expense. The specific metrics matter less than the principle: multiple diverse measures that can't all be gamed simultaneously and that collectively capture what drives sustainable success. While this approach feels more complex than tracking a single number, it produces far better decisions and behaviors than oversimplified metrics ever could.
