Year-end slacker or year-round performer? Adding peers to evaluation mix makes a difference

Every manager knows the drill: You meet your targets for the year, collect your bonus or a pat on the back, and your organization ratchets your targets even higher for next year.

That’s great when higher targets motivate you to do better and earn higher compensation. It’s not so great when you reach your targets early in the fourth quarter. Bonus ensured, you might be tempted to slack off for the rest of the year, because if you keep doing well, next year’s targets will be even higher and harder to reach.

Workers’ tendency to withhold effort under such circumstances is known as the “ratchet effect,” and it’s part of why organizations wrestle with how to evaluate performance and set targets that will motivate workers to do their best but won’t punish top performers.

“Imagine the ratchet wheel: Once it notches one up, it never goes back down,” says Professor of Accountancy Michal Matejka. “That’s the idea with targets. I give you a target of 18, you do 20, so the next year I give you 21. You go, ‘Oh, stupid me, I could have done 18 and a half, then my target would have been 19.’

“That's how to destroy incentives … The smart ones will learn to not work hard.”
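The rule Matejka describes can be sketched in a few lines of Python. This is an illustration of the ratchet logic only, not a formula from the study, and the one-unit step is an assumption:

```python
# Illustrative sketch of the "ratchet" rule: beat the target and next
# year's target is notched up past the result; miss it and the target
# stays put, because the wheel never turns back down.
def ratchet_target(current_target, actual, step=1):
    if actual >= current_target:
        return actual + step   # notch up past what was achieved
    return current_target      # the wheel never ratchets back down

# The example from the quote: target of 18, performance of 20.
print(ratchet_target(18, 20))  # -> 21
```

Under a rule like this, the harder a manager works this year, the tougher next year's target becomes, which is exactly the incentive to coast once the current target is secured.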

To help organizations turn potential year-end slackers into year-round performers, Matejka and Associate Professor of Accountancy Pablo Casas-Arce conducted new research that offers guidance on how to more effectively evaluate managers’ past performance and set targets that accurately predict their future performance.

“The underlying problem is when you’re trying to set targets for a manager, for a division, you don’t really know how much they are capable of doing,” Casas-Arce says. “It’s natural, if you see good performance, to infer that it was relatively easy to generate revenues or reduce costs ... and therefore the target ought to be made more difficult.”

Evaluating more than individual performance

Joined by colleagues from Michigan State University and the Frankfurt School of Finance and Management, Matejka and Casas-Arce found that managers are motivated to keep up their efforts at year end when organizations use two performance measures in setting targets: the individual managers’ performance compared to their own targets, and the managers’ peers’ performance compared to the peers’ targets. More significantly, they found that for individual managers who operate in economic environments that closely match those of their peers, organizations tend to put more weight on the peers’ performance and less weight on the individuals’ performance. As a result, the more balanced performance evaluation strengthens managers’ incentive to keep performing year-round because it assures them that their new targets won’t be based solely on their own performance.

Other researchers have documented the ratchet effect, but they offered scant evidence on how organizations could mitigate it. Matejka and Casas-Arce say their team’s study is the first to show that using well-matched peer groups as part of performance evaluations and targets gives individual managers less incentive to slack off and more motivation to perform year-round.

Matejka and Casas-Arce have both studied targets and how organizations set them. For this research, they wanted to know more about how organizations adjust targets and how managers behave under the new targets. They suspected that the performance of peer groups, previously overlooked by researchers, would be a key factor.

The researchers got the chance to test their ideas when they obtained data on how a European government agency evaluated the performance of, and set targets for, managers of 354 units charged with helping the long-term unemployed in their areas find jobs. The agency had sorted the managers into 12 groups of peers whose units faced similar economic environments, as measured by factors such as GDP per capita, seasonal fluctuations, and unemployment rates. The units operated independently, but the managers had access to monthly and annual reports on how all the units were performing and how each unit ranked compared to its peers.

The agency measured managers’ performance primarily by the amount that their units had reduced welfare payouts during the year, whether by helping people find jobs, training them for new work, or subsidizing firms that hired them. The data showed how each unit had performed in reaching its target for reduced payouts, how each unit had performed compared to other units in its peer group, and what each unit’s target was set at for the following year. Managers’ compensation, consisting of base salaries and incentive pay, was based on their performance and that of their peers.

The researchers took a close look at the composition of the 12 peer groups, particularly at how well a unit’s performance correlated with the performance of other units in its peer group. The researchers deemed a group to be high quality if its units showed high correlations, suggesting that units within the group faced many of the same economic conditions in trying to get people back to work. Low-quality groups were those showing low correlations, suggesting that each unit faced dissimilar or unique economic conditions.
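The grouping logic can be sketched as follows. This is an illustration of the correlation idea, not the agency's or the researchers' actual procedure, and the 0.5 cutoff is an assumed threshold:

```python
import statistics

def pearson(x, y):
    """Pearson correlation between two equal-length performance series."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def group_quality(units, threshold=0.5):
    """Classify a peer group as 'high' or 'low' quality by the average
    pairwise correlation of its units' performance series.
    The 0.5 threshold is an assumption for illustration."""
    pairs = [(a, b) for i, a in enumerate(units) for b in units[i + 1:]]
    avg = statistics.fmean(pearson(a, b) for a, b in pairs)
    return "high" if avg >= threshold else "low"
```

Units whose results rise and fall together land in a "high"-quality group, suggesting that a shared economic environment, rather than individual effort, explains much of the variation in their performance.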

They then looked at how much weight the government agency gave to individual performance and to peer-group performance when it evaluated the unit managers, decided their incentive pay for the year, and set their new targets.

Evaluations depend on manager, peer match

The researchers found that managers whose peers closely matched their situations — those in high-quality peer groups — received evaluations that weighted the peer group’s past performance more heavily than the managers’ own past performance. Next, they found that the peer group’s performance became a factor when the agency set the new targets for these managers. Finally, they found that these managers kept up their performance through the fourth quarter, implying they worried less about their own performance and had less to gain by slacking off.

“When the peer quality is high, the adjustments to the target are based more on what the peers are doing and less on what I am doing as an individual manager,” Casas-Arce says. “I can continue working hard the last quarter, and that’s not going to make my targets more difficult the next year.”

Managers whose peers were less of a match — those in low-quality peer groups — received evaluations that weighted their own past performance more heavily than the peer group’s past performance, the research found. New targets for these managers were more heavily based on how the individual manager had performed. When these managers met their targets early, they tended to slack off in the fourth quarter — implying they worried more about their own performance and feared getting higher targets.

“When the peer quality is low, the targets are based on my own performance mostly,” Casas-Arce says. “At that point, I want to withhold effort to get easier targets the following year.”
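The asymmetry in these two quotes can be pictured with a stylized target update, in which peer-group quality shifts the weight between peer performance and own performance. The 0.75/0.25 weights are assumptions for illustration, not figures from the study:

```python
# Stylized target update: how much of next year's adjustment tracks the
# peers' average deviation from target versus the manager's own deviation.
def next_target(own_target, own_actual, peer_avg_deviation, peer_quality):
    # Weights are illustrative assumptions, not the agency's formula.
    w_peer, w_own = (0.75, 0.25) if peer_quality == "high" else (0.25, 0.75)
    own_deviation = own_actual - own_target
    return own_target + w_peer * peer_avg_deviation + w_own * own_deviation

# A manager beats a target of 100 by 4 while peers, on average, hit theirs.
print(next_target(100, 104, 0, "high"))  # -> 101.0  (small ratchet)
print(next_target(100, 104, 0, "low"))   # -> 103.0  (big ratchet)
```

In the high-quality case, strong individual performance barely moves the manager's own target, so there is little to gain by coasting; in the low-quality case, the same performance ratchets the target up sharply.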

A big benefit of sorting managers into peer groups is that it helps organizations account for variations in local markets. Managers in high-quality, well-matched groups operate on a level playing field, making it easier for organizations to assess the managers’ performance and set new targets that filter out the “noise” of common economic conditions. Managers in low-quality, more varied groups can be assessed on their own performance and be given new targets that take into consideration their unique economic conditions.

Because the data covered 2007 to 2010, a period of recession and recovery, the researchers also examined how the agency changed managers’ cost-cutting targets as the economy changed. They found that as the economy rose and fell, the agency struggled to set targets that proved accurate. The data showed managers performed well in 2007, spending less than expected when the economy was still strong. With peer group performance suggesting a favorable overall economic environment, the agency made the 2008 targets tougher and called for spending even less. But by 2009, as the recession deepened, managers were spending more than expected. With peer group performance indicating a poor economic environment, the agency made 2010 targets easier and allowed for more spending.

Although the study focused on cost-cutting efforts in government agencies, the researchers note that all organizations expect performance to improve from year to year. They therefore believe their results would apply broadly, from sales targets in companies to budgets in non-profits such as hospitals and museums.

“Everybody uses targets, but not everybody is aware what kind of games people play and the extent to which that is prevalent,” Matejka says. “Lesson No. 1 is: Think carefully when you revise targets up, and make sure you don’t punish your best performers.”

Bottom line

  • For top management: You are responsible for approving targets and budgets, so design your organization’s targets in a way that reduces incentives to game the system. If you raise the target every time workers surpass it, you may be encouraging them to shirk toward year end. Consider leaving some room for subjectivity and basing variable compensation on more than how workers perform against their targets.
  • For mid-level managers: If your peers are comparable, don’t withhold effort when you reach your target early, because your game-playing will be easier for top management to spot when it compares you to your peers. If your peers are a poor match, you may face a ratchet-effect decision: Keep performing and get a bigger bonus this year and tougher targets next year, or slack off and get a smaller bonus this year and easier targets next year.
  • For accounting professionals: The age of big data means organizations have a wealth of information about individual managers, how well their environment matches that of their peers, how managers and their peers performed during a year, and how managers performed in the face of new targets. Exploit the data so your organization can set targets that motivate rather than punish. 
By Jane Larson