Mastering Measurement: The Critical Performance Elements of Incentive Design
Today's competitive environment demands that organizations achieve ever higher levels of performance. Critical to improving performance is knowing what to measure and how to measure it. Measurement is therefore a key element in any performance improvement effort, and particularly in the development of any incentive or recognition plan.
The intent of this paper is twofold: First, it updates a performance measurement methodology originally published in 1992 called the "Master Measurement Model of Employee Performance"; second, it addresses how to measure the short- and long-term impact and ROI of incentive and recognition plans. Of course, any examination of incentive plans needs to touch on the current dialogue and studies relating to the best application of incentives at work. While this is a topic unto itself, this report, the first in a series, will focus on program measurement.
This paper is targeted primarily at the professionals who design incentive and reward programs, whether they are organizational staff or external consultants. It is also intended for those that support and implement the programs, whether they conduct training, deliver the rewards and incentives themselves, or manage the programs. Additionally, the paper will be of interest to suppliers of rewards and incentives (merchandise, trips, gift cards, etc.), especially those who seek to understand how these plans impact performance and how they might position themselves and their clients to thrive in a new era of creativity and innovation in the workplace. Finally, this paper will provide a good resource for any organization looking to investigate the use of incentives to improve performance.
To effectively design an ROI-focused incentive plan or program, one must be convinced that the incentive, reward or recognition1 program's intent is to improve employee performance, ultimately leading to outcomes that have a net positive financial impact on the organization.
Plan design, then, should start with identifying the desired organizational outcomes. In this case, the designer seeks performance improvement that will translate into better financial results for his group or organization. This need to achieve higher performance raises several critical measurement issues:
- What is the specific performance that needs to improve, and how is it measured?
- How can reliable and credible estimates of the value of this performance improvement be collected and calculated?
- How can these improvements be linked to organizational results and then converted to dollars such that they will persuade the CEO or CFO?
Fortunately, there are tools available today to accurately calculate many, if not most, aspects of human performance in financial terms – and not only for sales, customer service and other "easy to measure" tasks. In 1992, the predecessor of this paper – The Master Measurement Model of Employee Performance2 – was published, providing a framework for measuring many tasks or jobs and a credible, if basic, means of translating that performance improvement into financial benefits.
Since 1992, new ideas, including tools, methods and formulas, have been devised by experts and practitioners in the pursuit of the 'holy grail' of human performance measurement – proof that a given investment in people (i.e., the organization's human capital) results in financial gains to the organization. And while it is still not possible to show causal 'proof' of the connection between workplace initiatives designed to boost human performance—including incentive, reward and recognition plans—and subsequent gains, we are able, through strong correlations, to demonstrate overwhelming evidence of those gains by applying the proven techniques and processes discussed in this paper.
Moreover, it is possible to calculate a projection of the financial impact of an incentive plan and then measure its actual impact after implementation. A key objective of this paper is to describe how this is done so that designers can build compelling business cases for their programs, execute them more successfully and then demonstrate their actual return on investment.
The Original Master Measurement Model3
The Master Measurement Model of Employee Performance, a paper published by the Incentive Research Foundation in 1992, is the starting point for this research and paper. We assembled two Delphi Panels,4 comprised of many of the industry's most respected leaders as well as industry and non-industry experts, to help us understand whether the methodologies put forward in the 1992 paper are still applicable and relevant.
According to Delphi panelist Bruce Bolger, one of those responsible for developing the original Master Measurement Model, it was "conceived only to do one thing, and that is measure the value of performance improvement. This was to establish a framework that allows us to look at an incentive program (one that is meant to increase productivity) and determine how we can structure it in such a way that we can measure its return on investment, and therefore, how much we can afford to spend on the training, communications, and rewards and recognition related to that desired goal."5
Judged by the narrow objectives described above, the Master Measurement Model succeeded. It offered a process for measuring employee performance in a variety of routine and nonroutine occupations. With it, organizations could arrive at reasonably accurate estimates of change in performance and what net positive (or negative) change resulted. This measurement was calculated from a carefully selected set of weighted measures and allowed for grades of difficulty in work and differences in product pricing over the course of the measurement period. It could then be converted into a dollar value so that the performance of a group could be tracked, and further planned investment could be calculated and justified.
In the Master Measurement Model, the analysis included these vital elements:
- Performance numbers from a previous time period (Base Data).
- New numbers from the current period of the same length (New Data).
- The percentage change (New/Base).
- The pre-assigned Weights for each measure.
- The relative importance of that change (Weighted Result).
For example, a Field Service Representative measurement feedback report could be compiled as follows (Figure 1):
FIGURE 1: MASTER MEASUREMENT MODEL PERFORMANCE FEEDBACK REPORT
The critical indicator above is the Total Weighted Result, in this case 102.5. This means that, for this family of measures, productivity and quality rose 2.5 percent during the particular time period.
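The arithmetic behind such a feedback report can be sketched in a few lines of code. The measure names, base and new values, and weights below are invented for illustration (the weights sum to 1.0); they are not figures from the original study.

```python
# Sketch of the Master Measurement Model feedback calculation.
# All measures, values and weights here are hypothetical examples.

measures = [
    # (name, base_period_value, new_period_value, weight)
    ("Service calls completed", 200, 210, 0.40),
    ("First-visit fix rate (%)", 80, 82, 0.35),
    ("Customer satisfaction score", 4.0, 4.1, 0.25),
]

total_weighted_result = 0.0
for name, base, new, weight in measures:
    pct_of_base = new / base * 100      # New/Base expressed as a percentage
    weighted = pct_of_base * weight     # relative importance of the change
    total_weighted_result += weighted
    print(f"{name}: {pct_of_base:.1f} x {weight:.2f} = {weighted:.2f}")

print(f"Total Weighted Result: {total_weighted_result:.1f}")
# A result above 100 indicates a net gain for this family of measures;
# for example, 102.5 would mean a 2.5 percent improvement.
```

With the sample figures above, the Total Weighted Result comes to 103.5, i.e., a 3.5 percent net improvement across the three weighted measures.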
The Benefit Report (Figure 2) moves a critical step beyond the Feedback Report. As management creates an incentive program budget, a Benefit Report like this will help establish projected figures, which allow forecasted financial improvements before incentive expenses.
FIGURE 2: MASTER MEASUREMENT MODEL BENEFIT REPORT
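To illustrate how a Benefit Report projection might be assembled, the sketch below converts a projected Total Weighted Result into a gross dollar benefit and then nets out a planned program budget. All inputs (baseline output value, projected result, budget) are hypothetical assumptions, not figures from the model.

```python
# Hypothetical Benefit Report projection: translate a projected Total
# Weighted Result into dollar figures before and after incentive expenses.

baseline_output_value = 1_000_000.0  # dollar value of the group's base-period output (assumed)
projected_weighted_result = 103.5    # projected Total Weighted Result (assumed)
performance_gain = (projected_weighted_result - 100) / 100  # 3.5% improvement

gross_benefit = baseline_output_value * performance_gain    # before incentive expenses
program_budget = 20_000.0            # planned incentive program cost (assumed)

net_benefit = gross_benefit - program_budget
roi_pct = net_benefit / program_budget * 100

print(f"Gross benefit: ${gross_benefit:,.0f}")
print(f"Net benefit:   ${net_benefit:,.0f}")
print(f"ROI:           {roi_pct:.0f}%")
```

A projection of this kind gives management the forecasted financial improvement before incentive expenses, which in turn bounds how much can sensibly be spent on the program itself.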
The Master Measurement Model study also highlighted the importance of involving a representative group from the potential participant population in the selection of the performance measures. This process not only increases participants' "buy-in" to the performance measures, but also indicates early in the process whether participants will accept the validity of the measurement process.
We have begun to identify key principles that underpin effective performance measurement. So far, these principles include the ability to address:
- Baseline data: To measure performance change, there must be meaningful before and after comparators.
- Number and “ease” of measures: The number of performance measures should be limited to about five, and their tracking not overly complex.
- Involvement: Gain participants' input and feedback in the development of the measures.
Areas for Enhancement
One aspect that the Master Measurement Model did not address was causality – that is, a method of assigning reasonable attribution of the change in performance to the incentive program. Outside of laboratory conditions, the real world intrudes with many external factors, and whereas longer study periods lend more credibility to the results, they also increase the chance that external factors might cloud the data. So the question arises: how much of the change was the result of the incentive plan versus change caused by other factors (e.g., the general economy, industry-specific factors, a change in competition, a marketing campaign, etc.)?
Another area that the Master Measurement Model neglected is the discounting of the results to ensure that financial assumptions are conservative. The conversion of the results to financial terms generally requires a number of assumptions. The test for this "conservative" approach is easily achieved: does the CFO agree with the financial assumptions?
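One common way to keep the financial case conservative – consistent with the attribution and discounting concerns above – is to multiply the measured benefit by an estimated attribution factor (how much of the change the plan caused) and a confidence factor before computing ROI. The figures below are illustrative assumptions only, not a prescribed method from the model.

```python
# Sketch of a conservative adjustment to a measured benefit.
# All inputs are invented for illustration.

gross_benefit = 35_000.0   # dollar value of the measured performance gain (assumed)
attribution = 0.60         # estimated share of the gain caused by the plan (assumed)
confidence = 0.80          # confidence in that attribution estimate (assumed)

adjusted_benefit = gross_benefit * attribution * confidence
program_cost = 20_000.0    # total program cost (assumed)
roi_pct = (adjusted_benefit - program_cost) / program_cost * 100

print(f"Adjusted benefit: ${adjusted_benefit:,.0f}")
print(f"Conservative ROI: {roi_pct:.0f}%")
```

Note that with these sample factors the adjustment turns an apparently positive return negative – exactly the kind of result a CFO-friendly, conservative analysis is meant to surface before funding is increased.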
In a similar vein, the Master Measurement Model failed to address the unintended consequences of incentive and reward programs. Using the model, an analyst might show an increase in performance due at least in part to the incentive program. He might also be able to convert that performance increase into a credible financial ROI estimate for the organization. However, if the analyst fails to determine whether the program caused unintended consequences elsewhere in the organization – consequences that might have reduced or even eliminated the ROI – then the measurement process is incomplete and potentially misleading. It might in fact cause the organization to increase funding for a program that has a net detrimental impact.
Finally, according to the great majority of our panel experts, the Master Measurement Model is somewhat limited in its ability to address all of the performance measurement needs of today's business environment. Its approach to measurement fails to address the variety of programs, large and small, that an organization might require; neither does it address how to incorporate the more "intangible" aspects of organizational performance – particularly valued in today's creative environment – into the measurement.
For a measurement model to be effective today it must incorporate the principles in the Master Measurement Model (MMM) and include additional causal analysis. Moreover, it must address the intangibles, and include some consideration of unintended consequences. Finally, it must be flexible. An effective model will offer steps, checklists and flowcharts to assist the designer in determining which methods and measurements to use in different situations, for different outcomes, different tasks and for different workers.
The next principle to be examined is unintended consequences. The possibility, indeed likelihood, of unintended consequences in both short- and long-term programs places an even greater emphasis on design. The design of incentive, reward and recognition programs is critical. Design includes choosing the right measurement model, the right metrics and the right level of measurement.
While this report is focused on measurements, the issue of design cannot be ignored. The Delphi Panels, our extensive interviews with practitioners and experts, and our secondary research, identified several design issues that will be addressed in a second report, which will look at the evolving research on incentives and how the knowledge-based workforce responds to incentives. For this report, the issue of unintended consequences is viewed through the prism of measurements – ensuring that improving one measure does not cause problems with another performance measure.
One strength of the MMM was the inclusion of more than one measure – the first method of reducing unintended consequences. By having the participants focus on multiple measures, the incentive plan manager reduces the potential that one measure will be driven at the expense of the others.
However, there also needs to be a check to determine whether emphasis on a department‘s performance will cause disruption in another department. In many organizations, management "silos" result in the optimization of one department to the detriment of others. For example, expense reduction efforts in one department (i.e., recruiting) can cause inefficiencies (lack of staffing) in another. Similarly, a quality initiative in one part of a process might lower production volumes throughout.
Organizations are more sensitive to the unintended consequences of incentive programs than in the past. "After the financial meltdown caused in part by poor incentive design," says Delphi panelist Thomas Haussmann of the Hay Group, "we are now asked to help make these programs risk proof." He advocates looking at the measures and objectives and, if they are short term, deciding whether they fit into the long-term objectives. He suggests questioning whether overachievement of the short-term goal might be detrimental to the long-term success and viability of the company. "The standard procedure should be: does the over-achievement or achievement of objectives entail risks that are not intended? And if you find them, address them in your program design."
- Causality: Isolation of the effects of the intervention (i.e., the incentive plan) from other influences and assigning appropriate attribution (see predictive analysis on page 9).
- Conservative financial assumptions: Err on the conservative side when making financial assumptions and confirm all financials (assumptions and numbers) with the finance department.
- Unintended Consequences: The impact of the incentive plan on other individuals, departments or the organization in general – i.e., positive sales performance causing a backlog in manufacturing, yielding higher production costs, or the strained relations between the plan participants and nonparticipants (haves and have-nots).