Reinventing Performance Management
Published July 21, 2016
It’s that time of year again . . . time for the Mid-Year Review. We all know the drill: the hours spent crafting performance goals, evaluating ourselves and collecting 360-degree data. Marcus Buckingham (GLS 2004, 2007), champion of strengths at work, thinks there is a better way. And he is testing his ideas on a massive scale at Deloitte. The following article was named one of Harvard Business Review’s Must-Reads for 2016.
At Deloitte we’re redesigning our performance management system. This may not surprise you. Like many other companies, we realize that our current process for evaluating the work of our people—and then training them, promoting them and paying them accordingly—is increasingly out of step with our objectives.
In a public survey Deloitte conducted recently, more than half the executives questioned (58 percent) believe their current performance management approach drives neither employee engagement nor high performance. They, and we, are in need of something nimbler, real-time, and more individualized—something squarely focused on fueling performance in the future rather than assessing it in the past.
What might surprise you, however, is what we’ll include in Deloitte’s new system and what we won’t. It will have no cascading objectives, no once-a-year reviews and no 360-degree-feedback tools. We’ve arrived at a very different and much simpler design for managing people’s performance. Its hallmarks are speed, agility, one-size-fits-one and constant learning, and it’s underpinned by a new way of collecting reliable performance data. This system will make much more sense for our talent-dependent business. But we might never have arrived at its design without drawing on three pieces of evidence: a simple counting of hours, a review of research in the science of ratings and a carefully controlled study of our own organization.
Counting and the Case for Change
More than likely, the performance management system Deloitte has been using has some characteristics in common with yours. Objectives are set for each of our 65,000-plus people at the beginning of the year; after a project is finished, each person’s manager rates him or her on how well those objectives were met. The manager also comments on where the person did or didn’t excel. These evaluations are factored into a single year-end rating, arrived at in lengthy “consensus meetings” at which groups of “counselors” discuss hundreds of people in light of their peers.
Internal feedback demonstrates that our people like the predictability of this process and the fact that because each person is assigned a counselor, he or she has a representative at the consensus meetings. The vast majority of our people believe the process is fair. We realize, however, that it’s no longer the best design for Deloitte’s emerging needs: Once-a-year goals are too “batched” for a real-time world, and conversations about year-end ratings are generally less valuable than conversations conducted in the moment about actual performance.
But the need for change didn’t crystallize until we decided to count things. Specifically, we tallied the number of hours the organization was spending on performance management—and found that completing the forms, holding the meetings, and creating the ratings consumed close to 2 million hours a year. As we studied how those hours were spent, we realized that many of them were eaten up by leaders’ discussions behind closed doors about the outcomes of the process. We wondered if we could somehow shift our investment of time from talking to ourselves about ratings to talking to our people about their performance and careers—from a focus on the past to a focus on the future.
The Science of Ratings
Our next discovery was that assessing someone’s skills produces inconsistent data. Objective as I may try to be in evaluating you on, say, strategic thinking, it turns out that how much strategic thinking I do, or how valuable I think strategic thinking is, or how tough a rater I am significantly affects my assessment of your strategic thinking.
How significantly? The most comprehensive research on what ratings actually measure was conducted by Michael Mount, Steven Scullen, and Maynard Goff and published in the Journal of Applied Psychology in 2000. Their study—in which 4,492 managers were rated on certain performance dimensions by two bosses, two peers, and two subordinates—revealed that 62% of the variance in the ratings could be accounted for by individual raters’ peculiarities of perception. Actual performance accounted for only 21% of the variance. This led the researchers to conclude (in How People Evaluate Others in Organizations, edited by Manuel London): “Although it is implicitly assumed that the ratings measure the performance of the ratee, most of what is being measured by the ratings is the unique rating tendencies of the rater. Thus ratings reveal more about the rater than they do about the ratee.”
This gave us pause. We wanted to understand performance at the individual level, and we knew that the person in the best position to judge it was the immediate team leader. But how could we capture a team leader’s view of performance without running afoul of what the researchers termed the “idiosyncratic rater effect”?
Putting Ourselves Under the Microscope
We also learned that the defining characteristic of the very best teams at Deloitte is that they are strengths oriented. Their members feel they are called upon to do their best work every day. This discovery was not based on intuitive judgment or gleaned from anecdotes and hearsay; rather, it was derived from an empirical study of our own high-performing teams.
Our study built on previous research. Starting in the late 1990s, Gallup conducted a multi-year examination of high-performing teams that eventually involved more than 1.4 million employees, 50,000 teams, and 192 organizations. Gallup asked both high- and lower-performing teams questions on numerous subjects, from mission and purpose to pay and career opportunities, and isolated the questions on which the high-performing teams strongly agreed and the rest did not.
It found at the beginning of the study that almost all the variation between high- and lower-performing teams was explained by a very small group of items. The most powerful one proved to be “At work, I have the opportunity to do what I do best every day.” Business units whose employees chose “strongly agree” for this item were 44% more likely to earn high customer satisfaction scores, 50% more likely to have low employee turnover, and 38% more likely to be productive.
We set out to see whether those results held at Deloitte. First we identified 60 high-performing teams, which involved 1,287 employees and represented all parts of the organization. For the control group, we chose a representative sample of 1,954 employees. To measure the conditions within a team, we employed a six-item survey. When the results were in and tallied, three items correlated best with high performance for a team:
- My coworkers are committed to doing quality work
- The mission of our company inspires me
- I have the chance to use my strengths every day
Of these, the third was the most powerful across the organization.
All this evidence helped bring into focus the problem we were trying to solve with our new design. We wanted to spend more time helping our people use their strengths—in teams characterized by great clarity of purpose and expectations—and we wanted a quick way to collect reliable and differentiated performance data. With this in mind, we set to work.
We began by stating as clearly as we could what performance management is actually for, at least as far as Deloitte is concerned. We articulated three objectives for our new system. The first was clear: It would allow us to recognize performance, particularly through variable compensation. Most current systems do this.
But to recognize each person’s performance, we had to be able to see it clearly. That became our second objective. Here we faced two issues—the idiosyncratic rater effect and the need to streamline our traditional process of evaluation, project rating, consensus meeting, and final rating. The solution to the former requires a subtle shift in our approach. Rather than asking more people for their opinion of a team member (in a 360-degree or an upward-feedback survey, for example), we found that we will need to ask only the immediate team leader—but, critically, to ask a different kind of question. People may rate other people’s skills inconsistently, but they are highly consistent when rating their own feelings and intentions. To see performance at the individual level, then, we will ask team leaders not about the skills of each team member, but about their own future actions with respect to that person.
At the end of every project (or once every quarter for long-term projects) we will ask team leaders to respond to four future-focused statements about each team member. We’ve refined the wording of these statements through successive tests, and we know that at Deloitte they clearly highlight differences among individuals and reliably measure performance.
Here are the four:
- Given what I know of this person’s performance, and if it were my money, I would award this person the highest possible compensation increase and bonus. [Measures overall performance and unique value to the organization on a five-point scale from “strongly agree” to “strongly disagree”].
- Given what I know of this person’s performance, I would always want him or her on my team. [Measures ability to work well with others on the same five-point scale].
- This person is at risk for low performance. [Identifies problems that might harm the customer or the team on a yes-or-no basis].
- This person is ready for promotion today. [Measures potential on a yes-or-no basis].
In effect, we are asking our team leaders what they would do with each team member rather than what they think of that individual. When we aggregate these data points over a year, weighting each according to the duration of a given project, we produce a rich stream of information for leaders’ discussions of what they, in turn, will do—whether it’s a question of succession planning, development paths, or performance-pattern analysis. Once a quarter, the organization’s leaders can use the new data to review a targeted subset of employees (those eligible for promotion, for example, or those with critical skills) and can debate what actions Deloitte might take to better develop that particular group. In this aggregation of simple but powerful data points, we see the possibility of shifting our 2-million-hour annual investment from talking about the ratings to talking about our people—from ascertaining the facts of performance to considering what we should do in response to those facts.
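The aggregation described above—snapshot responses collected per project, then weighted by project duration over the year—can be sketched in code. This is a minimal illustration only: the numeric mapping of the five-point scale, the field names, and weighting by weeks are assumptions made for the sketch, not details the article specifies.

```python
from dataclasses import dataclass

# Five-point agreement scale mapped to numbers (this mapping is an
# assumption; the article does not say how responses are scored).
SCALE = {
    "strongly disagree": 1, "disagree": 2, "neutral": 3,
    "agree": 4, "strongly agree": 5,
}

@dataclass
class Snapshot:
    """One team leader's end-of-project response for one team member."""
    project_weeks: int          # project duration, used as the weight
    compensation: str           # "if it were my money..." item
    want_on_team: str           # "always want him or her on my team" item
    at_risk: bool               # yes/no: at risk for low performance
    ready_for_promotion: bool   # yes/no: ready for promotion today

def weighted_average(snapshots, item):
    """Duration-weighted mean of a five-point item across a year's projects."""
    total_weeks = sum(s.project_weeks for s in snapshots)
    return sum(SCALE[getattr(s, item)] * s.project_weeks
               for s in snapshots) / total_weeks

# A hypothetical year: one 12-week project and one 4-week project.
year = [
    Snapshot(12, "strongly agree", "agree", at_risk=False, ready_for_promotion=False),
    Snapshot(4, "agree", "strongly agree", at_risk=False, ready_for_promotion=True),
]
print(round(weighted_average(year, "compensation"), 2))  # → 4.75
```

Note how the longer project dominates the result: the 12-week “strongly agree” pulls the compensation item to 4.75 rather than the unweighted mean of 4.5.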
In addition to this consistent—and countable—data, when it comes to compensation, we want to factor in some uncountable things, such as the difficulty of project assignments in a given year and contributions to the organization other than formal projects. So the data will serve as the starting point for compensation, not the ending point. The final determination will be reached either by a leader who knows each individual personally or by a group of leaders looking at an entire segment of our practice and at many data points in parallel.
We could call this new evaluation a rating, but it bears no resemblance, in generation or in use, to the ratings of the past. Because it allows us to quickly capture performance at a single moment in time, we call it a performance snapshot.
The Third Objective
Two objectives for our new system, then, were clear: We wanted to recognize performance, and we had to be able to see it clearly. But all our research, all our conversations with leaders on the topic of performance management, and all the feedback from our people left us convinced that something was missing. Is performance management at root more about “management” or about “performance”? Put differently, although it may be great to be able to measure and reward the performance you have, wouldn’t it be better still to be able to improve it?
Our third objective therefore became to fuel performance. And if the performance snapshot was an organizational tool for measuring it, we needed a tool that team leaders could use to strengthen it.
To read more about how Marcus is working with Deloitte to fuel performance, see the original article in Harvard Business Review.
About the Author
Marcus Buckingham is a global researcher, thought leader and leading expert on talent, focused on unlocking people’s strengths, increasing their performance and pioneering the future of how people work. A former senior researcher at the Gallup Organization, he now guides the vision of the ADP Research Institute as Head of People + Performance. He is the author of nine books, including First, Break All the Rules and Now, Discover Your Strengths, two of the best-selling business books of all time. His latest release, Nine Lies About Work: A Freethinking Leader’s Guide to the Real World, takes an in-depth look at the lies that pervade our workplaces and the core truths that can help us change them for the better.
Years at GLS: 2004, 2007, 2017