Performance evaluations have been awful for a long time. Forced ranking, the method that many large accounting firms employ, has successfully sucked numerous souls from the bodies of many talented accountants. People grumble, yet it goes on. And on. And on.
But now Deloitte, bless their hearts, has decided to take the issue of terrible performance evaluations head on. Their efforts landed them the cover of next month's Harvard Business Review and it sounds like they've made some headway.
Before jumping into the details, it's interesting to note how ingrained the current methods are. The firm did some internal research to find out what people thought of the current system, and what they found is a little surprising:
Internal feedback demonstrates that our people like the predictability of this process and the fact that because each person is assigned a counselor, he or she has a representative at the consensus meetings. The vast majority of our people believe the process is fair.
Predictable. Representation. Fair. Doesn't sound so bad, does it? Except it is.
When Deloitte surveyed business leaders, it found that 58% of them "believe that their current performance management approach drives neither employee engagement nor high performance."
Ron Baker, hater of the billable hour, also hates performance evaluations; he speaks and writes about alternatives to the traditional model. In an email to GC, he agreed with Deloitte's conclusion that the current model fails to improve performance and he called Deloitte's proposal "a big step in the right direction."
Now for the details.
One reason performance evals are so bad is that they are time-consuming. Deloitte discovered that the firm's people spent 2 million hours a year on "completing the forms, holding the meetings, and creating the ratings," and the majority of that time was taken up by "leaders’ discussions behind closed doors about the outcomes of the process." In other words, meetings about the meetings. God, that sounds like an eternal merry-go-round of hellfire, doesn't it?
In his email to GC, Baker called the 2 million hours "staggering" and said that even though it's clear how much that time costs firms, the method persisted "because that’s the way it’s always been done."
Ashley Goodall, director of leader development at Deloitte Services LP, co-wrote the HBR article with Marcus Buckingham, an adviser and author. They found that a big problem with the old way of evaluating performance is its inherent subjectivity. Humans have biases that are inflated through the process of rating people on various goals. That is, the differing perceptions of all the people rating an individual (e.g. a boss, a peer, a subordinate) on a given skill (e.g. critical thinking) accounted for a large portion of the variance in the ratee's overall score. Each of those people has a different idea about what "critical thinking" is and whether or not the person they are rating meets that definition. Goodall and Buckingham found that actual performance accounted for a much smaller portion of the variance.
To eliminate this subjectivity, Goodall and Buckingham switched the focus from skills to future actions, asking team leaders to answer four questions:
1. Given what I know of this person’s performance, and if it were my money, I would award this person the highest possible compensation increase and bonus [measures overall performance and unique value to the organization on a five-point scale from “strongly agree” to “strongly disagree”].
2. Given what I know of this person’s performance, I would always want him or her on my team [measures ability to work well with others on the same five-point scale].
3. This person is at risk for low performance [identifies problems that might harm the customer or the team on a yes-or-no basis].
4. This person is ready for promotion today [measures potential on a yes-or-no basis].
Our design calls for every team leader to check in with each team member once a week. For us, these check-ins are not in addition to the work of a team leader; they are the work of a team leader. If a leader checks in less often than once a week, the team member’s priorities may become vague and aspirational, and the leader can’t be as helpful—and the conversation will shift from coaching for near-term work to giving feedback about past performance. In other words, the content of these conversations will be a direct outcome of their frequency: If you want people to talk about how to do their best work in the near future, they need to talk often. And so far we have found in our testing a direct and measurable correlation between the frequency of these conversations and the engagement of team members.
Many of the successful consumer technologies of the past several years (particularly social media) are sharing technologies, which suggests that most of us are consistently interested in ourselves—our own insights, achievements, and impact. So we want this new system to provide a place for people to explore and share what is best about themselves.
[H]ere’s what we’re asking ourselves and testing: What’s the most detailed view of you that we can gather and share? How does that data support a conversation about your performance? How can we equip our leaders to have insightful conversations? Our question now is not What is the simplest view of you? but What is the richest?