Many of us love competition and, more important, winning. Competition drives us toward our goals and motivates us to improve our performance, while the prestige and power that come from winning can provide a powerful morale boost. What’s more, winning raises levels of testosterone and dopamine, which, in turn, increase our confidence and willingness to take risks, and thus our chances of further success.
At the same time, the need to win can blind us to ethical considerations. It’s a potential problem in all kinds of areas: colleagues who have a strong rivalry at work, managers who need to make their numbers for the quarter, even political parties that spend campaign funds to attract votes. A common theme in these situations is that there are only a few winning slots — and maybe just one — with massive stakes in terms of money, advancement, and fame.
What’s often driving this fierce competition is the knowledge that our performance is being assessed not in absolute terms but in comparison with others’. In the workplace, such “rank-and-yank” methods — also known as the vitality curve, forced rankings, and stacking systems — are regularly used to judge performance, whereby, say, the top 20% of employees are categorized as high performers and the bottom 10% face redundancy. Similarly, bell-curve grading in an MBA classroom ensures that students are categorized and graded relative to their peers rather than on their absolute performance.
In our research, recently published in the journal Human Resource Management, we found that performance evaluation schemes based on peer comparison can encourage unethical behavior. In one study, we asked 164 MBA students to read a hypothetical scenario (based on a true story) about an investment banker facing an ethical dilemma, and to estimate the likelihood that this banker would indulge in unethical behavior. The students were randomly assigned to three conditions for how the banker would be paid: a fixed salary with no bonus; a fixed salary with a bonus tied to the banker’s number of trades; and a fixed salary with a bonus tied to the banker’s performance relative to his peers. (For more details of this study and the ones below, see the sidebar “Our Studies.”) Our results showed that the students in the relative performance condition expected the banker to be more likely to behave in an unethical manner.
In another study, we investigated people’s ethical behavior in self-reporting their performance. Using Amazon’s Mechanical Turk platform, we invited 160 participants of U.S. origin to take a 10-question IQ quiz. They were asked to self-verify their answers and report their scores to us. Again, participants were randomly assigned to one of three compensation groups: a fixed participation fee of 10 cents, irrespective of performance; a fixed fee with a bonus based on the number of correct answers they reported; and a fixed fee with a bonus for only the top scorers. The results surprised us. The groups didn’t differ much in actual performance, and most participants overreported their scores. But both the incidence and the magnitude of overreporting were highest in the third group, the one in which only top performers received a bonus. Notably, every single person in that group overreported their score. In short, the competitive pressure and the comparisons encouraged rule breaking.
Organizations continue to experiment with and debate the pros and cons of comparison-based performance management systems. In recent years, for example, Yahoo endorsed them, while Microsoft abandoned them. One thing is clear, though: Relative comparisons are widespread and here to stay. Given that, what can be done to limit the temptation toward ethical breaches that accompanies such competitive, comparison-based settings?
We propose a subtle and simple intervention we call consequential reflection: prompt individuals to reflect on the positive and negative consequences of their decisions. In another study of ours, participants who took a moment to think through and write down such possible consequences were less willing to act unethically. Again on Mechanical Turk, we invited 184 participants of U.S. origin to take part in a decision-making scenario. Participants assumed the role of a university professor, close to tenure evaluation, who had a manuscript under review at a top journal. The data analysis for the manuscript had not produced the desired results, and as a result the professor was tempted to manipulate the data. Participants were asked how likely they would be to manipulate the data; some were first prompted to reflect on the consequences of doing so. We found that those prompted participants were significantly less likely to take the unethical action.
Why would this kind of prompt be effective? Research on the human mind tells us we run on autopilot much of the time. The pressures of our jobs mean we often don’t take time to pause and reflect. Therefore, our intuitive, habitual behaviors take over. In matters of ethics, this can lead to a self-centered, “me-first” attitude, focused on the immediate benefits for ourselves and ignoring the long-term consequences of ethical lapses.
To put this idea into practice, we propose that leaders try the following:
- Conduct pre-mortems. Ask employees and teams to regularly stop and reflect before making crucial ethically charged decisions. Instead of diagnosing decisions after the fact, take the time to think about their positive and negative consequences early on.
- Organize ethics hackathons. On a regular basis, get team members together to share upcoming decisions. Let peers dissect them, play devil’s advocate, and raise possible issues with various stakeholders.
- Train for reflection. Encourage employees to embrace a reflective, mindful approach to decision making. Training sessions on mindfulness can be beneficial for helping employees to slow down and think critically.
- Make ethics part of culture. Include consequential reflection in values statements and culture guidelines in your organization. Reminders such as “Think first” and “Seek opinions” can be placed prominently in offices.
We believe the strengths of our intervention are that it’s effective; cheap and easy to implement; and unlikely to provoke strong objections. As our research shows, simple psychological interventions can be a valuable part of an organization’s tool kit for creating an ethical culture.
from HBR.org https://ift.tt/2QtgTpZ