Featured Collaborator of the Month: Daylian Cain [1]

Interview with author Daylian M. Cain [2], associate professor of Marketing, Yale School of Management


What is the main research for which you are known?

I study judgment and decision making, or, as I like to say, “why good people do bad things, and why smart people do dumb things.” Much of my work is on conflicts of interest [3] and how they are problems not only for the intentionally corrupt but also for well-meaning professionals who fall prey to unintentional bias.

I have published various papers on the perverse effects of disclosing conflicts of interest (co-authored with Alan Detsky, George Loewenstein, Don Moore, and Sunita Sah). These papers show how disclosure can alter the balance of one’s concern for others. For example, when an advisor discloses a conflict of interest to an advisee, the advisor often feels they can offer biased advice more freely because the advisee has been warned—caveat emptor. Also, disclosure can make advisees feel pressured into complying with advice they distrust in order to help the advisor satisfy the interest she has disclosed; this is what we call a “panhandler effect,” as, for instance, when your doctor discloses that your taking part in an experimental study will help her out financially. After that disclosure, you feel pressured into helping. This ties into my work on reluctant altruism, where people often help in situations they would prefer to avoid altogether. I am not saying that disclosure is a bad thing, but conflicts of interest are surprisingly dangerous and disclosure is not the panacea that it is sometimes made out to be.

Recently, my work has looked into why people help others and how altruism can be turned on and off. This work includes research with Jason Dana on “dictator exit” effects, “paying people to look at the consequences of their own actions,” and “giving vs. giving in,” which is featured below.

If you could highlight just one paper or research finding that relates to ethical systems, what would it be and why?

I will suggest an odd choice: “The Robust Beauty of Improper Linear Models [4]” by Robyn Dawes. Think of it as a primer on the power of “Moneyballing,” or using data and algorithms to make predictions. In a world of “Big Data,” this inspiring paper reminds us that even simple algorithms and small data often do a better job than our intuitive judgments. The article connects with ethical systems in subtle ways and provides several keys to better ethical decision making.

The Dawes article basically suggests that a kid with an Excel sheet can out-predict a group of experts on almost any prediction task. The kid is not terribly accurate, but experts turn out to be even worse. The kid asks the experts how they plan to make the prediction, recognizing that the experts do have real expertise. Then the kid models the prediction as best he can. The kid—as crude as his model might be—often does a better job of following the expert advice than even the experts themselves (reflecting the wisdom of “do as I say, not as I do”). Experts inform the model, but Excel carries it out more consistently. Humans are just not great at putting rows and columns together in their heads in systematic ways. Computers literally excel at that.
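The “kid with an Excel sheet” can be sketched in a few lines of code. Below is a minimal illustration of a unit-weight (“improper”) linear model in the spirit of Dawes: standardize each cue the experts say they use, then add the standardized cues with equal weights. The recruits and attribute names here are invented for illustration, not from Dawes or Cain.

```python
# Unit-weight ("improper") linear model, after Dawes:
# standardize each predictor, then sum with equal weights.
# All candidate data and attribute names below are hypothetical.

def standardize(values):
    """Convert raw scores to z-scores so cues are on a common scale."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    sd = var ** 0.5 or 1.0  # avoid division by zero for constant cues
    return [(v - mean) / sd for v in values]

def unit_weight_scores(candidates, attributes):
    """Score each candidate as the equal-weight sum of standardized cues."""
    cols = {a: standardize([c[a] for c in candidates]) for a in attributes}
    return [sum(cols[a][i] for a in attributes) for i in range(len(candidates))]

# Hypothetical recruits, rated on the cues the experts claim to use.
recruits = [
    {"gpa": 3.9, "work_sample": 7, "structured_interview": 6},
    {"gpa": 3.2, "work_sample": 9, "structured_interview": 8},
    {"gpa": 3.6, "work_sample": 5, "structured_interview": 9},
]
cues = ["gpa", "work_sample", "structured_interview"]
scores = unit_weight_scores(recruits, cues)
best = max(range(len(recruits)), key=lambda i: scores[i])
print(best, [round(s, 2) for s in scores])
```

The point is not that these weights are optimal—they are deliberately crude—but that the model applies the experts’ own cues consistently, which is exactly where human judgment tends to fail.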

The people a firm recruits, and how it recruits them, have obvious ethical implications. The Dawes article provides many insights into how the hiring process should be structured to make it more fair. Yet, even professors who teach MBAs how to hire often fail to use best practices when recruiting for their own organizations. This is a classic example of the “knowing vs. doing gap,” which is another culprit behind many ethical failures.

When I was first offered professor jobs, I was surprised at how unstructured it all was and how much weight was put on the campus visit. If the experts want to overweight a campus visit, that is their prerogative (maybe), but I wondered if they would openly consent to how much weight is given to such things in practice. I thought, “If these MBA professors don’t listen to the vast research on how to hire people, what chance does my little research ever have of having an impact here?”

The deeper tie to ethical systems is that using algorithms to aid in decision making often requires one to quantify intangibles, e.g., a recruit’s “fit” with the group. Thinking about how to score “intangibles” is a valuable task for groups to engage in. It is something I focus on in my MBA classes and it sheds light on how to assign a numeric value to one’s ethical values. For example, how much is “consumer safety” worth to the firm, how do we measure it, and how much should we spend to bring it from an 8/10 to a 9/10? Precise questions often come from precise models.

Excel programming would require that many things be spelled out and specified: What are the variables that ought to be targeted, how are they being measured, and how much weight should be put on them? Even if readers of the article ultimately refuse to make their decisions using Excel, a tremendous benefit is gained from groups merely wondering and trying to agree on what such an Excel sheet might look like. Without such a discussion, everyone weighs different aspects idiosyncratically (“Oh, that recruit plays poker like me, so he will fit in great around here”).
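To make that group exercise concrete, here is a hypothetical scoring sheet of the kind such a discussion might produce: the group must name its cues, rate each on an explicit 0–10 scale, and commit to weights that sum to one. Every cue name and weight below is an invented example, not a recommendation from the interview.

```python
# Hypothetical hiring scoring sheet: cues, scales, and weights made explicit.
# All cue names and weight values are invented for illustration.

WEIGHTS = {                   # agreed weights must sum to 1.0
    "technical_skill": 0.4,
    "work_sample":     0.3,
    "team_fit":        0.2,   # the "intangible," now scored explicitly
    "references":      0.1,
}

def score_candidate(ratings, weights=WEIGHTS):
    """Weighted sum of 0-10 ratings; fails loudly if a cue is missing."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[cue] * ratings[cue] for cue in weights)

candidate = {"technical_skill": 8, "work_sample": 6,
             "team_fit": 9, "references": 7}
print(round(score_candidate(candidate), 2))  # 0.4*8 + 0.3*6 + 0.2*9 + 0.1*7 = 7.5
```

Even if the group never actually uses the sheet, arguing over what belongs in `WEIGHTS`—and what a “9” on team fit would mean—forces the idiosyncratic weighting into the open.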

All this highlights a broader point: Leaving the complicated aspects of a decision to one’s intuition is often a recipe for good people doing bad things. Granted, being explicit about “intangibles” is difficult, taboo, and fraught with error. However, being vague (and “gutsy”) about such things is often even worse.

Tell us about one of your current or future research projects.

With Jason Dana [5], I am looking at the effectiveness of “paying people to look at the social consequences of their actions.” Prior research has suggested that people prefer to remain “strategically ignorant” of the negative social consequences of their actions and that this uncertainty facilitates selfish behavior.

We offered participants various incentives to examine the potential consequences of their actions, and this approach reduced the selfishness of the participants’ behavior. Interestingly, participants who were paid to examine these consequences were more generous than participants who “looked” for free.

We also find that these payments can be cost effective: small payments can lead to social welfare gains that are larger than the total cost of the subsidies. Our results suggest an efficient way to change behavior because it may be cheaper to pay someone to look at information about his social footprint—thus activating his social preferences—than it would be to directly monitor/reward pro-social behavior.

If you could only give one piece of advice to companies, what would it be?

When I speak to executive audiences, I often try to illuminate the power of perspective taking. Seeing the world how others see it can be a powerful aid to those who are trying to do the right thing, especially when the perspective taken is that you are incorrect in some way. It works better than you might think. “Could it be true?” seems like merely a logical flipside to “could it be false?”, but these questions turn out to be dramatically different for the brain; they are logically similar but psychologically worlds apart. One need not second-guess oneself into indecision. Groups who “consider the opposite” early on in the process often arrive at better decisions, decide faster, and feel more connected to the decisions/products that they ultimately make.

Next time you look at your own PowerPoint presentation, instead of asking yourself, “Is this ok?”, ask yourself, “What might a few naysayers dislike about this presentation? What is missing? What can I cut?” Suddenly, you will see your presentation in a whole new light and you will be able to make it much better.

=========================================================================================================

Featured academic article:

Giving versus Giving in [6], DM Cain, J Dana, GE Newman

The Academy of Management Annals 2014

ABSTRACT: Altruism is central to organizational and social life, but its motivations are not well understood. We propose a new theoretical distinction that sorts these motivations into two basic types: “giving” indicates prosocial behaviors in which one willingly engages, while “giving in” indicates prosocial behavior in which one reluctantly engages, often in response to social pressure or obligation. Unlike those who give, those who give in prefer to avoid the situation that compels altruism altogether, even if doing so leaves the would-be beneficiary empty-handed. We suggest that the distinction between giving and giving in is not only central from a theoretical standpoint but also has important methodological implications for researchers trying to study prosocial behavior and for practitioners trying to encourage it.

Featured popular article:

"Tainted Altruism [7]" by Daylian Cain and George Newman was the subject of a recent Time Magazine article: “When Doing Good Means You’re Bad” [8]

Learn more about Daylian Cain and the entire roster of Ethical Systems collaborators. [9]
