Episode 005:

Sharing is Caring: Moods, Cooperation, and Fairness

with Dr. Barbara Mellers

March 10th, 2021

Episode description

How does our mood affect how cooperatively we act in a situation? Dr. Barbara Mellers, I. George Heyman University Professor at the University of Pennsylvania, joins your host, Dr. Joe Sweeney, Executive Director of the Alliance for Decision Education, to talk about the difference between judgments and decisions, studies on what happens when our desires and our sense of fairness are in conflict, and research on how a positive mood might make you act more cooperatively. You’ll also hear about the differences between how we should make decisions and how we actually make decisions, and what we can do to improve our decision-making.

Barbara is the I. George Heyman University Professor of Psychology at the University of Pennsylvania, teaching in both the Marketing Department at Wharton and the Psychology Department in the School of Arts and Sciences. Prior to joining Penn, Barbara spent a decade as Professor of Marketing and Organizational Behavior at the University of California, Berkeley, before which she spent six years as a Professor of Psychology at Ohio State University.

In Barbara’s own words, “I am interested in how people develop beliefs, formulate preferences, and arrive at choices. I have focused on human decisions that deviate from rational or normative principles due to fairness concerns, anticipated emotions, contextual effects, or response mode effects. I use laboratory studies to manipulate and control variables that inform us about underlying processes. I am also interested in how behavioral decision making can shape better public policy.”

Barbara sits on the Scientific Advisory Board of the Max Planck Institute for Human Development and is a former President and Executive Board Member of the Society for Judgment and Decision Making. She has won numerous prestigious awards and authored or contributed to scores of publications.

Joe: I’m excited to welcome our guest today, Dr. Barbara Mellers. Barbara is the I. George Heyman University Professor of Psychology at the University of Pennsylvania. Her research focuses on behavioral decision theory, emotions, fairness, preference measurement, and how behavioral decision-making can shape better public policy. As Co-Principal Investigator of the Good Judgment Project, her work has revolutionized how we think about the science of forecasting. Barbara is also on the Advisory Council here at the Alliance for Decision Education. I first came across Barbara’s work several years ago when I began looking at the connection between making predictions and decision-making. Welcome, Barbara. I’ve been looking forward to our conversation. Could you just tell us, in your own words, a little bit about what you do and your path to getting there?

Barbara: Yeah, thank you, Joe. I am a cognitive psychologist and I hold appointments in the psych department and the business school at the University of Pennsylvania. I’ve been teaching and researching judgment and decision-making, mostly from a descriptive point of view, that is to say, how people actually make decisions, but I’ve gotten more interested in how to improve decision-making in the last decade or so. So my focus has been on looking at the glass as half full instead of half empty.

Joe: Well, that’s going to be really interesting. One of the things that I try to do before any podcast is review things that the guest has written, and I gotta say, you’ve been quite productive. You made that a bit of a challenge. The range of things that you’ve paid attention to and studied is fascinating. But before we even dive in there, could you just explain to our audience what you mean by judgment versus decision-making? Because you separated those two.

Barbara: Yeah, sure. Judgment refers to a single evaluation. It could be an evaluation of many things, as might happen with a prediction. That’s a form of judgment. It could be an evaluation of an applicant for a particular position. It could be just a rating of how I feel right now, how happy I am. A decision involves at least two options and they might not be obvious, but there’s gotta be some selection of an alternative or an act and that is a choice or a decision.

Joe: Okay and so when you began looking at these things, where did you start? What [was] your area of interest or the topic that you began exploring at first and tried to describe?

Barbara: Well, my dissertation was on models of fairness in situations where you have a scarce set of resources and you need to distribute them among a group of people. And that’s been a theme in my work for many years. Later on, I worked on fairness in the context of economic games and in the context of perceptions of prices in consumer judgments. And actually, right now I am writing a paper with several other people on perceptions of fairness in the U.S., on income inequality and how it relates to perceptions of autonomy, one’s own feeling of being in control.

Joe: That’s interesting. And what are you finding with regard to how it relates to people’s sense of autonomy or control?

Barbara: Well, it turns out that people who feel greater autonomy are also more likely to believe that inequality is based on merit, that it reflects differences in effort and ability, and that there’s also less of it than actually exists. So people who tend to be more autonomous also tend to perceive less inequality in the world than those who feel less autonomous.


“So people who tend to be more autonomous also tend to perceive less inequality
in the world than those who feel less autonomous.”


Joe: Did you come at that by looking at how they correlate, or did you try to move people’s sense of autonomy and then see whether there was a change in their sense of inequality and fairness?

Barbara: Yeah. Well, we’re doing experiments now to manipulate one’s sense of autonomy, but it initially arose from representative surveys of U.S. citizens.

Joe: Okay. All right. So somebody thinks of themselves as being autonomous in some way that you’re asking about. And the more that they think of themselves as having agency or autonomy in our society, the more fair they think that society is, basically. Am I getting that right?

Barbara: You’re getting that right.

Joe: And then the perception of fairness, is there a way that you’re evaluating that against some objective standard, so that you can say somebody who feels this autonomous, with this sense of agency, is misconstruing how fair things are? Or are you just recording whether they think it’s fair or not?

Barbara: What we’re doing is asking them what’s fair. Is X fair? Is Y fair? And we’re not telling them what fairness means. We’re letting them tell us what they think fairness means. So it’s a self-report about various conditions described in the survey. Actually, that’s something that we’ve done in other lines of work. I’ve tried to take the perspective of players in economic games. Two famous economic games are the Ultimatum Game and the Dictator Game. They’ve been used to understand human cooperation. And here’s how they work. In the Ultimatum Game, there are two anonymous players who are randomly assigned to the roles of a proposer and a responder. The proposer is given some amount of money, often $10, and he or she decides how much or how little of it to give to the responder.


“Two famous economic games are the Ultimatum Game and the Dictator Game.
They’ve been used to understand human cooperation and here’s how they work.”


Now the responder can accept or reject. Acceptance means the money is distributed that way. Rejection means nobody gets anything. Economic theory says the prediction is straightforward. It’s a Nash equilibrium that says these proposers ought to maximize their gains and offer just the smallest possible amount, say a penny. And the responder ought to accept, because something’s better than nothing. But that’s not what happens. The data show something quite different. First, the proposers offer too much, usually somewhere around a half or just a little less. And the responders often reject low offers, typically those in the neighborhood of 20% of the total amount or less, so that would be $2 or under. They’re not being consistent with standard economic theory.


“… these proposers ought to maximize their gains and offer just the smallest possible amount,
say a penny. And the responder ought to accept because something’s better than nothing,
but that’s not what happens.”
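
[Editor’s note: for readers who want to see the payoff rule concretely, here is a minimal Python sketch of the Ultimatum Game as Barbara describes it. The $10 pot and example offers come from the conversation; the specific calls printed below are purely illustrative.]

    POT = 10  # dollars, as in the example

    def ultimatum_payoffs(offer, accepted):
        """Return (proposer, responder) payoffs for a given offer."""
        if accepted:
            return POT - offer, offer  # the proposed split stands
        return 0, 0                    # rejection: nobody gets anything

    # The Nash-equilibrium prediction: offer the smallest possible amount,
    # and the responder accepts because something beats nothing.
    print(ultimatum_payoffs(offer=0.01, accepted=True))  # (9.99, 0.01)

    # What the data show instead: proposers offer around half, and
    # responders often reject offers of roughly $2 or less.
    print(ultimatum_payoffs(offer=5, accepted=True))     # (5, 5)
    print(ultimatum_payoffs(offer=2, accepted=False))    # (0, 0)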


Now, the question is why? And there have been two different explanations proposed. The first one is that the proposers aren’t stupid. They know that responders might reject, so what they do is try to maximize their expected profits. And basically that means offering somewhere around half. The other explanation is that people have a taste for fairness. They want to be treated fairly. They want to treat other people fairly. And they want to punish people who aren’t behaving fairly.


“… the other explanation is that people have a taste for fairness, they want to be treated fairly.
They want to treat other people fairly and they want to punish people who aren’t behaving fairly.”


So to distinguish between these two explanations, some very clever people came up with the Dictator Game. And this is a game in which, once again, two players are randomly assigned to roles, here an allocator and a receiver. The allocator gets some amount of money, let’s say $10. He or she can give as much or as little as he or she wishes, and the receiver has to accept it. Now, this is a test between the two explanations in the following way. If the allocators simply want to maximize their wealth, they’ll just take it all. But if they care about cooperation and sharing and fairness, then they should do what ultimatum proposers do and share roughly half. Okay. So what happens? The answer is: lots of things. Some people give nothing, some people share, and some people seem to compromise somewhere in the middle. So what do you make of this as a psychologist? I think what’s going on is something that involves trade-offs between what we want and what we think we ought to do.


“What’s going on is something that involves trade-offs between
what we want and what we think we ought to do.”


So how can you measure those things? What we did was to set up games with $10. If you divide on the dollar, there are 11 possible allocations that you as a proposer could make: zero-ten, one-nine, all the way up. Now, the first thing we did was to ask people, for each of these 11 possible splits, what makes you happy? What would make you feel pleasure? And then we say, okay, now rank order these preferences from first, and we’ll implement the first one you pick, and then tell us what you’d do after that. When it’s all done, we say, now tell us what you think you ought to do, whatever that is. So everything is defined by the player. Nothing’s defined by us. What you end up having is people for whom wants and oughts are aligned and people for whom wants and oughts are in conflict. There’s value conflict there.

So what can you make of that? Well, in life, we often have the chance to exit or walk away from a situation. You cross the street when you see the panhandler there, and maybe you don’t want to give him anything. So you try to get yourself out of these situations. Well, there were some other clever experimenters who tried to implement that opportunity to exit in the Ultimatum Game. And what they did was to say, okay, proposer, what do you want to give the responder? And before that was implemented, the experimenter said, hey, look, I’ll make you an offer. If you want to pay me a dollar and take the $9, I promise you, I won’t tell the responder that there was even a game at all. Now, for someone who wants to maximize gains, that’s really stupid. Why would they give the experimenter a dollar? Well, what happens is that we could predict the people who exit. And a substantial number do, roughly a third of the subjects take that option, and they are the players who are conflicted.


“What happens is that we could predict the people who exit. And a substantial number do,
roughly a third of the subjects take that option, and they are the players who are conflicted.”
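
[Editor’s note: a minimal Python sketch of the wants-versus-oughts measure described above. The 11 possible splits of $10 and the $1 exit option come from the studies Barbara describes; the sample players below are hypothetical.]

    # Each player ranks the 11 possible dollar splits of a $10 pot
    # (give $0, $1, ..., $10) by what they WANT, then reports what
    # they think they OUGHT to do.

    def is_conflicted(want_give, ought_give):
        """Value conflict: the split a player most wants to make differs
        from the split they think they ought to make."""
        return want_give != ought_give

    # Hypothetical players, as (dollars they most want to give,
    # dollars they think they ought to give) out of the $10 pot.
    players = [(0, 5), (5, 5), (2, 5)]

    for want, ought in players:
        # Conflicted players were the ones who could be predicted
        # to pay $1 for the chance to exit the game.
        print(f"want ${want}, ought ${ought} -> conflicted: {is_conflicted(want, ought)}")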


Joe: Oh, wow. That’s so fun.

Barbara: None of these constructs of wants and oughts are in standard economic theory. But the story gets even a little bit better. There’s been work on positive mood inductions. In other words, putting subjects in good moods, either with a small gift of hard candy or by having them watch a video. In this case, we used Robin Williams comedy routines and got them laughing and feeling happy. We tell them it’s about some other study, humor in marketing or whatever. And then we have them play the Dictator Game. That’s the one where you can’t reject.

Joe: Right, now they’re more benevolent as dictators.

Barbara: You literally shift their wants. The entire distribution. It’s almost as if they all want to make the other player a little happier. And that’s part of how they’re feeling now. You’re making most of them more aligned, so wants and oughts are not in conflict. And you’re turning the Dictator Game into an Ultimatum Game, such that at least two thirds of participants share 50-50, which says a lot about good moods and human cooperation. Now, these good moods are incidental. In other words, they’re created by something that has nothing to do with the decision-making task. But it has a profound effect on the behavior, in this particular case, and turns dictators into ultimatum players, essentially.


“You literally shift their wants. The entire distribution. It’s almost as if they all want to make the other player a little happier. And that’s part of how they’re feeling now. You’re making most of them
more aligned, wants and oughts are not in conflict. And you’re turning the Dictator Game
into an Ultimatum Game, such that at least two thirds of participants share 50-50,
which says a lot about good moods and human cooperation.”


Joe: That’s amazing. I thought you were going to go somewhere about their self-image, their sense of, should they be a fair person or an honest person? What are they holding in their head as goals beyond just how much money they can make out of this game? What other goals are they trying to satisfy for themselves? I thought you were going to go somewhere with, they’ve got a sense of who they want to see themselves as. And the idea that it’s just as simple as their mood, that’s interesting, and disturbing in a different way.

Barbara: Not to me as a psychologist, but it might be to other people. Anyway, that’s an example of how emotions can be important in our decisions, both in terms of what we anticipate will make us happy and the incidental feelings we have that put us in good or bad moods.

Joe: Just for a moment, taking you out of the researcher role and into the person-in-the-world point of view. So now you know this about our decision-making, what does it make you think about or do differently when you’re making your own decisions? Do you guard against being too positive? Do you make sure that you are positive? Like, what are you doing then with that?

Barbara: I think that it says that if you want cooperative behavior, you’re much more likely to get it when people are feeling happy, in positive moods. There’s quite a bit of research that suggests that’s the case. There’s also other research that suggests people want to regulate their emotions and if they’re in a good mood, they don’t want you to mess it up, so they avoid conflict situations. People are complicated and I think it’s hard to generalize from small studies, but there’s some intriguing things there.


“I think that it says that if you want cooperative behavior, you’re much more likely
to get it when people are feeling happy, in positive moods.”


Joe: Well, I’m mostly wondering for you, as you see these things. I mean, you’ve spent a career investigating how we form judgments and make decisions. There have to be all sorts of ways that it informs your own thinking about your own decisions. This just seems like such a crisp one. You walk away with a new set of insights about people being turned from dictators into cooperators because of their mood, and I’m just wondering, has that shown up in your life? Have you then decided, oh, before I go into this situation, I’m going to be more thoughtful about my mood or the mood of others? Or, I’m going into a negotiation about my car, so I don’t want to be too happy when I go in there, because I actually don’t want to give away too much, but I’ll certainly give the salesperson some candy. What’s happened for you with that?

Barbara: Yeah. Well, I mean, you’re describing a few of the things. You don’t want to ask for much from people if they’re not in a good mood and if they aren’t in a good mood, maybe you want to do something about that before you interact or make requests.

Joe: Relatedly, and only because it came up in the same area of research when I was reading up on some of what you’ve done, you talked about response mode effects, that different phrasing might actually affect people’s answers to questions. Did that come up in this study, or is that a different study?

Barbara: Well, that’s a different study. I’ve spent too much time thinking about the effects of variables that are seemingly irrelevant from a normative perspective, but seriously important from a psychological one. Context effects and response mode effects are examples of these. Let me tell you a little bit about some of this research. You can get people to say that they want different things, depending on how you ask the question. Some of the early work on this came up in the context of risky decision-making. The researchers gave people two gambles. One might be a safe bet, say a 90% chance of winning $10, otherwise nothing. And the other one might be a riskier bet, say a 10% chance of winning $90, otherwise nothing. Okay, they’re matched on expected value. So when you give people those two gambles in a choice task, they say, I want the safe bet.


“You can get people to say that they want different things, depending on how you ask the question…”


They want to walk away with something. But if you tell them, okay, now assume that you own both of them and you have an opportunity to sell them. What is the minimum selling price you’d ask for each one of them, separately? Well, they assign a higher minimum selling price to the 10% chance of winning $90, the risky bet, than to the 90% chance of winning $10.
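
[Editor’s note: the arithmetic behind “matched on expected value,” as a short Python sketch using the two gambles from the example.]

    safe = (0.90, 10)   # (probability of winning, prize in dollars)
    risky = (0.10, 90)

    def expected_value(gamble):
        p, prize = gamble
        return p * prize  # otherwise the player wins nothing

    print(expected_value(safe))   # 9.0
    print(expected_value(risky))  # 9.0

    # The response-mode effect: in a choice task most people pick the
    # safe bet, but when asked for a minimum selling price they price
    # the risky bet higher, a preference reversal that expected-value
    # reasoning alone cannot produce.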

Joe: It was probably a year or two ago, a cousin of mine asked how to make a decision around which spring break to take, and so I wrote up a little guide on how to think about a situation like that using the weight-and-rate method. So I know there are some things we can do to address this [situation where] I’ve got multiple things I’m trying to optimize for. Are there others that jump to your mind, besides taking multiple perspectives or writing it down and using a weight-and-rate method?

Barbara: Well, I tell my students when I teach courses on judgment and decision-making that there are three ways to improve decision-making. The first one is to learn about the normative models: multi-attribute utility theory, as you were just saying, expected utility theory, Bayes’ theorem, signal detection theory, and so forth. The second one is to learn about the biases and the noise, and about attempts to debias, what works, when training is effective and when it’s not. And the third way is to change the environment, to make the environment more decision-friendly, so that you’re more likely to choose the right thing out of habit, with low energy and low effort.
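
[Editor’s note: a minimal Python sketch of the weight-and-rate (multi-attribute) idea Joe and Barbara mention, applied to Joe’s spring-break example. The options, attributes, weights, and ratings are hypothetical, purely for illustration.]

    # Attribute weights (chosen to sum to 1) and 0-10 ratings, higher is better.
    weights = {"cost": 0.5, "fun": 0.3, "travel_time": 0.2}

    options = {
        "beach trip":    {"cost": 4, "fun": 9, "travel_time": 6},
        "staycation":    {"cost": 9, "fun": 5, "travel_time": 10},
        "mountain trip": {"cost": 6, "fun": 8, "travel_time": 4},
    }

    def weighted_score(ratings):
        """Weighted sum of ratings: the core of the weight-and-rate method."""
        return sum(weights[attr] * ratings[attr] for attr in weights)

    for name, ratings in options.items():
        print(f"{name}: {weighted_score(ratings):.2f}")
    print("best option:", max(options, key=lambda name: weighted_score(options[name])))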

Joe: So if you don’t mind Barbara, I’m going to follow up with each one of those, because some of this might be really new to our audience. When you said earlier that you had originally focused on descriptive research, but then later you’ve used now this term normative, could you explain what that is? What’s normative?

Barbara: Okay. Let me make that distinction. Decision scientists often cut up the world in terms of normative, descriptive, and prescriptive decision-making. Normative decision-making is how we ought to make decisions if we want to be consistent with principles of logic, or probability, or economic theory, a small set of axioms that seem quite reasonable. Descriptive decision-making is how people actually go about doing it throughout the day. And prescriptive of course, is how to help people make better decisions, given the normative models and given what we know about descriptive errors.


“Decision scientists often cut up the world in terms of normative, descriptive, and prescriptive decision-making. Normative decision-making is how we ought to make decisions if we want to be consistent with principles of logic, or probability, or economic theory, a small set of axioms that seem quite reasonable. Descriptive decision-making is how people actually go about doing it throughout the day. And prescriptive of course, is how to help people make better decisions, given the normative models and given what we know about descriptive errors.”


Joe: Okay, that’s very helpful. And so it sounds like you’re saying the number one way you mentioned, not necessarily ranked, but of the three things, the first thing you mention to your students is that they need to learn about the normative models. That there are things out there that describe what a good decision and decision process would look like, consistent with logic and probability, with things like well-ordered and stable preferences. Okay.

Barbara: Yeah.

Joe: So, all right, that makes a ton of sense. And then number two, you mentioned, was learn about biases and noise.

Barbara: And debiasing, attempts to debias, training that’s effective, when it is and when it isn’t. And then learn about nudges and decision environments that influence what we do.

Joe: That one was number three. But when you talk about biases, are you talking about how we feel about other people? Or are you talking about cognitive biases? And if you are, can you say what you mean by that?

Barbara: Yeah, I’m talking about cognitive and motivational biases. I don’t mean it in the sense of being biased against someone, discriminating against someone, but rather biased in the sense that you have a systematic tendency to do something. And one of the most interesting cognitive biases is overconfidence. We tend to think that our skills are greater than they actually are, objectively speaking. We see ourselves quite often in a positive light. Self-serving biases.


“One of the most interesting cognitive biases is overconfidence.
We tend to think that our skills are greater than they actually are, objectively speaking.
We see ourselves quite often in a positive light.”


Joe: All right. And there are ways to recognize and resist, or debias, as you were saying, and we should be looking for those and learning about those.

Barbara: Yeah, there is a literature, not large, but a literature on debiasing. And there are some very important insights from that literature.

Joe: I will definitely follow up on that one. And then the third one was changes to the environment. And you mentioned nudges, but before we hit nudges, are there other kinds of changes to the environment? What are you referring to there?

Barbara: Well, an example in our forecasting tournaments would be: what’s the best social arrangement for making predictions? Should people be doing it by themselves, alone, with the experimenter aggregating a large set of independent forecasts? Or should people be talking and interacting in groups, sharing information, maybe motivating each other, maybe correcting each other’s mistakes? Should they work that way? So I do mean nudges, and I also mean that some working environments are better than others when we’re trying to accomplish a particular task, like making forecasts.
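
[Editor’s note: a small Python sketch of the “aggregate independent forecasts” arrangement Barbara describes. The forecasts and outcome below are made up; the point is only that the average of many independent probability judgments often earns a better (lower) Brier score than the typical individual forecast.]

    forecasts = [0.60, 0.75, 0.55, 0.80, 0.65]  # hypothetical probability forecasts
    outcome = 1  # the event in question happened

    def brier(p, outcome):
        """Squared error of a probability forecast; lower is better."""
        return (p - outcome) ** 2

    crowd = sum(forecasts) / len(forecasts)  # simple-average aggregation
    avg_individual = sum(brier(p, outcome) for p in forecasts) / len(forecasts)

    print("crowd forecast:", crowd)                     # ~0.67
    print("crowd Brier score:", brier(crowd, outcome))  # ~0.109
    print("average individual Brier:", avg_individual)  # ~0.118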

Joe: So a few things you mentioned there really deserve their own separate conversation. I first got to know about your work mostly through Superforecasting and the work you did with the Good Judgment Project and everything related to predictions. I’d love to follow up sometime and talk more about that, but I just want to say thank you. I can see why you’re so well-regarded as a teacher. Your explanation of these things is so clear and compelling and exciting. I want to learn more about them.

Barbara: Thank you, Joe.

Joe: You’re very welcome. Thank you for sharing your time with us. I like to ask two quick questions at the end of these podcasts. They’re maybe a little more fun, or less serious. I don’t know about serious. They’re pretty serious. So if you could pass down one tool, one skill to the next generation of decision makers, what would it be?

Barbara: I don’t have a single one, but I think those three things I told you earlier are the things to do. Learn how to do it right, learn how people do it wrong, and try to change the environment to make it easier to do it right.


“Learn how to do it right, learn how people do it wrong, and try to change
the environment to make it easier to do it right.”


Joe: Beautifully said, and based on our conversation, I’m not surprised that we were talking about having more than one thing that we cared about. No big surprise there. And the last one is: if there was a single book that you would recommend as like priority reading for our listeners who want to improve their own judgment and decision-making, is there a book that really jumps out to you as, this is the place to go first?

Barbara: Well, there is a book coming out in May 2021, and I am super excited about it. I think it’s phenomenal. It’s a book by Daniel Kahneman, Olivier Sibony, and Cass Sunstein, and it’s called Noise. It’s about decision noise, how pervasive it is and how destructive it can be. I’ve read early versions of it, and it is absolutely terrific.

Joe: Okay, great. Barbara, thank you so very much for coming on the show. We really appreciate it. That was a treat.

Barbara: You’re welcome, Joe. Thank you.
