Episode 026:

Tuning Out the Noise

with Dr. Cass Sunstein

September 13th, 2023


Episode description

Do judges impose harsher sentences on Friday afternoons? In this episode, Dr. Cass Sunstein, law professor, former administrator of the White House Office of Information and Regulatory Affairs, and co-author of Nudge: Improving Decisions about Health, Wealth, and Happiness, joins host Annie Duke, co-founder of the Alliance for Decision Education. They discuss the concept of noise: inconsistencies in human judgment that can arise even when people are presented with the same information. Annie and Cass talk about how we are more likely to believe things we hear repeatedly, even when they’re not true, and how “nudges” can positively influence our choices without us realizing it. Cass also sheds light on the surprising impact group polarization has on everyday decision-making, including the tendency of juries to impose harsher sentences collectively than any individual juror would choose alone. This has big implications for the group decisions we make every day at work, at home, and in our families!

 

Dr. Cass R. Sunstein is currently the Robert Walmsley University Professor at Harvard. He is the founder and director of the Program on Behavioral Economics and Public Policy at Harvard Law School.

In 2018, he received the Holberg Prize, sometimes described as the equivalent of the Nobel Prize for law and the humanities, from the government of Norway. In 2020, the World Health Organization appointed him as chair of its Technical Advisory Group on Behavioural Insights and Sciences for Health. From 2009 to 2012, he was administrator of the White House Office of Information and Regulatory Affairs, and after that, he served on the president’s Review Board on Intelligence and Communications Technologies and on the Pentagon’s Defense Innovation Board.

Cass has testified before congressional committees on many subjects, and he has advised officials at the United Nations, the European Commission, the World Bank, and many nations on issues of law and public policy. He serves as an adviser to the Behavioural Insights Team in the United Kingdom.

Cass is the author of hundreds of articles and dozens of books, including Nudge: Improving Decisions about Health, Wealth, and Happiness (with Richard H. Thaler, 2008), Simpler: The Future of Government (2013), The Ethics of Influence: Government in the Age of Behavioral Science (2015), #Republic: Divided Democracy in the Age of Social Media (2017), Impeachment: A Citizen’s Guide (2017), The Cost-Benefit Revolution (2018), On Freedom (2019), Conformity: The Power of Social Influences (2019), How Change Happens (2019), and Too Much Information: Understanding What You Don’t Want to Know (2020). He is now working on a variety of projects involving the regulatory state, “sludge” (defined to include paperwork and similar burdens), fake news, and freedom of speech.


Annie: I’m so excited to welcome my guest today—my good friend Cass Sunstein. He is the founder and director of the Program on Behavioral Economics and Public Policy at Harvard Law. From 2009 to 2012, Cass was the administrator of the White House Office of Information and Regulatory Affairs in the Obama administration.

In 2018, he received the Holberg Prize from the government of Norway, sometimes described as the equivalent of the Nobel Prize for law and the humanities. In 2020, he was appointed chair of the World Health Organization’s Advisory Group on Behavioral Insights and Sciences for Health. He has also advised internationally for the United Nations, the World Bank, and the Behavioral Insights team in the UK.

Cass is currently the Robert Walmsley University Professor at Harvard. He is also serving as an advisor to the Secretary of Homeland Security in the Biden administration. He is also the co-author of a number of hugely successful books, including Nudge: Improving Decisions about Health, Wealth, and Happiness, which he co-authored with Richard Thaler, and Noise: A Flaw in Human Judgment, which he co-authored with Daniel Kahneman and Olivier Sibony.

He is also the sole author of more books than I could possibly count. I think of Cass as the most productive human being I have ever met. He is the author of Sludge: What Stops Us from Getting Things Done and What to Do about It, #Republic: Divided Democracy in the Age of Social Media, and the book Too Much Information: Understanding What You Don’t Want to Know, among the many other books that he has written and published.

So Cass, I’m so excited to have you here.

Cass: That’s very nice. Thank you!

Annie: So let me just ask you first, because obviously that’s a hell of a CV, how many books have you published? Do you have a count?

Cass: Too many.

Annie: Too many. So I would love to hear sort of what’s the secret to how productive you are? I mean, not just in terms of your professional life, but also just in terms of the volume of what you’re able to produce.

Cass: Well, for writing, I think to myself, often, you can’t edit a blank page. And I have to put words on it to edit it, which means that I will motivate myself to put something on it. And even if what I put on it will ultimately be so edited that it will be unrecognizable as itself, at least I gave myself a start. I also try to be very critical of what I produce. Not at the first draft, not at the second draft. Late.

Annie: You know, I love that kind of thinking about strategy for decision-making in general. Right? Because it sounds like, if I had to kind of sum up what you just said, implied in that is that a lot of what stops people from writing is that they think what they’re putting down on the page has to be really good, that if you showed it to somebody else, they would read it and say, “This is awesome!” And that’s really daunting when you’re sort of aiming for something that at first bat would be perfection. And what you’re saying is there’s a lot of power in just getting started, particularly if you understand that you always have the option to throw it out. So it’s not really a big deal. It would relieve a lot of pressure. So how would you think about that relating to other types of decisions besides writing?

Cass: So my work over the years has sometimes been with businesses and sometimes in or with governments, and I often think, if we haven’t done anything within the last week that’s going to matter to the business or the government or the operation, what will we do in the next week that will actually make a difference?

And I’ll sometimes even set an artificial deadline: this has to be done by the end of the month. And so setting deadlines, even if they’re extremely artificial, can be helpful. In government, I once noticed that there was a birthday of a US Senator that happened at a time when I wanted a project to be finished, and I said, why don’t we do this in time for the senator’s birthday?

It actually became, within government, mildly controversial that I had suggested we should do this in time for a senator’s birthday. That sounded kind of inappropriate. But of course it was generally understood that it was just an artificial day, because we needed a day to get something done that was very important for people. And a deadline often concentrates the mind, even if people know that there’s nothing real about it.

Annie: You know what I love about that? It reminds me of the flip side of a lot of, like, Katy Milkman’s work, where she talks a lot about fresh starts as a way to really change habits and behaviors. But what’s interesting is that to your point about a senator’s birthday, well, isn’t that just kind of an arbitrary date that we’re going to finish something?

When she’s talking about how we have these natural fresh starts, like January 1st, that’s a natural fresh start, or the beginning of a week, those are also quite arbitrary. But they have a lot of psychological power. So it sounds like you’re sort of taking some of that and kind of flipping it around to how you figure out the end of a project, recognizing that there is power to these kinds of fresh starts, or to creating these deadlines, whether they’re real or imagined, I guess.

Cass: Right. So if there’s a business that is thinking hard about some new initiative—and it might be that people are floundering a little bit—it might say, why don’t we launch at least something that’s in the domain of the new project by March 15th or April 15th—Tax Day. Let’s do something to celebrate Tax Day. That can concentrate the mind and actually get something to happen.

And I will often think for myself in my writing projects, a draft has to be done by a certain date. And it might be completely ridiculous, but it can motivate and especially if you feel that at some point in your mind that it’s real.

Annie: I can see a decision strategy now, which is you figure out how long you would like the project to take, or you know, sort of about how long from now you’d like to launch something.

Go look up what that date is, then go on Google and ask what happened on this date at some point, right? Like you found a senator’s birthday, but, you know, maybe it’s the first day that pizza was served in America, and attach some sort of meaning to it that then makes the deadline feel more real. I think that we should now launch that as a decision strategy for people.

Cass: Completely. And the senator whose birthday it happened to be, the senator actually cared about the relevant thing. And so to say that would be a birthday present for that senator was not completely random. And the date was in the vicinity of what was needed for the thing to be as helpful as it could be. So this is always doable to find a date and to make it kind of funny.

If it sticks in the mind as if it has a sense, then it typically does. So if it’s a Christmas present or a birthday present or to celebrate July 4th for America, why don’t we . . . you and I have just discovered that one on the spot here. I suggest on July 4th a lot of things ought to happen that all listeners are engaged in and that might not happen until 2028 if we didn’t fix on July 4th as our deadline.

Annie: I love that, and I love the addition of—don’t just go pick out an arbitrary date and say what occurred on that date. But there has to be meaning, in the same way that January 1st is really a somewhat arbitrary day, but we think of it as the start of the New Year, which has a great deal of meaning for us.

So I love that with the senator’s birthday, it was something that they cared about, so you could frame it as a birthday present, which I think is great. Obviously you’ve spent quite a bit of time in government. You also have written on the topic of how things are communicated to people, not just from the standpoint of decision-making within the government, but also how people are going to use that information to understand it.

And something that you’ve talked about is processing fluency. You know, we’re living in a time when there’s so much information and it’s really just hard to parse it all. I would love it if you could explain to people what the concept of processing fluency is and how that relates to what’s happening, both from the standpoint of how we come across information just sort of on our own, but also what that means for policy and the way that policies are communicated to people, either good or bad.

Cass: In my four years in the White House, what I heard most from the private sector—I was in charge of overseeing regulation—was not you’re regulating too much, or you’re regulating stupidly, or you’re not regulating enough. It was, we don’t understand what you want us to do. That’s what I heard time and again, and that was because of the limited capacity to process what government was saying in this not extremely legible journal called the Federal Register. You could look it up. Some of what’s there isn’t as intelligible as the authors would hope. It’s just not clear enough—processing it is hard. And the illustration I got in government—I put a big poster in my office to remind myself—was of something called the food pyramid, which was the most visited website in the US government, I believe, at a certain point.

And it shows a person, gender unclear, who’s climbing to the top of a pyramid. The person has no shoes on, and when you go to the top of the pyramid, there’s a little white triangle at the top, as there is for pyramids, and then there are these stripes, and the stripes have foods associated with them. Now, if you look at the food pyramid and think, what are you supposed to eat? If you’re a teacher, a parent, or a kid, you might think, I don’t know, but I am wearing shoes, and maybe that’s a bad thing. And then you see that little white triangle at the top. You think, are you going to die if you eat? Or is that heaven? There’s no way to process this thing. So the behavioral scientists said the food pyramid was a catastrophe and a disaster and said, “Why don’t you do something that people can process?”

And so we replaced it with what you can find now pretty easily, which is the food plate, which basically says, make half your plate fruits and vegetables. And it’s everywhere. And it’s not perfect, but it’s really easy to process. It’s actionable. And so I had a little poster in my government office saying, “Plate not pyramid.”

And you can think of many things that companies say to people that are like the pyramid. They don’t know that because to them the pyramid is the plate. They wrote it, they understand that. But it’s like that gender-unclear person without shoes moving to the top of something and people think, what am I supposed to do?

And this can make people feel humiliated as well as not helped. And to have communications that require essentially nothing, or nothing hard, is often a blessing. There’s a book on website design called Don’t Make Me Think: A Common Sense Approach to Web Usability. And I think it’s terrific. The idea, and this is what Steve Jobs’s genius consisted in part of, is that everything on an Apple computer, when things are going well, is really easy to process.

Annie: So I want to sort of get into a little bit of the flip side of that. We’ve been thinking about how, when we’re really trying to convey really important information to people so that they can have better lives, we can make that easier for them to process, easier for them to understand.

Let’s take the flip side of that and say, one of the things about processing fluency is that the easier we can process something, the more fluently we take it in, right? And the more fluently we understand the message, the more true it feels to us. So that’s just kind of a fact of cognitive psychology.

So I think about certain politicians and the repetition, just the mere repetition of a message, true or not, in terms of the effect that has on people and how they process the truth of it. That’s part A, and I’d love to hear your thoughts on that. And then the second part is, what happens to nuance in the federal government? Because, as you know from having had to work in regulations and policy, a lot of things have to be quite nuanced, right? So there’s a flip side to this, which is, as we simplify the message more, because that’s what people will respond to, what happens to that nuance? So I’m thinking about just the mere repetition, true or not, and what effect that has on the political side and what the incentives are for politicians. And then secondly, how that then affects regulatory issues and policy.

Cass: So in football, Tom Brady is the greatest player who ever lived. And the reason Tom Brady is the greatest player who ever lived is he’s the greatest winner, and he performs best under pressure, and he’s the greatest player—Tom Brady.

In basketball, the greatest player who ever lived is LeBron James because he is unbelievably good at everything. He’s a great rebounder. He is a great passer. He is a great scorer, and he delivers in the clutch. And so LeBron James—the greatest basketball player who ever lived.

Okay. I just said one thing that’s true—Tom Brady is the greatest football player who’s ever lived—and one thing that’s false. Michael Jordan is actually the greatest basketball player. I’m just stipulating that, and yet people will tend to believe the false one. If people hear something several times that’s simple and easy to remember, they’re more likely to think it’s true even if it’s false.

Annie: Can I just ask you, does that include—so if you’re talking about LeBron James as the greatest player of all time and you say that many times, if someone also says in that cluster, “I don’t think LeBron James is the greatest player of all time,” is that also contributing to you believing that LeBron James is the greatest player of all time?

Cass: So there are two different effects. That’s great. Thank you. One is a repeated statement is more likely to be believed to be true, other things being equal, even though the fact that it is repeated has no relationship to whether it’s true. It’s also true that if I told you, “I don’t know if you heard that Rafael Nadal quit tennis yesterday?” I lied. That didn’t happen. I apologize for doing that because some part of your mind, and that of your listeners, is going to think for a while—did Nadal just retire? So if you say something that’s false and then immediately in real time say it’s false, people will tend to believe it’s true because it’s really easy to process. You heard it.

So there are two different phenomena, the truth bias and the illusory truth effect, as they’re sometimes called, where one is that the disclaimer doesn’t undermine the sticking power of the thing, and the other is that repetition produces a belief that something is true. I don’t know that the statement that the thing is false makes it more believed than if that statement weren’t there. The cool finding is that it doesn’t eliminate what the claim does in the mind, which is, Rafael retired. Did he? Even now I’m starting to think maybe Rafael retired, even though I lied. But I heard it.


“A repeated statement is more likely to be believed to be true, other things being equal, even though the fact that it is repeated has no relationship to whether it’s true.” – Dr. Cass Sunstein


Annie: So if we could then gather up all the listeners and somehow quiz them on whether Rafael Nadal retired, they would be more likely to be wishy-washy on saying yes or no to that.

Cass: I’m hopeful that if we asked them right now, immediately after this discussion, they would say, “No, he didn’t retire.”

Annie: But if we caught them in a few months.

Cass: Yeah. Or two weeks. Some number would think that with some probability he retired, who would not think with any probability he retired had they not heard that.

Annie: What happens when you have a politician who does not necessarily have a regard for the truth, who says these things over and over again? What effect does that then have on our republic?

Cass: I mean, it will be that if there’s a politician or a business leader who completely departs from the truth and says the same or a small number of falsehoods repeatedly, people will start to believe them. And that will be true, to some extent, even among people who are deeply skeptical of those people. Some part of their brain will think, well, maybe that happened, even though their, let’s say, more reflective self knows that was said by someone who is not a truth teller.

Annie: So the incentives are bad there because it’s an effective strategy to make people doubt what they believed—might otherwise believe to be true.

Cass: You know the movie called Gaslight, which is about someone who tried to get someone, his wife as it happened, to deny her sense of reality. And it kind of worked. It wasn’t just repeating the same falsehood over and over again, but it was in the vicinity of that. It was repeating things in practice that really weren’t happening and getting the wife to see those things as happening.

So her reality sense was undermined, and that has led to a situation where now the word gaslighting is approaching, or already in, common parlance. And that’s kind of what you’re saying. It is a big problem for people in individual lives who deal with liars, and also in business and in politics. Now, ultimately, it’s profoundly to be hoped that if a business is just lying all the time, or a politician is lying all the time, they can’t have enduring success. But the word enduring is pretty squishy. It may be they could have success over a non-trivial period.

Annie: Yeah, I mean, I just think about this issue of processing fluency and what makes a great decision is partly the information that you’re inputting into that decision process. And that includes things like, what policies are you going to support, what politicians are you going to vote for? And if we have this interference occurring where it becomes harder to figure out what’s true and what’s not true, in terms of driving my own desire for who I want to be in office, that may create a problem. So as you point out about the food pyramid, right, like simpler messages—the simpler the message, the more processing fluency we have, the easier it is for us to understand the message. And what goes along with that is it feels more true, right? So the faster we can process something, the more true it feels.

But what happens in both the law and the regulatory environment and in government in general is complicated and nuanced. I think that if you asked people, who would you want to be making decisions for you? Right? Because the government is obviously, in large part, making decisions for the populace. Do you want someone who thinks about things in very nuanced ways, who understands the complexities, that there’s not always a right answer, that you often have to balance different costs and benefits, who’s thoughtful in that way? Or do you want someone who is what Phil Tetlock would call a hedgehog? Just one big idea, pound it, it’s really simple actually, and they think about things in this very sort of simple, un-nuanced way.

I think that most people would say they would want the first, right? Like, running a country is incredibly complex, and you would like someone who really navigates and understands the nuance of that. But it seems like our cognition, regardless of what we think we would prefer, our cognition is going to prefer the second type of person.

So I’m just interested from your perspective within government, having, you know, served in the Obama White House, how do you think about navigating that particular issue?

Cass: It’s a fantastic question. So I’m thinking that, within government, you have a lot of people who are drawn to an acknowledgment of complexity. So you might have someone who’s running the National Economic Council—very important White House office—and they’re unlikely to say, “The course is easy, let’s do it.” They’re more likely to say, “There are four options and here are the costs and benefits of each, and here’s why the third or the fourth is better. What do you think?”

That’s much more likely. So in government it’s very rare, I’ve found, that someone will say, “It’s clear what to do, and it’s this.” Unless it’s a unique kind of problem or an unusual kind of problem where the answer is self-evident. I was in a meeting just very recently where there were four options described, and none was obviously right.

That’s kind of the coin of the realm within at least the executive branch of government. I think behind closed doors, within Congress, it’s similar. Sometimes what a member of Congress will say is, “There are three options, and I think the third is the best. But we’ll get killed politically unless we do one of the first two. So the third is off the table. Which of the first two do you like better?” And they will be going back and forth between the political and the substantive. Then there are campaigns, where to give a sense of the difficulty of some problem is often not a winner. So whatever your political preference, you want someone who doesn’t lead with, “This one is really, really hard.” You’d rather that be the third sentence, and the first sentence is, “Here’s the best approach. I have a plan.”

Annie: Sticking in this world, some of these issues, like processing fluency, are somewhat apart from the ways in which we’re being divided into these tribes, though I think those two things are related. Because once you simplify a message, you’re creating less room for the center. I think that a lot of people intuitively feel like, okay, well, the way to solve that is for us to talk to each other more. Right? Like, let’s all get in groups of different political persuasions and let’s talk to each other more.

So I know you’ve done some fantastic work on group decision-making, kind of thinking about exactly that problem. And I would love to hear your thoughts on whether that’s a reasonable strategy, whether it’s going to help the situation.

Cass: What we know is if you get groups of like-minded people together, they tend to end up thinking a more extreme version of what they thought before they started to talk.

This is true in politics, in business, in juries, and on courts. So we have a crazy amount of data suggesting if you get people who tend to think climate change isn’t real, if they talk to each other, after half an hour, they’ll think climate change is ridiculously unreal. How could anyone think it’s real?

You have a group of people who tend, on the left, to think something, let’s say, about race. After they talk to each other, they’re going to think some more extreme version of that and they’re going to be more confident and more unified. So that’s how it works.


“If you get groups of like-minded people together, they tend to end up thinking a more extreme version of what they thought before they started to talk.” – Dr. Cass Sunstein


Annie: So I just want to make sure that I understand. Let’s say that on a scale of one to ten, in terms of how strong or how radical our opinion was on a given topic, let’s say climate change, we were all people who were a five on the sort of pro-climate-change side. If we all get together and we talk, it pushes us to the more extreme end of that opinion, even though none of us came in necessarily that extreme?

Cass: If you’re a group of people who tend to think, when asked, is climate change a serious problem, they are a three—if they talk to each other, they’ll probably end up on average at two. If you have a group of people who tend to think that climate change is an eight, after they talk to each other, they’re likely to turn up at nine or ten.

But if people are severe in their judgment, they end up more severe in groups. If they’re lenient, they end up more lenient. If people are a five on climate change, they’re dead center. The prediction is they won’t move at all. They’ll stay at five.

Annie: Is that true? So let me ask you a question. Is that true if you have all fives get together? What happens if you throw a five in with the eights?

Cass: It’s a little complicated. So if you throw the five people in with the eight, the median is the best predictor of where the group’s going to come out. So if you have, let’s say, 300 fives and 300 eights, the median is in between. I was a literature major, but even I know that.

So they’ll end up being north of the average because, on average, they’re concerned about climate. If you have a group of people on, let’s say, immigration, where a number of people hate it and a number of people dislike it, the group, after they talk, is going to dislike it intensely, and eight will be the tendency of the group.

If you have a bunch of people in business who think that we should start a new product line, that’s the median view. If people talk to each other, and maybe participants in business who are listening to this have seen this. Business people tend to end up more confident, more unified, and extreme. Let’s launch that darn product, if that’s the inclination before they started to talk.

With respect to jury judgments, we find that often juries end up as upset, in terms of their intended punishment, as the most upset member was before people started to talk. Pause over that one, if you would. The question is how much to punish corporate wrongdoing. In twenty-seven percent of a very large number of juries, the jury ended up at least as high as the most punitive member.

Annie: Oh, okay, wait. I just want to pause on that because I think that’s a really interesting finding. So if you query them beforehand, and the most punitive member, let’s say, wants to award $10 million, and everybody else is below that, when they then go and deliberate as a group, you might suspect, oh, well, they’re going to move toward the 10 million but not all the way there. Right? I think that our intuition, correct me if I’m wrong, is that the people who are more moderate will have a moderating effect on the group. But what you find actually is that they end up giving, at a minimum, the harshest punishment and possibly even more. So they very often will go beyond that.

Cass: We were inclined, this is Danny Kahneman and David Schkade and I, to predict that the median is where they would end up. That was our hypothesis. It wasn’t true. They ended up higher than the median, basically always, and in 27% of the cases, at least as high as the highest. So the basic finding is that punishment judgments tend to end up more extreme and more severe. And that’s a robust finding with respect to monetary awards. You know, the question is, what happens if you mix people?

Annie: Yeah, so you have, let’s say, someone who’s a two in terms of concern about climate mixed with someone who’s an eight.

Cass: Now, the median is the best predictor. But it gets a little complicated, because if the twos think, I’m the kind of person who just isn’t worried about climate change, and these other people are crazy zealots, and if the eights think, I’m the kind of person who’s concerned about climate change, and the twos are in the pocket of industry or basically don’t know anything about science, then they’re not going to listen to each other and they won’t move at all.

So if you have three people who tend to think, you know, Israel is right on everything with respect to what’s happening in the Middle East, and three people who think Israel is wrong about everything that’s happening there, then they’re not going to end up in the middle or moving toward the north of the six-person median.

It’s just that the three are going to stick. So if people have a clear conception of their identity, and if they’re very confident that where they are is right, then no movement is what we’ll observe. But that’s relatively rare, even in online interactions where people tend to be pretty heated. The phenomenon we’re describing, which is called group polarization, is that groups end up at more extreme points in line with their pre-deliberation tendencies. That is the phenomenon that happens online.

Annie: So if I had to wrap that up in terms of, like, online environments, we’re mostly talking to people who hold our own views, so that’s going to make us more extreme. And then if we’re very entrenched in our point of view and we’re interacting with people from the other side who are equally entrenched, that also will create polarization. So it sounds like it’s maybe overly hopeful to think that, well, if we’re all interacting with each other, then we’ll all moderate.

Cass: Yeah, there are things that can be done. So, this is a really large question for democracies. It’s also a really large question for companies, and it’s a really large question for non-democracies.

So, arguably, you’ve put your finger on the largest question—what can be done? One thing you can do is to create institutions that are insulated from some of this noise. So if you have, let’s say, a department of technical fixes to the problem of highway safety, they will try to figure out what makes people get crashed into less and what makes highways safer.

And so if people are yelling all around, the department of technical fixes to highway safety is just figuring it out. They’re democratically accountable in the sense that if they screw up, that’s going to be a problem. But they’re not going on Twitter and seeing what people think about their actions.

They’re trying to figure out what’s true. And I like that, I confess—the idea of people who know what they’re doing having a significant role in solving problems. There’s also data from Jim Fishkin at Stanford, which suggests if you get people together under conditions in which they’re provided with material that is helpful, and moderators, and asked to try to figure something out together under circumstances of trust and cooperation, then you don’t get the kind of group polarization that we find in, let’s say, more ordinary circumstances.

So the better angels of our nature can be appealed to, and companies that do well often are like polarization machines. And leaders can say something like, “You know, I really want us to figure out what’s right. And if I detect us marching in a direction toward the original consensus, I’m going to get really nervous.” Those sorts of leaders can help bad things not to happen.

I can say that in the White House, I saw President Obama, whether you like him or not so much, be very, very good at counteracting a group marching toward a more extreme position in line with its pre-deliberation tendencies. So he’d find someone in the room who thought something different, he thought, from where the group was marching, even if that person was the least important person in the room.

And he’d say, “What do you think?” That person would both feel, my gosh, the president wants to hear from me, and also the president really wants to hear from me, meaning he doesn’t want me just to echo where we’re going. And President Obama was also very good at being quiet himself at the beginning of many meetings, because he knew that if he said what he thought there was a risk everyone would say, “You’re a genius. I agree with you.”


“I like that, I confess—the idea of people who know what they’re doing having a significant role in solving problems.” – Dr. Cass Sunstein


Annie: So I love that as kind of a segue into talking about a topic that you talk quite a bit about in Noise, which is decision hygiene. Can you talk a little bit about what decision hygiene is and how it might actually help some of the issues that we’ve been discussing?

Cass: Okay, great. So let’s notice that when people make bad judgments, it’s partly because of biases. So it might be that you’re really optimistic and you think that the project you’re working on is going to take two weeks, when it’s actually going to take two months. That’s everyone, basically.

Annie: Yes. The planning fallacy.

Cass: Well, let’s suppose you are learning from something that recently happened, where something either went spectacularly well or went sour, and the availability heuristic, as it’s called, leads to a bias in your judgment. And there are a bunch of biases.

Those are systematic departures from truth, and individual lives, businesses, and governments are affected by biases. Then noise is variability in judgment, where you might think on a happy Monday morning, let’s go for it, and on a tired Friday afternoon, let’s not. And whether the decision arises on Monday morning or Friday afternoon ends up being determinative.

And this can be true for doctors deciding whether to test patients, judges whether to give people stiff sentences, or anyone deciding anything. And variability can screw us up terribly. We could overshoot sometimes and be too cautious other times. And systematic error is also, of course, a terrible thing.

So what are we going to do? Decision hygiene is a way of counteracting both noise and bias. So one way to deal with a problem with variability and bias is to just have guidelines. So when I was in the government overseeing regulation, we had something called a regulatory impact checklist. You can find it online. It’s a little over a page and it basically asks seven or so questions, and you’re supposed to check them, whether you’re doing something involving immigration, or highway safety, or occupational safety, or air pollution. Check each of those things.

And it’s a way of preventing bias or noise. Some of the things to check, by the way, are: did we consider alternatives? Not the fanciest thing in the world. But it can really help avoid errors. Another thing you can do in the nature of decision hygiene is just to make sure you aggregate the views of lots of people, not just rely on the views of one or two.

So typically groups of, let’s say seven, will be much less noisy than individuals and, at least under reasonable conditions, groups will be less biased than individuals, at least if they’re not talking to each other and screwing each other up. So if you aggregate the views of seven individuals, it may be that the biases will be canceled or the bias on the part of one or two will just be drowned out by the non-bias on the part of five. Not always. So groups do much better, if you take the individual judgments and aggregate them, in counteracting noise than they do bias.
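To make the noise-reduction point concrete, here is a minimal simulation sketch. It is purely illustrative: the true value, the noise level, and the number of trials are assumptions, and the group size of seven just echoes Cass’s example. It compares the spread of individual judgments with the spread of averages across independent groups of seven, and the comments note what averaging does and does not fix.

    import random
    import statistics

    random.seed(0)

    TRUE_VALUE = 100      # the quantity everyone is trying to judge (assumed for illustration)
    NOISE_SD = 20         # spread of individual judgments around the truth (assumed)
    GROUP_SIZE = 7        # echoing Cass's example of aggregating seven independent views
    TRIALS = 10_000

    def judgment():
        """One noisy individual judgment of the true value."""
        return random.gauss(TRUE_VALUE, NOISE_SD)

    individuals = [judgment() for _ in range(TRIALS)]
    group_averages = [
        statistics.mean(judgment() for _ in range(GROUP_SIZE))
        for _ in range(TRIALS)
    ]

    # Averaging independent judgments shrinks noise by roughly the square root of GROUP_SIZE;
    # it does nothing for a bias shared by everyone, which would shift all judgments equally.
    print(f"Individual spread (SD): {statistics.stdev(individuals):.1f}")
    print(f"Average-of-{GROUP_SIZE} spread (SD): {statistics.stdev(group_averages):.1f}")

Run as written, the second spread comes out close to the first divided by the square root of seven, which is the sense in which groups of about seven are “much less noisy than individuals,” provided the judgments really are collected independently.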


“Decision hygiene is a way of counteracting both noise and bias. So one way to deal with a problem with variability and bias is to just have guidelines . . . another thing you can do . . . is just to make sure you aggregate the views of lots of people, not just rely on the views of one or two.” – Dr. Cass Sunstein


Annie: So just to clarify, because I’m thinking about your comment about Obama keeping quiet. When you say individually, it’s not sit in a group and have everybody announce one at a time, but to actually collect those judgments individually, and maybe asynchronously, in a way that the other people don’t know what those judgments are, which is a systematized way to do what Obama was doing. If I say something, it’s going to be a problem, so I need to get people’s judgments without them hearing what I’m thinking. But across a group, you could do that for every single person if you collect the judgments in the appropriate way.

Cass: In my government job, I was confirmed by the Senate on a certain date and became the boss of a small organization, and I noticed that, having worked there, I was part of the team.

And then once I was the boss, my jokes became really funny, amazing even. So if I said something about what we should do, I noticed people would say, “That’s great.” And I knew I didn’t know what I was talking about, and so when I stated what I thought we should do, I meant, really, here’s a possibility, what do you all think? But if I stated it as, this is a good plan, isn’t it, they’d think, that’s his view, and it would be pointless to argue with me.

So I decided I would not say anything and instead, as you say, elicit the views of maybe 12 people who were terrific and make sure they didn’t feel constrained by the views of what the other 11 had said, either because I would ask them privately or because I would say, “Let’s be free, everyone, to say what they think.”

That is essential to making the wisdom of crowds work, that you have decision hygiene, meaning you elicit the views of people independently. Guidelines can be really helpful, and so can checklists and aggregating independent views. Also, not relying on holistic intuitions, but thinking hard about what the ingredients of those intuitions are.

What are the seven things that would have to happen in order for something to be a good idea, and what’s the probability of each of them? That is a little less simple than aggregating independent judgments and using guidelines. But it’s really important in decision-making in organizations of all kinds, where we might have a holistic judgment, you know, let’s do this, my gut tells me, let’s do this. But whether this is a good idea depends on nine things, and the probability of each of those coming to fruition might be 50%. And then you run the numbers.
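To see why “running the numbers” is so sobering, here is a quick sketch using Cass’s illustrative figures only: nine components, each around a 50% chance, treated as independent for the sake of a back-of-the-envelope check.

    # Cass's illustrative figures: nine components, each with roughly a 50% chance
    # of coming to fruition, treated as independent for this rough check.
    p_component = 0.5
    n_components = 9

    p_all = p_component ** n_components
    print(f"Chance that all {n_components} components come through: {p_all:.3%}")
    # Prints roughly 0.195%, which is why the holistic "my gut says go" can be far too optimistic.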

Annie: That’s pretty bad. So let me try to reflect that back. Often we’re making broad judgments, right? Holistic judgments where there are components that are implied in the judgment. So when you say that you think something is a good idea, there are components implied in that judgment. Where decision-making can really go wrong is that we tend not to make those components explicit or think about them. They’re just sort of implied.

And so you’re saying, I think, a couple of things. One is if you could identify what those components are in advance, you could create a checklist, which would be really helpful. If you identify the components in advance, you could use those to create a decision rubric where people have to make judgments about the component parts before they’re allowed to judge the whole. And then you can stress test that to make sure that then the whole makes sense given the judgments of the component parts, which you’ve now made explicit.

And then the last piece is that if you’ve done all of that, you can get people’s independent views of those component parts. Then you’re going to be able to see when, let’s say, Cass is super smart and somebody else on Cass’s team is really smart and they’re very far apart on their judgment of some component part. We’re now allowed to see that, because there isn’t sway in the group, and that then is going to allow us to come to something that’s more accurate, even if sometimes maybe it’s taking an average. Is that sort of a fair description of the process that you’re thinking about?

Cass: It’s much better than what I said and completely consistent with it.

Annie: Okay. So to close out, I feel like I would be remiss if I didn’t ask you a very specific question. So, you’re obviously very famous for co-authoring a classic, Nudge, which just had its final edition with Richard Thaler. And the idea of nudging is obviously that you can architect a choice in a certain way that will push people to one decision or another.

So, one of the classic examples of that would be opt in or opt out. One of the main findings is that if you want to increase organ donation, have it be an opt out, where you’re automatically enrolled and have to opt out, versus an opt in, where you’re automatically not enrolled, so you have to opt in. That’s been true for, for example, retirement plans: when you hire someone, they’re automatically enrolled in a retirement plan and can only opt out, not the reverse, and that will increase people’s retirement savings. Okay. So that’s just nudges in general for people who aren’t familiar with that. But this is something I know that you’ve thought about in terms of, for example, governmental entities. And I think that one of the big criticisms of nudges, and I’d love to hear your thoughts on addressing this, is that, well, isn’t that a nanny state?

Aren’t you deciding what’s good for people or not good for people? So the frame that you’re presenting it to them in, isn’t that pushing them in a way that creates a nanny state? You know, we’re free, we should be free to choose our own path. So I know that that’s an incredibly common critique of nudging, and I would just love to hear your thoughts on responding to that critique.

Cass: Okay, great. So, think of a nudge as a GPS device that helps you get where you want to go, that allows you both to override its judgment about how, and allows you to specify where you want to go. So a nudge might be a label that says this has shrimp and peanuts in it. So if you’re allergic to shrimp, as I am, or peanuts, as I am not, you are nudged not to do that, not to eat that. But if you want to, go for it.

It might be that a nudge consists of a fuel economy label. So when you buy a car, it will tell you something about the fuel economy of the car. There’s a big label that does that. If you don’t care about fuel economy because you’re, you know, very wealthy or something, you don’t have to pay attention to the label. But most people have at least some interest in knowing whether their car has good gas mileage and they’re nudged to consider that. So the nanny state, I think, would not be found in labels about fuel economy or in calorie labels or labels that tell you something about shrimp and peanuts in your products. The whole point of nudging is to preserve freedom of choice.

So if you are automatically enrolled in a retirement plan, you’re allowed to opt out if you want the money now. There has to be some design of a retirement plan. If it’s opt in, then you are nudged not to participate. You’re not foreclosed from participating. But because the status quo is non-participation, and we know the status quo has some power, people are less likely to participate than if they’re automatically enrolled. So the only options really are to automatically enroll people or not, or to tell people, “You can’t work here unless you make an affirmative choice.” That’s often called mandated choice, and it will sometimes annoy people who don’t want to make a choice. So it’s paternalistic in getting them to make a choice they don’t want to make.

Which design you choose is a nice question. People have pretty well converged on automatic enrollment on the grounds that it helps most people subject to opt out. It’s hard to think of, I think, a nudge that is objectionable on nanny state grounds so long as freedom of choice is fully preserved. And the whole point of nudges as opposed to mandates is to say it’s ultimately up to you.


“Think of a nudge as a GPS device that helps you get where you want to go, that allows you both to override its judgment about how, and allows you to specify where you want to go . . . the whole point of nudging is to preserve freedom of choice.” – Dr. Cass Sunstein


Annie: So let me try to think about what you just said, that in a lot of ways it’s a red herring, because I think it assumes there’s such a thing as a default-less choice. But everything does have a default by definition, right? There’s no way to not either be enrolled or not be enrolled to start. There’s no way to neither have to opt in nor opt out to start. You have to choose one or the other. So it’s a red herring to say, “Oh no, this creates a nanny state,” because any way that you do it is already nudging someone somewhere. So we ought to be thinking about where we probably want to nudge people. And we know that countries are happier with more retirement savings, for example.

Cass: I think so. Now it might be that particular nudges would be objectionable. So if it were assumed that everyone was a member of a particular political party, unless they opted out, that would violate neutrality. But there’s an architecture for a retirement plan or a healthcare plan. It might be that to automatically enroll people in healthcare plans isn’t a good idea. Then it’s a good idea to say, which one do you want? But that’s itself an architecture. So the ones on the menu are going to be chosen by someone, I profoundly hope, who has the interest of people at heart. And if those people aren’t in charge, then we’ve got to get new people in charge.

Annie: And, regardless, people still have the choice to say no. Right? Which is the difference between a nudge and a mandate as you pointed out. So you might be able to sling that criticism at mandates, but not at nudges because nudges are designed for people to be able to choose for themselves.

Cass: I mean, if you’re worried about the nanny state, you might want to be on a campaign to prohibit compulsory state seat belt laws or compulsory motorcycle helmet laws. I don’t object to compulsory seat belt and compulsory motorcycle helmet laws, but those are much more nannyish than nudges, which just say, “It’s your choice, go your own way if you’d like. Eat the peanuts.”

Annie: So just to take it from the labeling standpoint, you could make it a mandate that no food can have peanuts in it. So that would be a little more nanny state, or you can just label that it has peanuts in it. And that would be on the nudge side.

Cass: Completely. And if you buy things in the United States, there are Nutrition Facts labels, so you can find out about the ingredients; they warn people about the content of what they’re purchasing. (Is that nanny state? I don’t think so.) Automatic enrollment in retirement plans is a little more aggressive. You know, there are people who think it’s not a good idea, but it has strong bipartisan support; Republicans and Democrats agree, and the data is very celebratory. But if you don’t want to be in our plan, the whole point of a nudge is to say, opt out if you want.

Annie: So, you know, obviously this is The Decision Education Podcast. I know you’re familiar with the Alliance for Decision Education, so the question that I have for you is what impact on society do you think there will be when the Alliance succeeds in its mission to ensure Decision Education is a part of every student’s learning experience in K-12?

Cass: Well, I think the opportunities are boundless and it will be that students, young people, will be making much better decisions. They won’t be making reckless decisions. They won’t be making unduly cautious decisions. They’ll be figuring out what’s best for them and the people they care about.

Annie: Thank you for that. That’s what we hope for. So what decision-making tool, idea, or strategy would you want to pass down to the next generation of decision makers? You have to choose one.

Cass: Think about costs and benefits. It’s not something that there’s a banner associated with, but it’s very beautiful. If you think about the costs and benefits of your decision, and just ask yourself to think about that, it is a calming thing. It lowers blood pressure and it increases the likelihood you’re going to do the right thing.

Annie: So you would say any checklist should include that as one of the items on the checklist, maybe the number one item.

Cass: So the question is, should you take a vacation this summer in some place that you have the resources to get to? What are the benefits and what are the costs? Should you take the summer job? Try to figure out what the benefits and costs of that job are. Should you go to Washington, DC in the next year? Benefits: it’s the nation’s capital, it’s great. Costs: it might be hard to get there. Costs: there might be something else that would be more fun.


“If you think about the costs and benefits of your decision, and just ask yourself to think about that, it is a calming thing. It lowers blood pressure and it increases the likelihood you’re going to do the right thing.” – Dr. Cass Sunstein


Annie: It’s August and it’s impossibly humid, the worst weather on the planet. That’s a cost. I love that answer, because that would probably be my number one answer too: what are the costs and benefits?

If listeners want to go online and learn more about your work or follow you on social media, where should they start?

Cass: They might read the book Nudge. They might read the book Too Much Information, which is very close to my heart. I appear to be on Twitter and everyone’s welcome to follow me on Twitter.

Annie: And what’s your Twitter handle?

Cass: I think it’s @CassSunstein. I think it’s a mysterious one.

Annie: Well, thank you so much, Cass. I just want to say for any books or articles that we’ve mentioned today, any of the research that we’ve mentioned today, everyone can check out the show notes on the Alliance site, where you will also find a transcript of today’s conversation.

This has been so much fun, Cass. Thank you so much for coming on and getting super wonky with me to talk about decision-making, particularly as it applies to some of the world that you live in, you know, regulation, which you make incredibly exciting. I always love our time together and thank you so much for spending time with us.

Cass: Well, thanks to you. A great pleasure.

