Episode 022:

Deciding, Fast and Slow

with Dr. Daniel Kahneman

July 19th, 2023


Episode description

What have we misunderstood about decision-making? In this episode, Dr. Daniel Kahneman, Nobel Prize winner and author of Thinking, Fast and Slow, joins host Annie Duke, co-founder of the Alliance for Decision Education, to discuss common misconceptions about decision-making and “System 1” and “System 2” thinking. Together, they discuss the significance of evaluating individual components before making judgments and uncover the surprising parallels between human cognition and modern Artificial Intelligence. Daniel shares a compelling analogy between perception and cognition, illustrating how cognitive shortcuts can lead us astray. Additionally, he sheds light on why new restaurants continue to open in seemingly “doomed” locations and the valuable lessons we can learn from studying the paths of those who went before us.

Dr. Daniel Kahneman changed the way the world looks at decision-making when he released his global best-seller Thinking, Fast and Slow, introducing us all to the concept and impact of cognitive biases. In 2002, he won the Nobel Prize in economic sciences, awarded for applying psychological insights to economic theory, particularly in the areas of judgment and decision-making under uncertainty.

Daniel, born in Israel, is the Professor of Psychology and Public Affairs Emeritus at the Princeton School of Public and International Affairs; the Eugene Higgins Professor of Psychology Emeritus at Princeton University; and a fellow of the Center for Rationality at the Hebrew University in Jerusalem. His many awards include the Presidential Medal of Freedom and the American Psychological Association Lifetime Achievement Award.

In 2021, he released Noise: A Flaw in Human Judgment, introducing the idea that human decisions are impacted by both cognitive biases and noise, which creates variability in judgments that should be identical.

One of the most influential psychologists of our time, Daniel is a leading expert in the fields of psychology, decision-making, and behavioral economics.

Daniel is also a member of the Advisory Council at the Alliance for Decision Education. See Daniel Kahneman’s publications, awards, and more here.

Annie: I’m so excited to welcome our guest today, psychologist Dr. Daniel Kahneman. Danny is a Nobel laureate in the economic sciences, awarded for applying psychological insights to economic theory, and is often considered, along with his research partner Amos Tversky, to be a founding father of the field of behavioral economics.

His 2011 global bestseller, Thinking, Fast and Slow, changed the way the world looks at decision-making, introducing the concept of cognitive biases into the public conversation. In 2021, he released Noise: A Flaw in Human Judgment with co-authors Cass Sunstein and Olivier Sibony. This book explores how decisions are impacted by noise, which is the chance variability in human judgments that should be identical.

Danny is currently a professor of psychology and public affairs emeritus at the Woodrow Wilson School, the Eugene Higgins Professor of Psychology Emeritus at Princeton University, and a fellow of the Center for Rationality at the Hebrew University in Jerusalem. In addition to his Nobel Prize in Economics, he received the lifetime contribution award of the American Psychological Association in 2007 and the Presidential Medal of Freedom in 2013.

He is also, I’m happy to say, on our Advisory Council here at the Alliance for Decision Education. Welcome, Danny. So excited to have you here.

Daniel: My pleasure.

Annie: So we’ve known each other for a while. You were obviously incredibly helpful with my last book. I feel that I’m really lucky to be able to bounce ideas off of you considering what a giant in the field you are. I would love to actually start kind of at the beginning because I feel like, for you, there’s this origin story in terms of your line of work that goes back to your time in the Israeli army. I’d just love to hear a little bit about that kind of origin story that leads to your life’s work.

Daniel: Well, okay. That’s a good beginning. So we are talking 1955, which is a really long time ago. So I was a lieutenant in the Israeli army. I had completed one year in the infantry. I had a BA in psychology and mathematics, and I was assigned to the psychology branch.

There were no psychologists actually in the psychology branch. My boss, who was brilliant, was a chemist. I mean, everybody was improvising like mad. And, after a few months, I was assigned the task of setting up an interview for combat soldiers. And I was given a book to read, which turns out to be a real classic.

The book is by Paul Meehl, and it’s Clinical Versus Statistical Prediction. It had come out the year before, and I read that book, and obviously it left a big impression. I mean, in the sense of you have to look for objectivity and you should not trust your intuitions too much.

All that, I assimilated. So I constructed an interview that was inspired by my reading of Meehl. So I defined six attributes of the soldier. You know, it included things like responsibility, sociability. I had something that was called masculine pride. Today, it would be more problematic to use that phrase, but then it was pretty obvious what it meant.

So I constructed that interview and for every one of the six traits, there was a set of questions. It wasn’t just a questionnaire; they were not supposed to read the question. It was supposed to be conversational. But the idea was that they would ask questions relevant to one of the six attributes and score that attribute and then go on to the next set of questions.

So that was the plan. Now, all these people were selected for high IQ, so they were all very, very bright and they had been trained to do interviews. But the interview that they used to conduct was—they were trying to get an idea or a sense of how good a soldier the person would be. So they were, they were all about intuition.

That was a standard, unstructured interview, where you spend some time with somebody, you try to form an impression, and then you give a final score. So, my way of doing things they found very irritating. I mean, more than irritating, almost humiliating, because, you know, they had their clinical intuition to follow.

And here I was telling them what to do basically. And one of them I remember told me, “You are turning us into robots.” And 60 years later, almost, this sounds familiar. And I sensed that they were almost, it was almost a mutiny. I mean, they were really annoyed with me. And so I offered a compromise.

The compromise was: you do things my way, and I was quite specific. You try to be reliable. The setup and the set of questions take care of validity, so validity is taken care of. You do your reliable job. But if you want to use your clinical intuition, here is how you do it: you do the interview, you complete the six ratings, and then you close your eyes and make a judgment. How good a soldier will that person be? And they did. That was the compromise. And a few months later, the various criterion measures from the army started coming in, and there was a surprise. Clearly the interview was much better than the previous unstructured interview, but the big surprise was that the intuitive judgment at the end was just as valid as the average of the six ratings, and it added new content. So actually the formula is in use, I think, until this day; the Israeli army has played around with it, but not very much. I think the interview is still being used in the Israeli army 68 years later.

So the final score is the average of the six ratings from the structured interview and the intuitive rating at the end. And that really set me up. I mean, it took a very long time for me to start working with Amos Tversky, it was almost 20 years later, but my basic skepticism about intuition, and the sense that you can’t dismiss intuition altogether—that stayed with me. And I’ll end with just one remark. The last book that I did with two colleagues, in 2021, with Olivier Sibony and Cass Sunstein, is called Noise. And we recommend procedures for how to make decisions, how to evaluate, how to make judgments. And the procedure is basically the same procedure as in the Israeli army.

That was very satisfying actually, to go back all the way. We haven’t learned any better way of doing interviews or, I think, of making structured decisions than to break up the problem, evaluate the aspects independently, and then, and only then, use your intuition.


“Break up the problem, evaluate the aspects independently, and then, and only then, use your intuition.” – Dr. Daniel Kahneman
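
For readers who want to see the arithmetic behind this procedure, here is a minimal Python sketch of one plausible reading of the scoring Daniel describes: six attribute ratings, each given right after its own block of factual questions, plus a delayed global intuitive rating, all averaged into a final score. The values, the scale, and the equal weighting are illustrative assumptions, not details from the Israeli army interview.

# Six attribute ratings (e.g., responsibility, sociability), each scored
# immediately after its own block of factual questions. Values are made up.
attribute_ratings = [4, 3, 5, 4, 2, 4]

# The global intuitive rating, given only after the six ratings are complete
# ("close your eyes and make a judgment").
intuitive_rating = 4

# One plausible reading of "the average of the six ratings and the intuitive
# rating at the end": a simple average over all seven numbers.
final_score = (sum(attribute_ratings) + intuitive_rating) / (len(attribute_ratings) + 1)
print(round(final_score, 2))  # 3.71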


Annie: So I’d like to, I want to dig into that a little bit more, just thinking about this idea of: break the problem down into its component parts, make sure you’re scoring those and then, and only then, you know, add your intuition onto it and then that adds signal. So can you explain why doing it in the reverse order would not work in the same way?

Daniel: Well, I mean, intuition, the problem with the intuitive judgment is that it’s too fast and we form intuitions very quickly, and in the unstructured interview, and there are data on this, the interviewers tend to form an intuition very quickly and they spend the rest of the interview justifying their first impression, so basically the rest of the interview adds very little information.

So doing it the other way means that you would have a massive primacy effect, an effect of the first impression that people form. And the idea of delaying intuition is intended to achieve precisely the opposite, so that by the time you allow yourself to think globally and to form a general impression, by that time you have assimilated a lot of information and that information has been collected so that the different items of information, the different scores, are as independent from each other as possible. Because you’ve asked factual questions, you form judgments on the basis of factual questions about a well-specified attribute. So that’s the combination, and really we didn’t find anything much better when we thought of generalizing.

Annie: So I’d like your thoughts on this because, you know, I think that a lot of people who work with people on strategic decision-making really go back to this model. And one of the pushbacks I know that I get when I’m asking people, for example, to make a forecast, which would be using this type of model, right? Let’s make it explicit. Let’s think about what you’re forecasting, what’s implied in the decision that you’re making, or make these judgments on a scale of zero to seven: how strong do you think the person is in this category? One of the bits of pushback that I get is, well, how could I know? As if the fact that you’re asking people to make a relatively precise judgment, like give me a point forecast on a scale of zero to seven, means there is now almost a right or wrong answer that you’re asking them to state explicitly.

So I get this particularly when I’m asking people to make forecasts, right? What do you think the probability is that X will occur in the future as an important component of a decision? So we could think about, for example, if you’re making an investment. You would have different probabilities that you might want to forecast of things occurring. And people think that, you know, if it turns out that the thing occurs that somehow they were wrong if they gave it a low probability, or people will view them as wrong if they gave it a high probability and it doesn’t occur. They’re afraid people are going to view that as wrong.

We see this in some political forecasting, for example. And what I try to say to people, and I’d like to understand your thinking about this in terms of this structure is—but it’s already included in the intuition. So if we think about defining the attributes of a soldier, if you’re going to make a good intuitive judgment, those things would already be included in the intuition. So that’s sort of how I try to sell it is, well, but then we can make it explicit. It’s going to help you to discipline your intuition. So first of all, I’d just love to get your thoughts on that. And then I have a follow up question after that.

Daniel: Well, there is something specific to judgments of probability. They’re really hard.

Annie: Yes.

Daniel: Because there is no obvious mapping, but when you are making judgments of quality, it’s much easier. So, you know, you can ask silly questions and people will have an answer. So, you know, if I ask you how tall a building in New York would have to be to be as tall as that man is intelligent, you could give an answer. So we can use intensity scales very easily and with a sense of confidence. We can match across scales; we can do this with probability, too, it’s just hard because we know there is no substance there. So that’s why some people try to say it’s all frequency, to make it more solid. But if you are a Bayesian, if you are what you’re supposed to be according to modern statistics, then judging probability is really very difficult. So my answer, by the way, would be that wherever possible I would avoid judgments of probability. I would say, well, this is an idea that I’m having now, I didn’t develop it. But with respect to investment, I would rather ask how promising the investment is on a scale from one to seven than how likely it is to succeed on a scale from one to seven. Because taking an adjective and turning it into an intensity judgment is something that people find much easier to do than judging probability.

Annie: So, I guess the question that I have is: included in deciding that you want to invest in something would be how likely it is to succeed, right? But people do seem to not want to make these things explicit. There seems to be some comfort in leaving it implicit. Something is echoing for me from what you said—the people saying, “You’re turning us into robots”—they’re still the ones making the judgment, and yet they say you’re turning us into robots, on the one hand. On the other hand, I think people find that there’s a comfort, a safety, in leaving it implicit, in leaving it to your intuition. And I’d love to just get your thoughts on what that divide is between explicit and implicit.

Daniel: Well, I think that what is happening very often is that people know where they’re going, that they know the conclusion, and that happens to us all the time. That we know what we want to do or we know the conclusion, and then we start constructing arguments that fit that conclusion. And in general, and you know, we have a lot of evidence for that, people construct the evidence to fit their beliefs. Well, we tend to think that people feel that they have beliefs because they have reasons for their beliefs. Well, that psychologically is not the case. People have beliefs and then they invent reasons very often. And the reasons tend to be all consistent with the beliefs. So you don’t want to separate how much you want that thing from the probability that it will succeed. You’ve already decided implicitly you want to go there.


“We tend to think that people feel that they have beliefs because they have reasons for their beliefs. Well, that psychologically is not the case. People have beliefs and then they invent reasons . . . and the reasons tend to be all consistent with the beliefs.” – Dr. Daniel Kahneman


Annie: Right.

Daniel: And then you are generating responses, all of which support your decision. Being explicit is not sufficient. You’ve got to ensure that you make the partial judgments on the separate attributes before you reach the conclusion. When you reach the conclusion, the judgment that you bring to justify it is not useful to anybody.

Annie: Right. Okay, so let’s now think about that. So we have this divide between the explicit, judging the component parts, before you get to the intuitive decision-making. We could use an old term from psychology and say that before you get to the gestalt, you have to break it down into its building blocks. So as we think about where this ends up going in terms of your work on cognitive bias: obviously this worked really well in the Israeli army, but you then get a PhD in psychology and start embarking on research. From today’s world, we look back and we say, well, everybody knows that there’s cognitive bias, right? I mean, I think that people are pretty familiar with the idea that we aren’t perfectly rational actors and…

Daniel: Well, you know, if you go back—and now you’re asking about a period in my life that was almost 20 years later—in 1969, I started working with Amos Tversky, and we started working on judgment. And actually our first question was about professional psychologists and how they make decisions about what size of sample they want to run. That sounds very modern, but that’s exactly how we began. And we found systematic mistakes, just a lot of systematic mistakes. And both of us had taught statistics. When you teach statistics, there are concepts that are just very difficult. The concept of regression to the mean is very unfamiliar. It’s very difficult to think about, and standard errors are really not an easy concept to convey. So there are concepts that, you know, people just don’t have. The law of large numbers, you know, they just don’t have it built into their repertoire of intuition. So we started thinking of errors which are systematic and are cognitive in origin.

When we started that work, there was a theory of error and it was completely different. The dominant theory of error was motivational. It was motivated belief, and psychoanalysis was in the background, and psychoanalysis explained a lot of errors by unconscious motives and so on. That was in the background. So on that background, what we changed was we said cognition is very much like perception, and in perception there are illusions, but we know that basically perception works well. The illusions are a side effect of the way the mind works in order to achieve the marvelous accuracy that it usually achieves.

This is the way of thinking that we were going to apply to cognition. So the biases are equivalent to illusions: they are side effects of what the mind does in order to reach conclusions that, in general, are quite reasonable. But here we were misunderstood in a very big way. People thought that we were saying that everybody’s completely irrational all the time, and we really didn’t think so. We really were taking very seriously the idea that cognition is like perception, and nobody says that perception is no good.


“Cognition is very much like perception, and in perception there are illusions . . . the illusions are a side effect of the way the mind works in order to achieve the marvelous accuracy that it usually achieves.” – Dr. Daniel Kahneman


Annie: Even though we can see a visual illusion, have it explained to us, we’ll still see it. So let’s take one of the central ideas from your work on heuristics and biases, something like the availability heuristic, and think about how it works pretty well but then it doesn’t, you know, which is kind of what you’re saying about our visual systems. They’re mostly really good, but then I can trick them. But that doesn’t mean that we don’t see well in general. So just to clarify for our audience, who may not be familiar with the availability heuristic: essentially, it’s that when something is easily recalled, in other words, it’s very available to your mind, we tend to judge it as more frequent.

So one of the classic examples of that would be to think about people’s judgments of how likely you are to die by a horse, like falling off a horse, versus how likely you are to die in a shark attack. And what people will do is vastly overestimate the probability of getting attacked or hurt or killed by a shark and really underestimate the probability of dying because a horse kicks you or because you fall off a horse. And the reason for that is that we have things like Shark Week on the Discovery Channel, which makes shark attacks very vivid. We have movies like Jaws, which make us feel like this is a very high danger when actually it’s quite rare, but we rarely hear about somebody falling off a horse, for example.

A way that we can think about that, you know, in modern terms is that things like terrorist attacks, for example, get a lot of coverage on the news. And in the wake of a terrorist attack, people will start to vastly overestimate the danger that they’re actually in, or how much exposure they have to being attacked by terrorists, as an example. So this is a heuristic in the sense that it’s a rule of thumb: if it’s easy to recall, then it’s probably more frequent. It’s not a bad rule of thumb, except it fails in certain cases, and in those cases it ends up creating the equivalent of a cognitive illusion. So how would you apply that to—we’re usually pretty good, but then we have an issue?

Daniel: Well, you know, if you want to compare cognition and perception, cognition is really quite poor. We are very much weaker in the way we think than in the way we see. And it’s fairly clear if you think about it: perception is something that we share with other animals. It’s had, you know, many millions of years to develop, and to develop beautifully, because we’re not all that much better in our perception than cats or other mammals. In fact, you know, we’re inferior to birds in many ways. Cognition has had much less time to develop, and it’s a work in progress in terms of evolution. And we’re seeing, when we compare people now to AI, how limited human cognition really is.

Annie: Going back to an error like the availability heuristic, if I’m evolving in a very small social group, availability heuristic may not be too bad for sort of figuring out the frequency of certain things that are occurring so that I can make those judgments.

When I’m trying to make judgments that are global across, say, the whole world or the whole population of humans, things that I have encountered frequently or recently or that are easy for me to recall, maybe now that becomes an error. So I’d like to think about like—a lot of these things when you talk about heuristics, they’re actually not terrible rules of thumb. It’s that rules of thumb can’t cover every single case, and it’s those cases where it tends to fall apart.

Daniel: Well, you know, I really agree entirely with your last statement, and we said that in our paper, except people didn’t pay attention. Heuristics are not, their function is not to create biases.

Annie: Right.

Daniel: People think of them that way. The function is to create useful thoughts quickly. That’s the availability heuristic: you know, it looks familiar, it feels familiar, so I have probably run into it recently, or I have probably run into things like that many times. And that’s a very good inference, going from an event in our memory to a fact about the real world, about frequency and probability.

So definitely it should work quite well. But under predictable conditions, they lead to errors. And the predictable conditions are when, for example, something is highly available, not because it really is frequent, but because something has happened to you recently that made a big impression on you. So if you see, you know, a car burning by the side of the road, then, for a while, this will really distort your judgments about fires and about cars and about the safety of the road. A lot of things are going to be distorted because that event will be very highly available and many things are going to evoke memories of it. So that’s an example. You can tell, in principle, in advance when a heuristic will go bad: it will go bad when availability, or a sense of familiarity, is evoked not by frequency but by something else.

Annie: So is it fair going back to the idea of the visual system—and if we look at visual illusions, they’re actually taking advantage of those shortcuts that our visual system makes—so it seems like that maps almost directly onto what you’re saying about cognitive bias.

Daniel: I think the mapping is direct. I mean, we actually use the example of a cue that is called aerial perspective, which is basically how blurry the contours of objects are. So when you look at several mountain ranges, one behind the other, they get progressively more blurry as distance increases. So blur is a cue to distance, but now you can predict that there will be a bias: on exceptionally clear days, objects will appear nearer than they really are, and on exceptionally foggy days, everything will look more distant than it really is. And that follows directly from the use of a cue that, in general, is a very good cue, which is how blurry contours are.

Annie: I’m hearing you say, look, there’s a straw man version of our work, and the straw man is: we’re bad all the time, we’re so irrational, how can we even get down the stairs? It’s amazing because we’re just terrible at decision-making. And I have heard people attack that straw man. In other words, in trying to argue against some of this, they say, but look, we wouldn’t have survived for this long if we’re so bad at judgments and decision-making. And I can hear that almost as if you’re a little irked, because you’re obviously so intentional and so careful about saying, here’s what we really mean, and if you’re going to argue against it, could you please use the steel man as opposed to the straw man? Is that fair that I’m reading that correctly?

Daniel: Yeah, yeah. I mean, this is inevitable. When you get to be, you know, as old as I am, you see this, though you don’t have to be as old as I am to see it. Things are moving, you know, it’s not as if ideas stay where they are forever. Things are moving, the field is moving, new ideas come, and there is an urge, a need, to get rid of the old ideas. So what started out as an exaggerated version of what we said in some ways becomes a caricature of what we said, and then you can dismiss the caricature and go on with the business of what you really want to do, which is something else.


“It’s not as if ideas stay where they are forever. Things are moving, the field is moving, new ideas come, and there is an urge, a need, to get rid of the old ideas.” – Dr. Daniel Kahneman


Annie: Right, right. Well, that makes sense. You seem to be very sanguine about it, which is good, I think. No, not too…

Daniel: It would not help me if I thought that theories would stay forever.

Annie: Right. Obviously, one of the biggest ideas, I think, that came out of Thinking, Fast and Slow, and I know this is one of the biggest ideas that came out of it because people who are not psychologists will just say to me, oh, System 1 and System 2… So first of all, can you explain what the idea is behind these two systems and the way that they interact with each other? And then what I’d love to hear after that is about the difference between what you say and the version of what you say that people either argue against or internalize for their own use, right? What are people getting wrong about it?

Daniel: Oh, a lot. The distinction between System 1 and System 2 is not ours.

Annie: Yeah.

Daniel: So people have been thinking along those lines for 40 years. So the basic idea, which I think is completely obvious, is that there are two basically different ways of reaching ideas. Some ideas come to you and other ideas you have to produce. So some ideas come to you effortlessly and others you’ve got to construct. And then it’s mental work and everything that goes with it. You can’t do many at a time. So that basic distinction—the experiential distinction, just the subjective impression, I think—is very clear. And I just don’t believe anybody who denies that they hold that distinction. Now we hardened that basic distinction into, you know, different types of thinking. And then I made the decision, the choice, in Thinking, Fast and Slow, and it’s a choice very few others in the field had made. The notions of System 1 and System 2, the words actually, were given by Stanovich and West a long, long time before I applied them. But they gave up on that, because the word system, as it is often used and as we use it, implies an agent, and there is a real prohibition in psychology that you internalize early as a psychologist. You don’t explain the behavior of a person by the behavior of little persons who are inside that person’s head. That’s called a homunculus, and it’s a no-no. You’re just not supposed to do it.

In Thinking, Fast and Slow, I deliberately broke that taboo and I said, yes, there are agents in your mind. It’s very good to think about it in that way. And there is an interesting piece of reasoning behind it, I think. Not that I believe there are really two agents in the mind. I don’t. Even the word system is not applicable; it’s more of a continuum and, you know, it’s much vaguer than it sounds. But it turns out that people’s minds are specialized for thinking about agents. We are surrounded by agents with intentions, with personalities, and we understand agents. Understanding categories or types—we are much poorer at it. You know, we’re very poor at constructing lists, but we’re very good at remembering routes, you know, routes through a terrain, which is why, you know, we have that technique for remembering things where you place them along a path. So the idea of the two systems really is a deliberate attempt to create in the mind of readers stereotypes of two types of mental operations—one group called System 1, the other, System 2, and, instead of giving a description of two types of thinking, we are saying, well, there are two agents, and each agent produces a certain type of thinking.

So it really is an equivalent way of saying something else. I think psychologically it’s much easier for readers to think about agents, and, frankly, it’s easier for me to think about agents than about types of thinking. But I’ve been sort of rapped on the knuckles and scolded for doing that by people who feel that it’s a no-no and a broken taboo.

Annie: Well, I mean, I like something that I’m hearing echoing through this whole conversation, which is that, you know, you’re obviously very much a scientist at heart. But you also understand the value in communicating to people in a way that they can process and understand and actually apply. So even if you go back to what we talked about with forecasts: instead of asking somebody how likely do you think this is to succeed, you ask how promising do you think it is, which is a different way to ask a similar question, but one that’s going to allow the person to feel like they can answer it. You know, judging intensities, relative intensities: instead of saying, how intelligent is this person, you ask how tall a building would have to be to be as tall as this person is intelligent. So there are different ways to cut at things, and obviously speaking to another scientist you might use different language, but if you’re trying to communicate so that people in general can understand a concept and apply it and get the gist of what you’re saying, you have to take a practical approach to that.

Daniel: Yes. And I should say, by the way, in psychological research, I found the two concepts very useful, still. So I’m currently writing a paper with some collaborators, and it really is about System 1 and System 2, and we use those terms and they help us think. And we know that there are no systems, there are no agents in the mind and so on, but it really is a useful way to think.

Annie: Right. Okay, so I’m going to throw out a couple of misconceptions I think that I’ve heard and I’d love for you to address them. So here’s one: System 1 is biased. System 2 is not biased.

Daniel: False.

Annie: So can you explain why? I’m sure you’ve heard this before yourself.

Daniel: Now, System 2 is often described as the rational system and System 1 as the irrational system. That’s nonsense. System 2 is a system that reaches conclusions more slowly and with effort. It involves not only thinking, but also activities like self-control, which are not thinking at all. It’s the effortful part, and System 1 is intuitive, and fast, and automatic, and so on. Now, System 1 has biases. They come from the types of thinking that it’s specialized in doing, like thinking in terms of similarity, matching intensities, being influenced by availability. There are certain characteristics of what we call System 1 thinking. But System 2 is not perfect. It’s just slow. There are certain activities, like formal reasoning, that you can approximate in System 1, but you can do them perfectly only in System 2, and even in System 2 many people do not reason perfectly. Many people are susceptible to systematic cognitive errors in their reasoning when they’re attempting to reason properly, and their memory is limited and their capacity is limited. So people make many mistakes when they’re in System 2 mode, and they get a lot of things right when they’re in System 1 mode. In fact, they’re getting almost everything right, but occasionally they make mistakes and occasionally they make big mistakes.

Annie: So I think this actually ties in nicely to another thing that people say: that System 1 is not just irrational, but not actually particularly useful. Like, if we were, you know, super beings, then we could just do everything in System 2, and wouldn’t it be nice if we could just toss System 1. Which I think is also a misconception and not really what we would ever want. If you could address that, that would be great.

Daniel: Well, you know, there are many activities that we don’t want to think about that we want to be able to carry out without effort. So we breathe without thinking about it. We walk without thinking about it much. We drive mostly without being aware of what we’re doing. Those things are automatic. And automatic is the definition of System 1. And so skills are automatic, some of our skilled activities are automatic. And we want them that way because that minimizes effort.

So the idea is that if we were running on System 2 most of the time, we would be so slow. I mean, just imagine thinking as you walk. Impossible. I mean, clearly that has to be relegated, like breathing, to an automatic system. Now, what is true is that there are situations, sometimes recognizable in advance, where you are prone to illusions, where System 1 is prone to mistakes, and there, in general, you’d better slow down and think about it. And that’s where System 2 can be superior. But by and large, we are not trying to tell people, use your System 2 all the time. I don’t think it’s feasible. I don’t think it’s desirable.


“There are many activities that we don’t want to think about that we want to be able to carry out without effort. So we breathe without thinking about it. We walk without thinking about it much. We drive mostly without being aware of what we’re doing. Those things are automatic. And automatic is the definition of System 1.” – Dr. Daniel Kahneman


Annie: Well, I think we can also come up with examples where, to your point, like, look, if you can get into System 2 on figuring out your sample size for an experiment, that would be really good. And please do that, because that’s going to be far superior to System 1, where you’re just trying to intuit, you know, I think this is how many people we need. But I think that the reverse is true as well, that there’s situations where we actually wouldn’t, we really don’t want you to go into System 2. Now, I think that you could walk fine, you’d be a little bit slow, if you were doing that as a System 2 exercise.

Daniel: I mean, if you walk in a minefield, you want to be in System 2.

Annie: But here’s an example where I think it would be a disaster to say no, like use System 2. You’re driving and a deer jumps out in front of you. In that case, it would be horrible if you said, let me think, let me write down my options and think about what I’m supposed to do. One hopes that you’re making those decisions in System 1.

Daniel: In general, you’ll become aware of what you did after you do it.

Annie: Right.

Daniel: So anybody who had an accident or a near accident will tell you that.

Annie: Yeah. So, I mean, I think that that’s interesting, because I do think that people sort of hear that as System 2 is superior, when it’s very easy to come up with examples where System 1 would be the superior system to be in rather than System 2.

Daniel: I mean in general, System 1 is superior, and it’s superior because that’s where all the skills are.

Annie: Right.

Daniel: And it’s sort of interesting that when we’re talking about artificial intelligence that all of us are talking about these days, they’re much more similar to System 1 than to System 2.

Annie: Yeah.

Daniel: They cannot do logical reasoning in a proper way, in the way that System 2 does. And they are associative in very much the way that System 1 is. And it’s hard to call them intuitive because they don’t have intuitions.


“Artificial intelligence . . . they’re much more similar to System 1 than to System 2 . . . they cannot do logical reasoning in a proper way, in the way that System 2 does . . . They are associative in very much the way that System 1 is.” – Dr. Daniel Kahneman


Annie: Right.

Daniel: But the mode of operation is more similar to System 1 than to System 2.

Annie: So let me just, I just want to make sure I understand the skill portion. We assume that when Rafael Nadal is playing a tennis point, and he’s doing it better than any of us could ever imagine doing it, that he is playing that point in System 1. Is that fair? Maybe at moments?

Daniel: Almost, almost entirely. But occasionally he is thinking strategically. I mean, he will be saying, well, I haven’t aimed to the left in a while. This may be time to change. You know, he seems to be getting tired. So there are a lot of thoughts that are strategic and are System 2 and are much more deliberate, but most of the actions that he takes, he takes completely without awareness.

Annie: Yeah. So I just wanted to sort of clarify that, right? That System 1 is incredibly valuable, right? And that a lot of the really high performance things that we witness are actually people mainly existing in System 1.

Another really important concept that you introduced into the popular vernacular is the inside view and the outside view. So if you could explain the inside view and the outside view, what the difference is between the two, maybe even thinking a little bit about your original work with the soldiers, or what you were working on with the statisticians.

Daniel: Actually, the origin story of the distinction between inside and the outside view is, I was engaged with a group of people, which included the chairman of the education department, in writing a textbook for decision-making for high school. And we were meeting every week and things were going well. And then on a certain day, I asked everybody, “How long do you think it will take us to have a book that we can hand in to the department of education?” And everybody, including me, wrote a number on a piece of paper. And all our numbers were between a year and a half and two and a half years.

And then I asked the chairman of the department—I didn’t know what was going to happen, but I asked him—“Can you think of groups like us, you know, who tried to develop a curriculum where none existed before?” And he said, “Yes, I can, you know, that’s my field.” And I said, “Can you imagine or visualize those groups at the state of progress that we are at?” He said, “Yes, I can.” I said, “Well? How long did it take them?” And you know, in my story, he blushed, but he certainly took a long time to answer. And he said, “You know, first of all, I realize that not all of them succeeded.”

Annie: Okay. So you have a whole group who don’t finish.

Daniel: Maybe 40% of them.

Annie: Oh, my gosh.

Daniel: And of those who finished, I can’t think of any that took less than seven years, and I can’t think of any that took more than 10. And so that’s the outside view. This is taking a case and not looking at the details of the case, but looking at the statistical distribution of cases like it, and it’s immediately obvious which is superior.

Annie: Right.

Daniel: I mean, you know, why would we expect to be different? Actually, I asked him that question. I asked him, “Compared to those groups that you’re thinking about, how strong are we?” And he thought for a while, and then he said, “We’re below average, but not by much.”

Annie: But not by much. Okay.

Daniel: Not by much. That tells you the story. And the startling thing is that he had said something like two years. So what came naturally to him was the standard way that we think about how long a project will take. You know, you have a scenario and then you make adjustments. And it turns out that’s not the right way of doing it, because you under-adjust and you’re anchored on your plan.

And so the best way to start thinking about those things is to think of cases like it and look at the statistical distribution. So that’s the outside and inside view. Now the inside view is the default mode. The outside view is something that you’ve got to teach.
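
As a minimal sketch of the contrast Daniel is drawing (with invented numbers, not data from the curriculum project), the outside view replaces your plan-anchored estimate with the statistical distribution of comparable cases:

import statistics

# Completion times, in years, for comparable past projects: an invented
# reference class standing in for "groups like us."
reference_class_years = [7.5, 8.0, 9.0, 7.0, 10.0, 8.5]
abandoned_fraction = 0.4   # Daniel's recollection: roughly 40% never finished

inside_view_years = 2.0    # the group's own scenario-based guess

outside_view_years = statistics.median(reference_class_years)
print(f"Inside view:  about {inside_view_years} years")
print(f"Outside view: about {outside_view_years} years, "
      f"and roughly {abandoned_fraction:.0%} of similar projects never finished")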

Annie: Yeah.

Daniel: I was not aware of the distinction until that day.

Annie: Until that moment.

Daniel: Until that moment.

Annie: So it sounds like to me, if we’re mapping this onto System 1 and System 2, that the inside view can exist in both systems, but the outside view could only exist in System 2. Is that fair? Like…

Daniel: Oh, yeah.

Annie: Yeah. I just want to just for the audience say, you said something really important. Like when you were talking to the department head, you said how long has it taken groups like ours? And I think that that sentence is so incredibly important, this idea of groups like ours in a situation similar to the one that we’re considering, which is getting to the heart of a concept in Bayesian probability and Bayesian thinking, which is called a reference class.

So if you could just quickly say that, cause I think that phrase is so important for people to get to. Why is it that we need to be thinking so clearly about reference classes? What is a reference class? How do we kind of think about, how do we relate ourselves to that?

Daniel: Well, if you take the example of an author who writes a book and how long that author will take, you have several possible reference classes, and it’s not obvious what you should do. You know, what kind of a book is it, and how long do those books generally take? And if that author is a serious author, how long does he or she take to complete a book? So there is, quite often, more than one reference class, and the advice is to consider them all but weigh them by their relevance. And you get a sense that some of them are more relevant than others.

The outside view is now recommended formally by the American Planning Association. So this has been recognized. There is a professor at Oxford who has made it his life’s mission to collect planning errors in the public sector: the kinds of overruns that are standard, you know, in infrastructure projects, and the amount of exaggeration. We call that the planning fallacy, the kind of error that follows from taking the inside view. You’re way too optimistic because you’re anchored on your plan. Now, this is different in different types of operations. So contractors who plan for constructing, you know, a home, they won’t be far off, because they have a lot of experience. But when they talk of building a unique project, then that’s where you get your huge overruns.

Annie: Right.

Daniel: So it’s—sometimes you can predict your own optimism. You can predict where you are running a risk of getting it wrong.

Annie: So the fewer repetitions you have of something, the more likely that you’re going to…

Daniel: The more novel the thing is, the less the experience, then it’s harder to find a good reference class. And that’s where people will focus on the unique aspect of their project and go drastically wrong.

Annie: But we can, we can think about something being “unique to you” versus “unique”. So if you’re a very experienced builder who’s now building a museum, which you’ve never built before, you can still go and look at how long in general have other museum projects taken.

Daniel: Yeah.

Annie: In order to discipline your point of view.

Daniel: You need a reason to say, I’m different from the others.

Annie: Right.

Daniel: You’d better know why you think that you are different. Because in general, you know, you see that in some places. Like you get a place where there are seven restaurants in a row. If you live in a place long enough, then you know there have been seven restaurants and none of them lasts; they each last about a year. Now, there’s something wrong with that location. It’s not a good location for restaurants, and yet you get optimists who think that they have, you know, a recipe for…

Annie: Some special skill.

Daniel: For pizza—a special skill. They’re different from all the others. So the base rate is not relevant to them, and they probably are making a mistake.

Annie: Do you think that part of this is playing into overconfidence in the sense that my judgment of where I sit on that continuum, right, like how much control I have over the aleatory aspect of our lives, right? That this, it’s just…

Daniel: Oh, yeah.

Annie: I’m just, I have myself in the wrong place. And so we’re much more likely to think that there’s something special about us, which is what I’m hearing with the restaurant situation. Seven restaurants have failed before, but I’ve got myself in the wrong place.

Daniel: Yeah. But you know, they didn’t have my recipe for tomato sauce or something.

Annie: Right. Now, obviously you’re so well-known for your work on cognitive bias and systematic predictable error that people make. But you wrote a whole book recently with Cass Sunstein and Olivier Sibony on noise. And so first of all, can you just sort of try to explain to people the difference between bias and noise and why you think that noise is an underappreciated source of error?

Daniel: Well, you know, we think of judgment as measurement. That’s the easiest way to think about judgment. That is, when you make a judgment, you assign an object to a value on a scale. Now, when you make measurements, you make errors. Every measurement has some error. So if you make a lot of measurements, and it’s easiest to think of measurements of the same thing, you’re not going to find the same result every time. Now, here is the distinction: the average of the errors that you make—that’s your bias. So you may overestimate more than you underestimate. The average error is your bias. But your errors vary from time to time, from trial to trial, from measurement to measurement. The variability of your errors—that’s noise. The variability of your judgment of the same thing. That’s noise. So we define noise as unwanted variability in the judgments of people, like many underwriters looking at the same risk, many physicians looking at the same case, many judges looking at the same defendant. And then how much will they disagree?

And the answer is: very likely, they will disagree a lot. And, almost certainly, they will disagree more than you think. So we have a phrase: wherever there is judgment, there is noise, and there is more of it than you think. And so far we haven’t found an exception to that.


“The average of the errors that you make—that’s your bias . . . the variability of your errors—that’s noise. The variability of your judgment of the same thing. That’s noise.” – Dr. Daniel Kahneman
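
To make the measurement analogy concrete, here is a small Python sketch with invented judgment values: bias is the average of the errors, and noise is the variability of the errors across repeated judgments of the same case.

import statistics

true_value = 100                          # the correct answer for one case
judgments = [112, 95, 108, 120, 99, 110]  # repeated judgments of that same case (made up)

errors = [j - true_value for j in judgments]

bias = statistics.mean(errors)    # average error: systematic over- or under-estimation
noise = statistics.stdev(errors)  # unwanted variability of the errors around that average

print(f"bias  = {bias:.1f}")   # positive here: overestimation on average
print(f"noise = {noise:.1f}")  # the judgments also scatter from trial to trial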


Annie: So I think that people intuitively can get the idea that if, you know, you have a hanging judge and a softie, right? I think they can understand that different judges looking at the same facts of a case could come up with different judgments, and that that would make the system noisy. I think what’s less intuitive to people, because people like to see themselves as consistent, is the idea that the same judge looking at the exact same facts of the case on different occasions might come up with a different judgment.

Daniel: I mean, judges have good days and bad days, and on bad days they’re more severe. But I would like to add that the most interesting part of noise is one that’s not differences in severity and not differences between good and bad days. Different judges have different tastes. So you can imagine a judge who is lenient with young people and another who is particularly severe. One judge who hates crimes against old people. The other one is focused on fraud or on violence. They have different patterns. We call that pattern noise. And each judge is consistently different from the others. And, in fact, what we don’t realize is that we don’t live in the same world as other people. Each one of us lives in a different world.

And that’s hard to accept, because I know that I see the world the way it is. So if you are a reasonable person, I think you see the world the way I do, but you don’t. And you think I should see the world the way you do because we are each trapped into our own world. And it turns out that’s the main lesson I think from the Noise book, is that we live in worlds that are much more different from each other than we can imagine, because each of us is trapped in our own little world.


“What we don’t realize is that we don’t live in the same world as other people. Each one of us lives in a different world.” – Dr. Daniel Kahneman


Annie: Yeah. You know, I remember having a conversation with my kids once. We were looking at the side of a bus and there was the color orange. And I tried to explain to them, we both agree that we’re supposed to call that orange, but we have no idea if we’re seeing the same color. We just know that whatever that thing is, that we’ve agreed that it’s orange. But we actually don’t know if we’re seeing the same color. Because I was trying to sort of explain this concept to them, and they seemed to get that. But I think that that’s a distressing idea for most people, right? This idea that we have shared experience and, as you said, that we all see the world as it is. And if I see the world as it is, then you must see it that way, too. And I think that is very difficult for people.

So I’m going to ask you a question that I think I know the answer to just based on this kind of thinking about noise and thinking about the history of your career. If you were to offer a decision-making tool or strategy that you would want, say, the next generation of decision makers to employ, that would really change the outcomes of their life in a significant way, what would it be?

Daniel: Well, you know, if the problem is sufficiently important to slow down and solve it, then there are some ways of approaching problems that are better than others. And there is a whole list of things that you had better do, because they will make it more likely that your judgment is in the ballpark.

There’s no guarantee, but you can affect the likelihood of that. Most of the time, just go ahead and follow your intuition because you’re experienced. Every one of us is an expert in our own world, and most of the things we do in that world are okay. Occasionally, when you face a problem where the stakes are high or you are likely to make a mistake, slow down and think. That’s the basic advice.


“Most of the time, just go ahead and follow your intuition because you’re experienced. Every one of us is an expert in our own world, and most of the things we do in that world are okay. Occasionally, when you face a problem where the stakes are high or you are likely to make a mistake, slow down and think. That’s the basic advice.” – Dr. Daniel Kahneman


Annie: Okay. So just coming back to the Alliance for Decision Education, let’s imagine a world where we actually managed to teach what you just said to kids in K through 12. Like what, what impact do you think that that would have on, on their lives? If we could just get them thinking about, for example, your statisticians, right? Do the calculation. Don’t go with your intuition in certain cases.

Daniel: I think the fundamental thing that you would be teaching is slowing down. And I think there is research on that. I touched on it with Sendhil Mullainathan. I think there was research on underprivileged children in a training program in Chicago. And basically what I remember from that is that teaching people to slow down when they are getting excited is a very important skill. We sometimes know when we’re making a mistake, and we go ahead anyway. But the skill is: when you’re about to make a mistake, slow down. Don’t send angry emails, probably not before tomorrow morning.

Annie: Yeah.

Daniel: So slowing down has a lot of value.

Annie: Give some time and space and start teaching that skill early.

Daniel: I would think so. And then, when you slow down, then that makes people curious. What is the best way to think about it? And then you can try to fill the time that people give themselves with good ways to reach an adequate solution. But the basic skill is slowing down.

Annie: It’s slowing down. I love that. All right, so if listeners wanted to learn more about your work, where is a good place for them to start?

Daniel: Well, Thinking, Fast and Slow. Noise is a much harder book, but it’s a much more practical book. I mean, Noise has advice about what to do, which Thinking, Fast and Slow doesn’t at all. So pick and choose. Both books are long, and not all the material is relevant. So pick and choose. But those are the two books.

Annie: Great. And then separate from your own books, what book would you recommend to listeners who are looking to improve their decision-making?

Daniel: Well, right now I would recommend a book by Annie Duke called Quit.

Annie: No. You’re not allowed to recommend my book, but I do appreciate that.

Daniel: I know. I would.

Annie: Well, thank you. All right. Separate and apart from that book, what would you say might be the most valuable in the books that you’ve read?

Daniel: I would look for anything by Adam Grant. Yeah. I would look for anything by Atul Gawande. This is because I think he’s just great. And his view of things. So I would go by people, and there are some people, I think Chip Heath, you know, there are people in that field who are just very, very good.

Then there is a whole approach, which is quite different from mine. And that’s the approach of people who analyze experts, people who are very good at what they do. So Gary Klein is the guru of that group. He doesn’t like anything … I mean, he is really opposed to my point of view, but I sort of like what he does. He is really good at figuring out what expertise is in a given domain, whereas, you know, I’m good at sort of saying, well, this is pseudo-expertise, it’s not the right…

Annie: Well, we didn’t have time to get to it, but just in terms of Gary Klein, you famously have done an adversarial collaboration with Gary Klein, and with others. We didn’t get a chance to get to it. But I do want to just shout out to that, because I think that the way that you think about engaging with people who have different points of view in a structured way, to actually sit down and, like, write papers with them and try to figure out where it is that your views are differing. I think it’s such a great exemplar of great decision-making, right? How do you become open-minded? How do you listen to what the other side is saying? How do you get outside yourself to be able to poke holes in your own arguments? And obviously Gary Klein is one of the people that you’ve had one of those collaborations with, so I really recommend that people go and look at that work as well.

Sadly we didn’t have time to get to it. I wish we did because we would’ve talked about the happiness work, which I think is so interesting. But we’ll make sure to put in the show notes your work on money and happiness, which is another adversarial collaboration. And, on that point, any books, articles, et cetera that were mentioned today. Just please everybody, check out the show notes on the Alliance site, where you’ll also find a transcript of today’s conversation. So, Danny, as always, such a pleasure. I feel so lucky every time I get to talk to you. I get to learn from you every time we have a conversation.

Daniel: I enjoy them all.
