Thinking 101
with Dr. Woo-kyoung Ahn
Episode description
Why do projects often take so much longer than expected? Join us in conversation with Dr. Woo-kyoung Ahn, Professor of Psychology at Yale and the author of Thinking 101: How to Reason Better to Live Better, to explore the thinking errors we make every day and discover powerful tools to mitigate them. Together, we delve into the reasons why job interviews can be misleading, why we tend to procrastinate, and more. Woo-kyoung offers valuable insights on how to improve our thinking, giving us practical ideas that we can apply in our daily lives, including some fun tips on how to make sure confirmation bias isn’t on the menu the next time we visit a restaurant.
Dr. Woo-kyoung Ahn is the John Hay Whitney Professor of Psychology and director of the Thinking Lab at Yale University. After receiving her Ph.D. in psychology from the University of Illinois, Urbana-Champaign, she was assistant professor at Yale University and associate professor at Vanderbilt University. In 2022, she received Yale’s Lex Hixon Prize for teaching excellence in the social sciences. Woo-kyoung’s research on thinking biases has been funded by the National Institutes of Health, and she is a fellow of the American Psychological Association and the Association for Psychological Science.
At Yale, Woo-kyoung developed a course called “Thinking” to help students examine the biases that can cause numerous challenges in their daily lives. It quickly became one of the university’s most popular courses. In her class, students examine “thinking problems” like confirmation bias, causal attribution, and delayed gratification, and how they contribute to our most pressing societal issues and inequities. Woo-kyoung presents key insights from her years of teaching and research in her recent book, Thinking 101: How to Reason Better to Live Better.
Books
- Thinking 101: How to Reason Better to Live Better – Woo-kyoung Ahn (2022)
- Psych: The Story of the Human Mind – Paul Bloom (2023)
Resources
- Confirmation Bias – Alliance for Decision Education
- I Let Algorithms Randomize My Life for Two Years – Max Hawkins (2017)
- The Law of Large Numbers – Britannica
- Bayes’ Theorem – Stanford Encyclopedia of Philosophy
- Fundamental Attribution Error – Alliance for Decision Education
- Loss Aversion Bias – Alliance for Decision Education
- Endowment Effect – ScienceDirect
- Intellectual Humility – Alliance for Decision Education
- Present Bias – Alliance for Decision Education
Articles
- Effects of Genetic Information on Memory for Severity of Depressive Symptoms – Woo-kyoung Ahn; Alma Bitran; and Matthew Lebowitz (2022)
- Chapter One – The Planning Fallacy: Cognitive, Motivational, and Social Origins – Roger Buehler; Dale Griffin; and Johanna Peetz (2010)
- Fluency Heuristic: A Model of How the Mind Exploits a By-Product of Information Retrieval – Ralph Hertwig; Stefan M. Herzog; Lael J. Schooler; and Torsten Reimer (2011)
- The Optimism Bias – Tali Sharot (2011)
- Regression to the Mean: What It Is and How to Deal With It – Adrian G. Barnett; Jolieke C. van der Pols; and Annette J. Dobson (2005)
- Not All Emotions Are Created Equal: The Negativity Bias in Social-Emotional Development – Amrisha Vaish; Tobias Grossmann; and Amanda Woodward (2008)
Charles: We’re excited to welcome our guest today, Dr. Woo-kyoung Ahn. Woo-kyoung is the John Hay Whitney Professor of Psychology and the director of the Thinking Lab at Yale University. Woo-kyoung devised a course at Yale called “Thinking” to help students examine the cognitive biases that cause so many problems in all of our daily lives. It became one of the university’s most popular courses. Woo-kyoung shares some key insights from her years of teaching and research in her recent book, Thinking 101. Welcome, Woo-kyoung.
Woo-kyoung: Hi. Thank you for having me here.
Charles: We have been reading the book at the Alliance, been passing it around. We have multiple copies. I think Grace and I each have a copy actually. But before we dive into questions, there’s a backstory to the course and the book, and I’d love to know why, when you were teaching at Yale, you thought, I really need to go back to the beginning. Like, let’s do a course called “Thinking.” So how did that come to be?
Woo-kyoung: So I used to teach Introduction to Cognitive Science, and my research expertise is in high-level reasoning processes. I offered senior seminars on that topic, and I felt like I really needed to teach more broadly about the importance of rationality. And I didn’t feel like titling that course “Rationality” or “Higher Level Reasoning Processes”; it’s not going to attract the students. So I just called it simply “Thinking,” and I offered it with no prerequisites. And it’s all about just the thinking processes. The first time I offered it, I got the highest course evaluation ratings that ever happened in the psychology department for a lecture course.
Seminars, you know, can get high ratings, but for a lecture course, I was excited! And then two years later when I offered it again, it grew to like 500 students. But I started capping it, partly because of the pandemic. It kind of needs more conversation during the class, and you can’t have any conversation with 500 students. So now I’m capping it at 100 students and offering it more frequently. It has a long wait list, but now I have a book. So I tell them, if you can’t take the course, there’s a book.
Charles: Right. What do you think was drawing the students to the course?
Woo-kyoung: So the main thing is, I think most students take the course thinking that they’re going to think smarter than other people. They just want to outsmart other people. And I don’t deny it at the beginning. But then, as I go on, that was not what I intended at all. And then what I do, usually, is do a lot of demo experiments in class, especially at the beginning, before they catch on that the correct answer is always the one that’s counterintuitive to them.
So at the beginning I get all the data and then I say, “This is what you guys said. But this is irrational for these reasons.” And then they finally realize how many thinking errors they make in everyday situations. And I also try to talk about a lot of everyday examples in relation to them.
And for the book, my editor kept reminding me—your audience is not undergraduate students, can we talk about something other than career or classes? So I tried to come up with those too. So I think one of the most common comments I get from my students is that the course is so useful and they can relate to this in everyday life. Yeah.
Charles: I remember reading in the book about you getting students to come up and, like, attempt to do dance moves, and I think that’s going to lead nicely into Grace’s first question. So I’m going to pass over to Grace.
Grace: Wonderful. Absolutely. I know, it’s such a good way to start the book. I was totally imagining, from my undergraduate days, being in that class. It would definitely have been my favorite class.
So I’m jealous of all those students who get to take it, and I’m sure all the Yale students love learning all of the content in their course. So yeah. Charles and I really enjoyed reading your book. What we’ve done is we’ve picked out a few of the thinking errors and cognitive biases that we think really connect to the theme of our podcast, which is decision-making and how to get better at decision-making.
So the first topic that I’d love to talk about is the fluency effect and the planning fallacy. When I was reading this, I realized how much it connects to all of daily life. And it reminded me of this time not too long ago when I decided I wanted to crochet a blanket for a friend’s baby. And so I went on YouTube, and I had all my materials, and I was watching the video of this obviously very skilled woman crocheting.
And then I watched it, like, on 0.5 speed thinking like, oh yeah, this looks great. And so I got ready and it probably took me maybe 30 minutes to do what took her 30 seconds. Like, it took so long; it was so much harder than I imagined. So now I have language to describe what I think was happening at that time. So I’d love it if you could talk to our listeners a bit about what is the fluency effect? How does it show up in our daily lives?
Woo-kyoung: So Grace just illustrated perfectly with a great example. So basically the formal definition is that if it looks fluent, you think you can do it easily. And it’s worse than YouTube these days. There’s now TikTok. They can make a chopped salad in 20 seconds. And, of course, I know it’s going to take more than 20 seconds. So I thought, okay, it’s going to take me five minutes to do exactly that, and washing all these vegetables, and, by definition, you have to chop it, right? All the vegetables. And it took me 30 minutes.
So this kind of fluency effect happens all the time, because before YouTube, before TikTok, this heuristic actually worked, right? If somebody does it easily, it’s a good clue telling you that this is an easy thing for you to do as well. But if it looks harder, then it’s going to take longer for you.
So we have evolved to use this kind of feeling of fluency as a guide to determine whether we can do something, how long it’s going to take, and so on. But then, at the same time, when we’re planning for future tasks, what we do is we mentally simulate in our head what we should do. So step one, I’ve got to do this. Step two, I’m going to do that. And I should be able to finish this by the end of the day. But when you do it in your head, everything runs so smoothly. Nothing unexpected happens. Like, you know, this morning I had a dentist appointment at 9:00 AM. So I got up at seven, had a coffee, drove there leisurely, thinking that after this dentist appointment, I would have like two full hours before this podcast recording.
And then I should be able to finish my data analysis, blah, blah, blah. When I arrived at the dentist’s office, I realized that I entered that appointment on the wrong date. So I did not have a dentist appointment, though they had a last minute cancellation, which I could take. But I had to wait another 30 minutes. So, to make a long story short, I had to spend the entire morning at the dentist’s office and barely had lunch before this podcast. And this is something that happens to a professor who’s been teaching planning fallacy for over 30 years.
Grace: Oh my goodness, that’s so relatable, I think, right? Not only did it take you longer than you thought, you forgot to even account for the fact that you might have got the date wrong or some other unexpected thing might come up.
So I guess, you know, we’ve all been in situations that are like that, where we’ve thought that something’s really easy and then tried it out and it’s been a lot harder. Or where we’ve been planning something and we’re not taking into account all the things that can happen, and we underestimate how much, you know, time or money or effort it’s going to take us to complete the task.
But what’s the solution? You know, we’re all wondering—how can we do better with this?
Woo-kyoung: I mean, for a chopped salad, it’s simple. You just try it out and then you are going to give yourself feedback that you cannot finish this in five minutes. So next time you need to allow for 30 minutes. But for planning fallacy, it’s harder because the whole point of planning is to project into the future without actually trying it out.
On top of that, we also have optimism bias. We want the task to take a shorter time. We want to be able to accomplish all of those. And we think we can do it, which is a good thing, you know, to have a good feeling of agency, you know, going into the task. But that can always lead to the planning fallacy. And the other thing is, you can remind yourself that something is always going to happen . . . you know, I’ve never imagined myself putting a dentist appointment on the wrong date. I’ve never done that in my 60 years of life. That’s not something that I can imagine.
So I could not prepare for that either when I was planning for today’s, you know, morning. So what I just do is, I just double the amount of time that anything is going to take and just force myself to stick to that. So the data analysis that I squeezed into my plan was nothing urgent. It can be done within a week. So I plan it that way. Something is bound to happen for sure. And then I just operate based on that. Yeah.
“If somebody does it easily, it’s a good clue telling you that this is an easy thing for you to do as well. But if it looks harder, then it’s going to take longer for you. So we have evolved to use this kind of feeling of fluency as a guide to determine whether we can do something . . . but that can always lead to the planning fallacy.” – Dr. Woo-kyoung Ahn
Grace: That’s so interesting, yeah. Because if, you know, next time you plan a dentist appointment you thought, oh, maybe I’ll put it on the wrong day—you’ll say no, this time is different, right? But this time might be different, but it’s probably going to be similar in that something else unexpected will come up. It just won’t be that you put it on the wrong day.
Woo-kyoung: Exactly.
Grace: I like the example in your book, you said that you just add 50 percent to things. And I can feel the tension in my body when I imagine having to do that to everything. But also it would be great if you could do things earlier than you said. So if you had an extra hour, you know, worst case, it would be really good.
I think you also gave an example in the book that we imagine that if we chart out, like, these are the exact steps, that we would be more accurate. But then, I think, does that do the opposite?
Woo-kyoung: Yeah. So sometimes you can have a microplan, right? Okay, it’s going to take me 21 minutes to get to the dentist’s office and all these things, right? And my hygienist, she’s usually punctual. And then the doctor should be there, and I should be back home by 10:35 . . . but then, when you do that, it actually creates more of an illusion of fluency. Because, yeah, how wrong can you go? Right? And, of course, on top of my error, there was also a graduation on campus today, which I did not realize. So there were a lot of things that happened today.
Grace: And maybe they will next time you have a dentist appointment, they just may not be the same thing.
Woo-kyoung: Exactly.
Charles: I didn’t get the wrong day with a dentist appointment, but I actually went to the cinema, and I went to sit in my seats in the cinema. And there were some people sitting in the seats and I was about to say, “Oi, what’s going on? These are my seats.” I’d—literally, it was the wrong day. I bought tickets for the previous day, so my seat was empty the previous night for the same show. I just didn’t turn up. So, yeah, it happens to the best of us. So, you know, we obviously can’t get into all the biases that are in the book, but the next one that I wanted to talk about—you say in your book, you say, I believe this is the worst of the cognitive biases that I’m aware of.
So we’ve got to talk about this. So this is the confirmation bias. I think you even mentioned it before we started the recording, you know. So it’s very prevalent, very powerful, and pernicious. So maybe if you could just start by telling us what it is and maybe how it can affect our decision-making.
Woo-kyoung: Right. So confirmation bias is famous and notorious. And it’s basically the tendency to confirm what you already believe, right? But what most people understand as confirmation bias isn’t what it actually is. So most people think that it can only happen to those who are stubborn or self-righteous or some weird people.
So if you’re a conservative person, you may think that only liberal people commit confirmation bias. If you’re a liberal person, then you may think only the conservative people commit this bias. And so on. But it can happen to anybody almost every moment. So, for example, I drink coffee first thing in the morning, and I do this to wake myself up. And that is a confirmation bias that I’m committing every morning. Because I might be able to wake up without coffee, but I’ve never tried it in my life. I mean, like, since I started drinking coffee. So it’s something that I just keep on doing without trying to disconfirm my own belief about it. And when you think about it, like, almost everything that we do by habit is an example of a confirmation bias.
So we all commit this. And what is wrong with this? It can go wrong because, for instance, you can hire the same kind of person because you think they are the ones who do a good job. So in the past, let’s say, men—they got into all the math and science graduate schools, they got tenured, and they won all the Nobel Prizes, and so on. So you have even more evidence showing that men are good at science. So you keep on admitting male students, keep on tenuring male scientists, and so on. So it gets into this vicious cycle. So that’s where things can go wrong. I mean, just look at the coronavirus, right? The majority of the people who developed the vaccine were female scientists.
So it can actually hurt us if we stick with our existing belief and try not to disconfirm this, you know, traditional belief. So what can we do about that to, you know, counteract this? And I said, this is the worst bias, because it’s extremely difficult to counteract this because it’s in our everyday life habits. So I wake up in the morning, drink coffee, and I have toast for breakfast. And I go to school, taking the same highway, the same route, and so on. It’s exactly the same pattern every day. And it kind of works for us because we don’t have to think about trying to disconfirm our beliefs every day and at every moment of our life.
So, for example, in the morning, I can’t, you know, wonder, okay, which route do I want to take to get to my office? Maybe I should try something new. Who knows? I might discover some more interesting cupcake place or something. But, yeah, that’s too much time, right?
“I drink coffee first thing in the morning, and I do this to wake myself up. And that is a confirmation bias that I’m committing every morning. Because I might be able to wake up without coffee, but I’ve never tried it in my life . . . almost everything that we do . . . by habit is an example of a confirmation bias.” – Dr. Woo-kyoung Ahn
Charles: Right, right. I read in your book, with delight, about a guy at Google. Tell us about this app. Everyone I’ve told thinks it’s kind of crazy, but very intriguing and that . . . yeah, tell us about that.
Woo-kyoung: Yeah, so he was . . . I don’t know whether he still works at Google, but he was basically a computer scientist. And he was developing apps, and he realized his life was so predictable in San Francisco. So he decided to try some new app, where it kind of finds a random place in San Francisco and calls an Uber for him. And he does not know where this Uber is going, because the app did it. And then it just takes him there and, you know, he gets dropped off. And the first time he tried that, he was dropped off in front of the psychiatric emergency room or something. And he loved it because of this, you know, surprisingness, right? So he made it into an app, and the app picks these random events from Facebook and takes you there. And then he just ends up having, like, White Russians with Russian people. Or he was attending some retired psychology professor’s birthday party . . . and he makes all these new friends in some weird places he never thought about going.
And when I read about him—he has a TED Talk, and he is pretty famous. But I always told my students that I’m going to try this during my leave. And, of course, I’ve never tried it. And this gets into the point of why overcoming confirmation bias is so hard. It’s because it just feels so risky. It sounds fun when someone else is doing it, but who has time to do it? And nobody likes uncertainty. We all want to do something predictable.
And so, going back to the scientists, when male scientists have been doing a great job so far, it feels so risky to try hiring female scientists. And that goes into, like, racism and ethnicism, and, you know, everything that we see, all the societal problems.
So my suggestions in the book are twofold. One is, we can’t all use that app that takes us to some unknown places, but maybe we can try that kind of thing on a smaller scale. So when you go to a Chinese restaurant, you can ask for the Chinese menu and just randomly pick a dish from there, not knowing what it could be, right?
“Overcoming confirmation bias is so hard . . . because it just feels so risky. It sounds fun when someone else is doing it, but who has time to do it? And nobody likes uncertainty. We all want to do something predictable.” – Dr. Woo-kyoung Ahn
Grace: That sounds like so much fun. I want to do that. That’s doable too. I agree.
Woo-kyoung: Exactly. That kind of doable thing, or what I do is I try a new recipe every week. I stick with that. And, of course, not everything is great, but it’s kind of just fun. And my husband is all in on this game too, so that works out fine.
So you can do all that at the individual level. But then at the societal level, since people are not going to commit to large-scale, risky actions, maybe it’s the institution that should provide some kind of incentive to the people. So at Yale, where I work, if there is a very promising minority faculty candidate, then Yale would just create a new faculty line for us. So it’s like a win-win situation for us. We are not losing any faculty line. And we can also, you know, take a chance with someone who we would have never hired before.
Charles: There’s a study you talk about in your book, which I think is really interesting. It was about depression. As is often the case in many psychological studies, you tricked the people you were studying. I don’t know why anyone signs up for a study because there are always tricks involved! But, you know, the way the question was framed had a big impact on the outcome. I’d love it if you could just tell us a bit about that.
Woo-kyoung: Well, not all psychology experiments involve tricks. This one did, and we had to debrief the participants later and everything. So the volunteers who agreed to receive a package from us, they gave us a mailing address, and then we sent out this package that contained two things. One was a small amount of mouthwash, and they had to rinse their mouth first and spit it out. And it also had some little test strips. And after they rinsed their mouth with the mouthwash, they had to put a test strip under their tongue and enter the color change they noticed on the strip into the online experiment. And, unbeknownst to them, the mouthwash contained a lot of sugar in it. And the test strip was actually a diabetes test strip.
So, for everybody, the color changed in front of their eyes. But then we told them that this is a saliva test that serves as a proxy for measuring whether they have a genetic risk for major depression. And we told them what the symptoms are . . . and then after they entered the color information onto the online experiment, participants were randomly assigned to one of the two conditions. In one condition, they’re told that, oh, that means you don’t have a genetic risk for depression. The others were told, you can guess, yes, that means you have a genetic risk for major depression. After they received this feedback, they were asked to answer some questions that were developed to measure their levels of depression.
And it is a very simple task. It’s just basically assessing what their past two weeks were like. So how sad were you? How depressed were you? How pessimistic were you? What was your appetite, your sleep? And a score of 14 is considered the cutoff clinicians use to classify someone as having moderate depression. And those who were told that they don’t have a genetic risk had an average of 11, which is significantly lower than 14. So on average, those people were not depressed. That’s what that means. But when they were told that they have the genetic risk, their score was significantly higher than this cutoff of 14.
So, in other words, we could create depression in three minutes, literally three minutes. And the reason why this is an example of confirmation bias is that, when they were thinking back, they must have retrieved only the sad events, to fit with this genetic feedback they had just received.
Or, even if some events were kind of ambiguous or neutral, they could have interpreted them as depressing events. You know, “I was alone; probably those were pretty depressing events for me.” So, you know, confirmation bias happens to almost everybody, right? Second, we also need to be really careful about this genetic feedback that gets given out to people all the time. And you just pay like $200 and you can get the whole thing.
Charles: If it had been phrased differently, as, like, you were checking whether they had a genetic predisposition to an upbeat temperament, or, I don’t know what the opposite of depression would be, it could have, in three minutes, created that as well. Arguably.
Woo-kyoung: Yeah, yeah. The genes for happiness. You are genetically wired to be happier than a normal, average person. Yeah. Do you want to know the truth or . . .
Grace: So I was going to say this is one of my favorite topics, but honestly, every time I read a new topic, it was my new favorite one. So a big part of making a good decision is not just seeking out an example or an anecdote or asking your friend what their experience was, but, you know, using data and statistics.
So you know, for example, if I was to buy a car, you can either ask your friend, what did they think of their car? Or, you know, look at reviews from people who have the same family size as you or who live in Philly and have to deal with Philadelphia parking, for example. But yeah, I definitely, when I was reading it, I also intuited that if I were in a situation where I was buying a car, I would put a lot of stock in what my friends and family say and not so much in the data. Though I would consider it somewhat. So can you tell us: why are anecdotes and examples so powerful and why is it that we undervalue data and statistics so much?
Woo-kyoung: So here’s another example of that phenomenon, which happened after my manuscript was submitted for the book. So it didn’t make it into the book. But at the end of last year, my son was a junior in college. “Was,” past tense. He was majoring in computer science. He was interested in developing software for his career, and he was given an opportunity to start a company, of course. So he said he had taken two years of leave from college because, you know, there were investors who had already given him money and so on.
And then my thought was, oh my gosh, he’s probably not going to come back to college. Who is going to come back to college after two years of real life, right? And so I was telling all my friends about my anxiety about this issue. What am I going to do about this? And everybody that I talked to mentioned one of two people. Number one was Steve Jobs. Who is the other one?
Charles: Did Bill Gates drop out of college?
Grace: Yeah, totally. There we go!
Charles: Worked out pretty well for those guys.
Grace: Steve Jobs is the one that came to my mind too. Oh my gosh.
Woo-kyoung: It was hilarious.
Grace: Congratulations. Your son is the new Steve Jobs! Woo-kyoung, that’s amazing.
Woo-kyoung: Do you know how many students dropped out of college and didn’t become a billionaire? So it’s exactly that. But you know, actually my husband also said, “Yeah, but there’s Steve Jobs” too. Yeah, and he’s a psychologist! So the reason why we get sucked into anecdotes or specific examples is that that’s how our brain works. We are not evolved to deal with numbers or very abstract statistics like averages. What does “average American” mean, right? Or you know, what does an average bachelor’s degree mean? It’s too abstract for us. We are more used to seeing something concrete, hearing something, tasting something. We receive all our input into our brains through our senses, and then we process them into something abstract. So we are not used to dealing with these large numbers, especially.
We have to rely on statistics, but I would say you always add a couple more examples on top of the statistics to make the point. Otherwise it’s not going to work. So for instance, when people are trying to convince climate deniers about the, you know, the risks that we are facing, it might be worth presenting some map of where your town is going to be a hundred years from now. Is it going to be under water?
So something like that might be more powerful, or just watching the, you know, the ice breaking in the North Pole. Or, you know, the polar bear floating on the ice. That’s always a great one. Yeah, always present something really tangible along with the numbers and not just the numbers.
“The reason why we get sucked into anecdotes or specific examples is that that’s how our brain works. We are not evolved to deal with numbers or very abstract statistics . . . we receive all our input into our brains through our senses, and then we process them into something abstract.” – Dr. Woo-kyoung Ahn
Grace: That’s so interesting, also, because I feel like it’s a difference if you are the consumer of the information, in which case you want to seek out the data. But if you want to communicate the information, you want to use the data but also examples, because those are the things that really stick in people’s minds.
In your book you mentioned several different rational principles, and I think you were arguing that one of the reasons that we struggle with data and statistics and why anecdotes loom so much larger in our minds is because we really struggle with some of these concepts.
I think it was the law of large numbers, regression toward the mean, and Bayes’ Theorem, which, by the way, I’d never understood. And you did it. You have solved the mystery of how to explain it to people who are not experts in this field. So thank you for that. But I’m curious if you want to speak to one of those rational principles and sort of what they are and why it is that we don’t understand them.
Woo-kyoung: So Bayes’ Theorem, I cannot explain it without slides.
Grace: Read Thinking 101, listeners, if you want to understand Bayes’ Theorem. We can skip that one because they can buy your book.
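[For listeners who want a taste anyway, the arithmetic behind Bayes’ Theorem fits in a few lines. Here is a minimal sketch in Python, with entirely hypothetical numbers for a screening test: a 1 percent base rate, 90 percent sensitivity, and a 5 percent false-positive rate.]

```python
# Hypothetical numbers, purely for illustration.
base_rate = 0.01             # P(condition): 1% of people have it
p_pos_given_cond = 0.90      # P(positive | condition): sensitivity
p_pos_given_healthy = 0.05   # P(positive | no condition): false positives

# Total probability of a positive result, across both groups.
p_pos = (p_pos_given_cond * base_rate
         + p_pos_given_healthy * (1 - base_rate))

# Bayes' theorem: P(condition | positive result).
p_cond_given_pos = p_pos_given_cond * base_rate / p_pos

print(f"P(condition | positive test) = {p_cond_given_pos:.1%}")
```

[With these made-up numbers, a positive result on an accurate-sounding test still means only about a 15 percent chance of actually having the condition, because true cases are so rare. That gap between intuition and arithmetic is exactly why the theorem is hard to convey without working the numbers.]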
Woo-kyoung: Yeah. Because it has to have a formula. And I have to actually prove it with the numbers. But we can try regression to the mean, which is . . . so when someone performs really, really badly or someone performs really, really, you know, superbly, then next time they try the same thing again they tend to regress toward the mean. Even though it’s not like they became arrogant. It’s not like they became more motivated. It’s not like that. It’s just a purely statistical phenomenon. And the reason why that happens is, in anything that we do in life, there are always random errors happening.
Oh yeah, the tennis player who just dropped out of the French Open—he got injured, right? But what that also means is that, up to that point, he had been able to avoid those injuries. I’m not saying that he’s, you know, an excellent tennis player just because he was lucky. That’s not what I’m saying. But in some statistical sense, he was lucky, in the sense that he didn’t get injured that seriously. But, you know, people don’t have that kind of luck all the time. And, you know, the luck . . . these random events are, by definition, randomly distributed.
So when they keep on doing it, when they’re at the top, you know, bad luck can happen and then it looks like they’re performing worse afterwards. So it’s that kind of thing that’s really hard for us to grasp because we don’t see these random events. We don’t think about statistics or probabilities. So we tend to rationalize it when bad things happen. And people have all sorts of other biases that allow them to rationalize all these things so beautifully. So when someone catches COVID, right? It’s purely random bad luck. But then maybe, secretly, when you hear your friend getting COVID, you might think, what did he do wrong?
“When someone performs really, really badly or someone performs really, really, you know, superbly, then next time they try the same thing again they tend to regress toward the mean. Even though it’s not like they became arrogant. It’s not like they became more motivated . . . it’s just a purely statistical phenomenon . . . in anything that we do in life, there are always random errors happening.” – Dr. Woo-kyoung Ahn
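Regression to the mean is easier to see in a simulation than in prose. Here is a minimal Python sketch (all numbers made up, purely illustrative): every player has identical true skill, each performance adds random day-to-day luck, and the apparent stars of round one drift back toward the average in round two.

```python
import random

random.seed(0)

# One performance = stable skill + random luck on the day.
def perform(skill):
    return skill + random.gauss(0, 10)

# 1,000 players, all with the same underlying skill of 50.
N = 1000
first = [perform(50.0) for _ in range(N)]

# The top 10% on the first outing look like stars.
top = sorted(range(N), key=lambda i: first[i])[-N // 10:]
top_first_avg = sum(first[i] for i in top) / len(top)

# On the second outing their luck re-rolls, so as a group they
# fall back toward the true mean of 50: regression to the mean.
second = [perform(50.0) for _ in top]
top_second_avg = sum(second) / len(top)

print(round(top_first_avg, 1))   # well above 50
print(round(top_second_avg, 1))  # close to 50
```

Nothing about the players changed between rounds; only the luck re-rolled.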
Grace: What do you mean? I’ve never had that thought! No. [laughter]
Woo-kyoung: What did they do wrong? That’s called a fundamental attribution error. We tend to blame it on the person first. So yeah.
Charles: I was just going to add that one of the interesting aspects of that is, say you have bad performance of a team or a bunch of students. And, you know, it would regress to the mean. But often it’s when it’s gone bad that we introduce something to try and remedy it. And then it gets better, and you assume, well it was because of that intervention we did. But you could have done nothing, potentially, and it would have regressed anyway. So it could get really murky.
Woo-kyoung: How can we sort this out in a clever, experimental design, right? The way to do it is you have to have two groups of students who both perform badly at the beginning, and then only one group gets intervention and the other doesn’t. And they both take the test again. And then if there’s additional increase beyond this regression to the mean, then we can say that was an intervention effect. Yeah.
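Her design can be sketched as a toy simulation (all numbers hypothetical, not from any real study): every student has the same true skill, the low scorers on test one are selected, and only the treated half gets a genuine boost. Both halves improve on the retest; the intervention effect is the gap between them.

```python
import random

random.seed(1)

def take_test(skill, boost=0.0):
    # A score is underlying skill, plus any real intervention
    # effect, plus day-to-day noise.
    return skill + boost + random.gauss(0, 15)

# All students share the same true skill of 60; keep only those
# who scored badly (below 50) on the first test.
first = [take_test(60.0) for _ in range(4000)]
low = [i for i, s in enumerate(first) if s < 50]

# Split the low scorers: control gets nothing, treatment gets a
# true +5-point intervention before the retest.
control, treated = low[::2], low[1::2]
ctrl_gain = sum(take_test(60.0) - first[i] for i in control) / len(control)
trt_gain = sum(take_test(60.0, boost=5.0) - first[i] for i in treated) / len(treated)

# Both groups "improve" thanks to regression to the mean; only the
# difference between the two gains reflects the intervention.
print(round(ctrl_gain, 1), round(trt_gain, 1))
```

The control group's large "gain" with no intervention at all is exactly the trap Charles describes.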
Grace: That’s really interesting. And I think that that must be pretty common, especially if you think about it in a school setting. There’s definitely a lot of impact from that.
Another place that this comes up is job interviews. I know you mentioned in your book that, you know, if we interview—we’re actually interviewing for a position in our department right now, so this is super relevant. If we interview, let’s say, 20 people and we pick the best one, that person, when they start the job, is likely not to be as good as they seemed, just because that’s probably the person who—it’s their birthday and it’s great weather outside and they are not sick and they, you know, just found out in their current job that they got a raise and they’re just in a great mood.
So we’re sort of capturing them at the peak of their performance and that actually, most likely, they’re going to regress toward the mean, at least somewhat. You suggested in your book that maybe not interviewing is the way to go, but I can’t bring myself to do that. So maybe you said also interviewing many times or seeing a candidate in a lot of different situations might be better. Is that correct? And could you tell me more about that?
Woo-kyoung: Yeah, yeah. Probably doing it multiple times, then you can get a more average picture of the person and there are many tricks you can use. I don’t know whether you guys ask for recommendation letters. And letter writers really are reluctant to say bad things. So I just call them up. I just pick up the phone and say, okay, this is totally not going to go anywhere. And then I ask, like, really important questions. Have you seen the person doing this or that? And so on. And the thing about recommendations—the letter writers or their previous colleagues—is that they had multiple samples of this person. It’s not just one interview. So they are the perfect person to ask.
Charles: I am a positive person by nature. But now we’re talking about the negativity bias. What is negativity bias? And I’m kind of interested to know why we do it, like it seems counterproductive. So what is it and why do we do it?
Woo-kyoung: You know, as usual, I’m starting out with an example. Let’s say there are two Chinese restaurants that you’re considering going to for dinner tonight. And you searched their reviews and ratings. And they both received average ratings of four out of five. But then you start reading the reviews and Restaurant A, you know, had an average of four because almost everybody gave four stars to that restaurant.
There’s nothing really amazing or spectacular, but there’s nothing really bad either. But let’s say another restaurant, Restaurant B, had a four out of five rating because the majority of people actually gave five stars to this restaurant. But then there are some killer reviews—a few one-, two-, and three-star ratings. And you read the reviews and it says something like, oh, the service was horrible or there was hair in my dish. And so even though the majority of people are raving about this restaurant with five stars, that hair—that did it. So you might end up going to the first restaurant that almost everybody said is an okay restaurant.
And that is exactly the negativity bias. Even though overall they’re the same thing, we tend to give more weight to the negative information. Why are we so sensitive to negative information? It might be because—this is just like my theory, it’s not really scientifically based—we might have evolved during the time when resources were not that abundant around us.
So we could be happy about some positive things, but we should be more sensitive to any kind of loss or negative or any kind of hint of a threat. So we might be overfiring at this kind of negative information.
Charles: Right. Because I suppose if you are really on the edge, then one negative experience can push you over the edge, right? So it doesn’t matter if you miss a few positive experiences. Yeah, that makes sense. So then loss aversion, the way I have understood it, and maybe you can help me if this is right, is that sort of like a special case of the negativity bias to do with the negativity of losing versus the positivity of gaining?
Woo-kyoung: Exactly. Yeah, exactly. In the case of loss aversion—so let’s say we toss a coin. And I’m going to offer you this game, and if it lands on heads, then you’re going to get $1,000 from me. But if it lands on tails, you have to give me $1,000. Would you take that? No. No way.
Charles: I would not, no.
Woo-kyoung: So let’s make this a little bit more attractive to you. So this time if it lands on, you know, whatever, the tails, then you have to give me still $1,000. But if it lands on heads, then I’m going to give you $1,200. So the average payoff is like, you know, a hundred dollars.
Charles: Hmm. No, I’m not going to take that.
Woo-kyoung: So most people don’t want to play this game until the gain-to-loss ratio is about 2.5 to one. Yeah. Huge ratio, right?
Charles: That seems crazy. I mean, I can feel it in my bones. Like I would, it would have to be really attractive for me to be comfortable losing that amount of money. But also mathematically it doesn’t make any sense. I must be missing out on lots of good opportunities, right?
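Charles’s instinct about the math is right. A quick expected-value check (a sketch added for readers; the 2.5-to-one threshold is the figure Woo-kyoung cites):

```python
def expected_value(gain, loss, p_win=0.5):
    """Average payoff of a coin-flip gamble: win `gain` with
    probability p_win, otherwise pay `loss`."""
    return p_win * gain - (1 - p_win) * loss

print(expected_value(1000, 1000))  # 0.0: the first offer, a fair bet
print(expected_value(1200, 1000))  # 100.0: positive, yet widely refused
print(expected_value(2500, 1000))  # 750.0: roughly where most people accept
```

Turning down a bet with a $750 average payoff is the kind of missed opportunity Charles is pointing at.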
Woo-kyoung: Exactly. Exactly. Because we tend to focus too much on negative information. That one hair in a dish . . .
Charles: It’s just one hair.
Grace: It doesn’t feel like just one hair to me. That’s one hair too many!
Charles: Yeah. There was an example. You spoke about car salesmen starting with a low price and adding on things versus starting with a high—so maybe talk us through that.
Woo-kyoung: Yeah, so let’s say a new car costs like $10,000. That’s the low basic option, basic model. But then the car salesperson says, if you add this, you know, add that. Then if you spend just $200 more, you’re going to have this power, whatever. If you add like $300 more, then you’re going to have, like, tinted windows and all these fancy options adding up.
And so the salesperson explains this in terms of the gain, what you are going to, you know, gain by adding these options. But then the other salesperson can start out with $15,000 for the full option model. If you add everything, then you’re going to have this. But then you can have a cheaper one, but then you are going to lose this one. Or if you take this away, then you will have to roll down the windows yourself. I don’t know whether they make that kind of car anymore. So there are all these things that you can lose. So they did an experiment with this kind of a simulation. And then people ended up spending more money on the car when it was framed in terms of losses. Yeah, losing options.
Charles: Right. So people just don’t like the idea of losing something with—I mean, you don’t actually own it in either case because you haven’t got the car yet—but that sense that that’s a thing that I’ve sort of got and I would have to give up.
Woo-kyoung: Right, right. So here’s my own real life example of, you know, how I apply this concept. So our department is going to move to a new building this summer. And I have all these books, and we have smaller bookcases in the new building. And there are, like, so many books that I have not even touched for over 20 years. So I need to get rid of them. And I started making a pile of which books I want to get rid of. And, of course, every one of these books is so precious. I can’t discard . . .
Charles: Can’t get rid of this one. Not this one. Maybe the next . . . no, not this one either!
Woo-kyoung: So that’s the loss frame. But then if I frame in terms of a gain frame, so instead of creating a pile of things to throw away, I’m making a pile of books that I’m going to purchase from my own bookshelf. I’m not paying any money. It’s going to be $0, but I have to buy them somehow. Which one do I want to buy? Then I feel like, oh my gosh, I only need like five of these books. Something like that.
Charles: Right. And in a sort of less academic world, I read about in the book, didn’t you Marie Kondo your wardrobe?
Woo-kyoung: Yes. Instead of trying to make a pile of discards or a donation pile, which is what most people do, just dump everything. You don’t own them. Which ones would you want to buy if you had to buy them all over again and it’s going to be free? And of course there are clothes that I wouldn’t want to take back, even if they’re free.
Charles: Right. Is that what is known as the endowment effect? Or is that an example of loss aversion? Or is the endowment effect just a special case of loss aversion?
Woo-kyoung: It is an example of the endowment effect. Yeah, the endowment effect. So once you own it, then it looks more valuable than before you owned it. And it is an example of loss aversion. But some people say the endowment effect can happen for other reasons as well.
Charles: Right. I mean, I still have all my CDs. And if someone said, “Hey, here’s a really good deal. A big box of CDs from the 1990s,” I would not snap it up. But . . .
Woo-kyoung: I should share my story about how I got rid of all my CDs. My younger brother is worse than me when it comes to the endowment effect. So he owns a car that has a CD player. He took all of my CDs.
Charles: Perfect. He’s happy. Yeah. That’s brilliant.
Grace: So you’ll have to just give them to Woo-kyoung’s brother. There we go. Solve your problem.
Charles: Yeah. Special feature: this car comes with a CD player as an add-on.
Grace: That’s so funny. I love that. I had a question about temporal discounting, so it was really interesting. I don’t think I previously connected the idea of delaying gratification and the merits of that with procrastination. It sounds like, from your book, that both of these are examples of temporal discounting where we overvalue our present self and undervalue our future self. Is that accurate?
Woo-kyoung: Yes, that is exactly the case. Yeah.
Grace: Okay, yeah, I never put those things together. So we all do this temporal discounting. We think, oh, it’s Friday Grace’s problem. That’s not Today Grace’s problem. So that’s fine. I had this example this weekend. I bought a cake and I was like, I don’t need this on Sunday. I’m full. I’ve just had a great meal, but I’m going to buy it and I’ll have it one evening this week at work when I’ll be so happy to come home to this cake. And my husband put the cake on the table and I was like, this is so hard. I can’t. I really wanted to eat the cake. So what’s the solution to this? I would love to hear an insight for all of us who struggle not to eat the cakes that are put in front of us. How can we do better?
Woo-kyoung: So the issue emerges for multiple reasons. There are many issues when we think about the future self. So in your case, the cake case, I feel like a therapist. What happened was you had a harder time resisting temptation, right? And instead you should think about the future, your healthy self.
So the reason why we cannot resist temptation and think about the future, one of the reasons, is that the future is, by definition—it’s abstract. It’s uncertain. It’s far in the future. So it’s much easier to think about current situations. So that’s why thinking about the specific future events can help you to think about your future self and then reward yourself for the future.
“The reason why we cannot resist temptation and think about the future . . . is that the future is, by definition—it’s abstract. It’s uncertain . . . so it’s much easier to think about current situations.” – Dr. Woo-kyoung Ahn
Grace: So we sort of make the future self more vivid or closer to how vivid the present self is. Is that the underlying thread?
Woo-kyoung: Exactly. And you mentioned procrastination and it works in an opposite way. So you think that future pain feels less than the immediate pain. So if you have to do this project that you don’t want to do, then you will just put it off for three days from now because you feel like that’s going to be a little bit less pain. Which, how does that work, right? That never works out that way. So maybe you should think about what’s going to happen three days from now, right? Really, really specifically.
Grace: Look at my calendar and see what that day is going to be like. Right.
Woo-kyoung: That’s great. Exactly. And hopefully you might want to get that project over with right now.
Grace: Right. You tell a funny example about how to use this for, not sinister, but sort of for your own personal gain. About how if you want a hard-to-get speaker to talk at a conference, you should invite them really far in advance. So we’ll probably have to edit this out so they don’t know our strategies.
But if we want some podcast guests who we would love to have, we should invite them far in the future. Obviously it’s a coincidence that we invited you a long time ago to do this today!
Woo-kyoung: Yes. Yes you did.
Grace: But yeah, you didn’t fully—it wasn’t so vivid when it was so far in the future. Yeah. That’s great. Thank you for the tips. I’m going to be a lot better at saving cake for the best moment to eat the cake now. And just to add, I think it’s important to note that a lot of these examples in the literature that we read imply that eating a cake now is a bad thing. It depends what your values are and what your goals are. So if your goal is that you want to try all the cakes from all the bakeries in Philadelphia, then eating the cake now is fine. You know, just for all of our listeners who want to eat a cake now, there are some reasons why that can be a positive too.
Charles: I was just going to add on that something Grace and I both appreciated in that delayed gratification chapter was at the end of the chapter, you do add a bit of a counterweight. You know, you say a lot of young people have such programmed lives, they need to get this on their resume, and they must sort of deny their present self at all times. And you do make a point of saying, your present self merits some fun and enjoyment as well. So I really appreciated that sort of balance in the book.
Woo-kyoung: Yeah. So that’s one demo experiment that does not work with Yale students in my class. So I ask them, do you want $100 right now or $125 three months from now? The correct answer should be you want $100 right now, right? But Yale students say, yeah, wait for three months.
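As an aside for readers, the arithmetic actually favors the students (a hypothetical calculation, not from the book): $125 in three months is a 25% quarterly return, which compounds to roughly 144% a year, far beyond what ordinary investments pay. So waiting is also the economically sound answer, which is exactly why the demo fails.

```python
# $100 now vs. $125 in three months, read as a rate of return.
quarterly = 125 / 100 - 1                # 0.25, i.e. 25% per quarter
annualized = (1 + quarterly) ** 4 - 1    # compounded over four quarters
print(quarterly)                         # 0.25
print(round(annualized, 3))              # 1.441, about 144% per year
```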
Grace: Wow. That’s how they’ve got to where they’re going.
Woo-kyoung: Exactly, exactly. And then, it is almost sad. So I tell them, look, this experiment is not going to work. I’m not going to even try it with you. So what I do to them is have them think about how they felt about getting into Yale when they were in high school. Didn’t you feel like everything is going to be, like, smooth right after that? Once you get into Yale? You just need to put up with everything up to that. But as soon as you get the admission letter, everything will go smoothly. Your life is all set. Didn’t you think that? And they all laugh out loud. They all laugh. And that was all that I needed to tell them about what was wrong with living for the future.
Charles: Right. That makes sense. Just a slightly broader question, like what we focus on at the Alliance is sharing tools and strategies based on, you know, evidence from cognitive science, decision science, sharing those with young people so they can make decisions which are aligned better with their values.
And one part of it is nurturing dispositions that are important because, if you’re aware of all the tools, but you’re not motivated, really, to employ these kinds of tools to make better decisions, then the tools themselves aren’t going to be that useful. So I’d just love it if you could share some of the sort of mindsets or broader dispositions that you think are helpful for people to have if they’re going to be more effective decision makers.
Woo-kyoung: Probably one of the reasons why some people might not be motivated to overcome their thinking errors is that they might think that it doesn’t happen to them. And that’s called the “not me” bias. So that’s one of the things that I try to emphasize in the book, that these are the things that happen to everybody, including me. I commit the planning fallacy while preparing a lecture on the planning fallacy, you know?
“One of the reasons why some people might not be motivated to overcome their thinking errors is that they might think that it doesn’t happen to them.” – Dr. Woo-kyoung Ahn
Grace: I love that. So meta.
Woo-kyoung: Yeah, it happens to everybody. But the other part that I want to emphasize here is that it doesn’t get fixed right away. Just because you read about this, it’s not like you can fix it right away. You have to practice continuously, you have to talk about it. You have to talk with other people in a constructive way and practice it.
Charles: Yeah. You speak in the book about seeking other people’s perspectives, again saying also being humble around that we also can fall prey to these biases. So I suppose intellectual humility seems to be a part of the package.
Woo-kyoung: Yeah, exactly. Exactly. There is research on wisdom, which I didn’t get into in the book, but there are many dimensions of wisdom. But the number one dimension is intellectual humility. Yeah. That I can be wrong. But one thing I want to add is that, some people ask me whether that would make us anxious about everything. It’s not like believing I’m wrong. It’s like I can be corrected. My belief currently is the best belief I have, but I can be corrected. I am open-minded. That’s all that that is.
“There are many dimensions of wisdom. But the number one dimension is intellectual humility.” – Dr. Woo-kyoung Ahn
Charles: You know, we like to know what guests are working on. So if there’s something that’s coming down the pipeline, you can give us a sneak preview of it.
Woo-kyoung: Yeah, yeah. It relates to what I talked about, the future reward, delayed gratification, and how some people can do too much of it. So, too much sacrifice now for the future. Persistence, you know, believing that your mind can grow and change: all these things are very, very important, especially for people who don’t believe those things. But at the same time, there can be some negative sides to this. So one of the projects that I’m working on right now finds that people who believe that the mind can grow and change, and that we can all work on it, tend to actually blame other people more if they did something stupid. Because they believe that other people can also change.
Grace: Fascinating.
Woo-kyoung: Well, this is unpublished data, so that’s the data that I was still going to analyze.
Charles: This podcast is holding up the release of that data.
Woo-kyoung: So that’s kind of a new project that I’m looking into. Yes.
Grace: I’m sure it’s no coincidence that, working at Yale, you see the downsides of some behaviors that are otherwise viewed as excellent.
Woo-kyoung: Yeah. Yeah. I just tell my students, you know, the pain has to be enjoyable pain. Otherwise it’s not worth it. Yeah.
Charles: I think you say in the book, if you can’t breathe, stop. Yeah.
Woo-kyoung: Exactly. I have to remind myself all the time too. Breathe.
Grace: Yeah. So first off, we’ve talked a little bit about what Decision Education is, and Decision Education, amongst other things, includes recognizing and resisting cognitive biases and also things like intellectual humility. So it really connects a lot to the work that you’ve been doing.
So with that in mind, what impact on society do you think that there would be when the Alliance succeeds in its mission to get Decision Education into schools so that it’s part of every student’s learning experience?
Woo-kyoung: I think people might become happier. Not just the society, but the people themselves would be happier. Yeah. How miserable is it to believe that you have to be always right? Right? Just say, I can make a mistake. Just don’t take yourselves too seriously sometimes, right? We are all human beings and here are the errors you can make. And it’s not just others who make these mistakes, it’s you. You can also make the same mistakes. So be more respectful to other people. And have better relationships as a result.
“We are all human beings and . . . it’s not just others who make these mistakes, it’s you . . . so be more respectful to other people. And have better relationships as a result.” – Dr. Woo-kyoung Ahn
Grace: That’s wonderful. I love that answer. I was so curious when you said happier, like which way you would take it. But yeah, the idea that it allows us to fail in a safe way because we’re all playing the same game and we all have the same thinking errors that we make. And yeah, you can notice them in yourselves, which gives you intellectual humility, and notice them in others, which gives you a little bit more empathy for them. Yeah. That’s wonderful. Thank you for sharing that.
If you could pick one decision-making tool to pass down to the next generation of decision makers, what would it be?
Woo-kyoung: One decision tool is—always think about the other alternatives. So that would be basically directly related to the confirmation bias. You know, I’m just full of stories. So here’s my last story.
Grace: Please. I love them.
Woo-kyoung: And this was not in the book either. So during the lockdown and pandemic, I was teaching over Zoom in sweatpants and leggings. And the refrigerator was just downstairs, and there was a bed. I could always take a nap anytime. And I knew I was going to gain weight. So I was prepared. So I went on the treadmill and exercised three times a week. And then a year later, I had still gained 10 pounds. And immediately I thought exercise is totally useless, right? I should have gone on a diet instead. And then it took me 24 hours to realize that was an example of confirmation bias. I gained 10 pounds with the exercise, but if I had not done the exercise, I might have gained 20 pounds.
Grace: That’s fascinating.
Woo-kyoung: It’s always that kind of alternative that you have to think about before you convince yourself that you are right. So yeah, exercise might not have been useful, but I don’t have data about what would have happened during that time if I had not done the exercise. I could be still wrong about it.
Grace: Fascinating. That’s a great example. Because yeah, I can imagine being like, why did I spend all this time exercising when it didn’t work? But it did. It may have achieved what you wanted to an extent.
So the next question is about what book would you recommend, and we are going to be recommending Thinking 101 because I absolutely loved it. We’re going to recommend it to our work book club, I think. I’m already going to recommend it to many family members who wonder what is it that you do and why is it important? And I can say, this is the basics, this is why it’s really important and how it impacts our daily lives. So I guess other than your book, what book would you recommend as priority reading for listeners who are keen to improve their decision-making?
Woo-kyoung: So any book by Malcolm Gladwell. He just really writes it at the right level.
Paul Bloom used to be my colleague at Yale. Any of his books are just fun and great books to read. He recently has a book titled Psych, so that’s also a great book about human psychology.
Grace: Fantastic. Some great recommendations. This has been wonderful. Thank you so much for coming on the show. If listeners want to go online and learn more about your work or follow you on social media, where should they start?
Woo-kyoung: They can email me always. It’s just my name with the first name dot last name. So W-O-O dash K-Y-O-U-N-G dot A-H-N at Yale dot E-D-U. That’s it, and they can email me. It will be actually better if they read the book first!
Charles: The answer might be in the book.
Grace: Yeah. Don’t just email Woo-kyoung if you have a follow-up question, read the book first and if it’s not there . . .
Woo-kyoung: So that’s the best way to contact me. Yeah.
Grace: Great. I think you have a website as well.
Woo-kyoung: Yeah, I have a website. But the papers that are there are all technical, you know, with lots of statistics. No anecdotal reports.
Grace: Sounds good. The fun stuff is in the book. Got it.
Woo-kyoung: Yes, exactly.
Grace: Wonderful. So for any books, articles, or podcasts mentioned today, check out the show notes on the Alliance site, where you can also find a transcript of today’s conversation. Thank you so much for joining us, Woo-kyoung. It has been an absolute pleasure. Really good fun.
Charles: Thank you so much.
Woo-kyoung: Thank you so much for having me. This was a great conversation.
Published August 30, 2023