Q1: We received an entire series of questions for Danny about Artificial Intelligence and its potential to deliver unbiased, noise-free decisions. At his suggestion, we are sharing an excerpt from a presentation he gave at the University of Toronto on the economics of artificial intelligence. We hope this not only answers your many questions but also gives some nuance to his views.
Danny: One of the recurrent issues is whether AI can eventually do whatever people can do. Will there be anything that is reserved for human beings? Frankly, I don’t see any reason to set limits on what AI can do.
We have in our heads a wonderful computer. It’s made of meat, but it is a computer — it does parallel processing and is extraordinarily efficient. But there is no magic there. And so, it’s very difficult to imagine that with sufficient data, there will remain things that only humans can do.
We like to think that humans are unique because they can evaluate. Some call it judgment, but it’s really evaluation — basically the utility side of the decision function. I really don’t see why that should be reserved to humans. On the contrary, I’d like to make the following argument: the main characteristic of people is that they’re very noisy. You show them the same stimulus twice, they don’t give you the same response twice . . . because there is so much variability in people’s choices given the same stimuli.
The reason that we see so many limitations (with AI), I think, is that this field is really at the very beginning. The idea of deep learning is old, but the development took off 11 years ago. You have to imagine what it might be like in 50 years. The one thing that I find extraordinarily surprising and interesting in what is happening in AI these days is that it is happening faster than was expected. People were saying that it would take 10 years for AI to beat Go, and the interesting thing is, it took eight months! The sheer speed at which the field is developing and accelerating is, I think, very remarkable. So, setting limits is certainly premature.
Go is a highly complex game in which players take turns placing stones on a 19×19 grid, competing to take control of the most territory. It is considered to be one of the world’s most complex games, and is much more challenging for computers than chess.
Most of the errors that people make are better viewed as random noise, and there’s an awful lot of it. One implication is obvious: In domains where people are willing to cede authority, it will be advantageous to replace humans with algorithms, and this is really happening. Even when the algorithms don’t do very well, humans do so poorly and are so noisy that just by removing the noise you can do better than people. And the other (implication) is: when you can’t do it, you try to have humans simulate the algorithm. By enforcing regularity and processes and discipline on judgment and on choice, you reduce the noise and improve performance because noise is so poisonous.
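Danny's claim that "even when the algorithms don't do very well," removing the noise can still beat human judges, can be checked with a toy simulation. Everything below is invented for illustration: a "human" judge who is right on average but very noisy, and a crude rule that is biased but perfectly consistent.

```python
import random

random.seed(0)

# Invented setup: each case has a true score. The "human" tracks the
# truth but adds heavy random noise; the "rule" is biased but noiseless.
cases = [random.gauss(0, 1) for _ in range(10_000)]

def human(x):
    return x + random.gauss(0, 1.5)   # unbiased, but very noisy

def crude_rule(x):
    return 0.5 * x                    # systematically too timid, zero noise

def mean_squared_error(judge):
    return sum((judge(x) - x) ** 2 for x in cases) / len(cases)

# The noiseless rule wins despite its bias: roughly 0.25 vs roughly 2.25.
print(mean_squared_error(human), mean_squared_error(crude_rule))
```

The human's error is pure noise (variance around 2.25 here), while the rule's error comes only from its bias (around 0.25); under these invented numbers, removing the noise more than pays for the bias.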
Do humans always prefer emotional contact with other humans? That strikes me as probably wrong. It is extremely easy to develop stimuli to which people will respond emotionally. A face, an expressive face, a face that changes expressions, especially if it’s sort of baby-shaped — there are cues that will make people feel very emotional. Robots will have these cues. Furthermore, it is already the case that AI reads faces better than people can, and it will undoubtedly be able to predict emotions, and how they develop, far better than people can. I really can imagine that one of the major uses of robots will be taking care of the old, because I can imagine that many old people will prefer to be taken care of by friendly robots that have a name, that have a personality, that are always pleasant. In quite a few situations they will be happier with that option than with being taken care of by their children.
How would the robot be different from individuals? I propose three main differences. One is obvious: the robot will be much better at statistical reasoning and less enamored with stories and narratives than people are. The second is that the robot will have much higher emotional intelligence. And the third is that the robot will be wiser. Wisdom is breadth. Wisdom is not having a narrow view; that’s the essence of wisdom. It’s broad framing, and a robot will be endowed with broad framing. I really do not see why, once it has learned enough, it would not be wiser than us, because we don’t have broad framing. We’re narrow thinkers, we’re noisy thinkers. It’s very easy to improve upon us, and I don’t think there is very much that we can do that computers will not eventually be programmed to do.
Q2: In business, and in so many areas of our lives, self-confidence drives success. And yet, Dr. Kahneman talks about self-confidence as something to watch out for – a bias that can work against us. How do we strike a balance?
Danny: Optimism is the engine of capitalism. Overconfidence is both a curse and a blessing. The people who do great things, if you look back, the highly successful people have been overconfident and optimistic. Overconfident optimists take big risks because they underestimate how big the risks are. But, if you look at everyone, overconfident optimism is not justified, because there are far more failures than successes.
Q3: Do noise and the idea of internal inconsistency undercut what we tend to think of as expertise?
Danny: It’s very difficult to imagine from the psychological analysis of what expertise is, that you can develop true expertise in, say, predicting the stock market. You cannot, because the world isn’t sufficiently regular for people to learn rules. How could one learn when there’s nothing to learn?
Q4: How do we reduce noise in our own judgments?
Danny: The first thing we need to do is acknowledge that noise is real. We need to recognize and accept that it is part of human judgment and something that we can work to reduce. In the book, we talk about decision hygiene – methods that we all can adopt to help improve decisions.
In an organization, you can collect independent decisions, then aggregate them. This dramatically increases “quiet” (that is, it reduces noise) and fairness.
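The statistical logic behind aggregating independent judgments is that averaging n independent, equally noisy estimates shrinks the noise by a factor of about the square root of n. A minimal sketch, with an invented true value and noise level:

```python
import random
import statistics

random.seed(1)

TRUE_VALUE = 100.0   # the quantity being judged (invented)
NOISE_SD = 20.0      # spread of any single judgment (invented)

def judge():
    # One independent, noisy judgment.
    return random.gauss(TRUE_VALUE, NOISE_SD)

def committee(n):
    # Collect n independent judgments first, then aggregate by averaging.
    return statistics.mean(judge() for _ in range(n))

def error_spread(estimator, trials=5_000):
    # Standard deviation of the estimator's error across many trials.
    return statistics.pstdev([estimator() - TRUE_VALUE for _ in range(trials)])

solo = error_spread(judge)                  # about 20
team = error_spread(lambda: committee(9))   # about 20 / 3
print(solo, team)
```

Averaging nine judges cuts the noise roughly threefold. The key word is independent: judges who confer before judging are correlated, and the benefit shrinks.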
As individuals, we can ask ourselves, “How might I make this decision on a different day?” Or we can consult with others, ask what they would do, and then aggregate the results.
One of the main ideas of decision hygiene concerns how to deal with complex problems. Probably the most important suggestion is to break a problem into units and deal with each unit independently. And — this is fundamental — delay your intuitive global evaluation until the end. Don’t dispense with your intuition; delay it. That way, it does not influence the outcome before you have considered all the parts.
If you do not apply these methods, you run the risk of being influenced without knowing it by all kinds of forces that introduce some randomness in your decisions.
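One way to read that advice is as a procedure: score each component of a problem on its own, and only compute the global evaluation once every part has been scored. A hypothetical sketch; the attributes and scores are invented:

```python
# Hypothetical structured evaluation: each dimension is rated
# independently, and the global judgment is delayed until the end.
ATTRIBUTES = ("technical skill", "communication", "reliability")

def evaluate(ratings):
    """Combine independent per-attribute scores into a global score."""
    missing = [a for a in ATTRIBUTES if a not in ratings]
    if missing:
        # Refuse to form a global view before all parts are considered.
        raise ValueError(f"score these units independently first: {missing}")
    return sum(ratings[a] for a in ATTRIBUTES) / len(ATTRIBUTES)

print(evaluate({"technical skill": 8, "communication": 6, "reliability": 9}))
```

The point of the guard clause is the hygiene itself: the function will not produce an overall number until every unit has been judged on its own.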
Q5: What strategies would you suggest for improving decision-making in finance and in life?
Danny: I would offer four strategies.

1) Use algorithms whenever possible. There are very few examples of people outperforming algorithms in making predictive judgments. We have the idea that it is very complicated to design an algorithm, but an algorithm is just a set of rules. Also, train yourself and your teams to approach problems in a way that imposes uniformity.

2) Frame broadly. The single best advice we have on framing is broad framing: see the decision as a member of a class of decisions that you’ll probably have to take.

3) Consider regret. It is probably the greatest enemy of good decision-making in personal finance. Assess how prone you or your clients are to it: the more potential for regret, the more likely you or they are to churn the account, sell at the wrong time, and buy when prices are high. High-net-worth individuals often have a limit on the amount of money they are willing to risk losing, so try to gauge just how loss averse they are.

4) Seek out guidance. The best person to advise you on your finances is someone who likes you but does not care about your feelings.
Q6: How do we strike the right balance between gathering information or generating alternatives, and analysis paralysis?
Annie: The challenge for any decision maker is that you want to accomplish two things at once: you don’t want to waste too much time, and you don’t want to sacrifice too much accuracy. The key to achieving the right time-accuracy balance is figuring out the penalty for making a lower-quality decision than you would have made with more time. How much leeway is there to sacrifice accuracy for speed? The smaller the penalty and the smaller the impact of a poor outcome, the faster you can go; the bigger they are, the more time you should take.
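Annie's trade-off can be framed as a small expected-cost comparison. All the probabilities and costs below are invented for illustration; the point is only how the penalty flips the answer:

```python
def expected_cost(time_cost, error_prob, penalty):
    # Total cost = cost of the time spent + chance of a poor
    # outcome times the penalty for that outcome.
    return time_cost + error_prob * penalty

def fast(penalty):
    # Quick decision: cheap in time, higher chance of a poor outcome.
    return expected_cost(time_cost=1.0, error_prob=0.30, penalty=penalty)

def slow(penalty):
    # Deliberate decision: costly in time, lower chance of a poor outcome.
    return expected_cost(time_cost=10.0, error_prob=0.05, penalty=penalty)

# Small penalty: deciding fast is cheaper (4.0 vs 10.5).
print(fast(10.0), slow(10.0))
# Large penalty: the extra deliberation pays for itself (31.0 vs 15.0).
print(fast(100.0), slow(100.0))
```

With a small penalty the time saved dominates, so going fast wins; once the penalty grows, the slower, more accurate process becomes the cheaper option overall.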
Q7: As employees, leaders, and parents, we want to sound confident in our recommendations and decisions, but Annie, you recommend saying: “I’m not sure.” Why?
Annie: Well, because you are not sure. We are afraid to say that we are guessing when we make judgments, but any time there is uncertainty, there is hidden information, and you cannot possibly be sure how things will turn out. To say “I know for sure,” “I know this is right or wrong,” “This is how it’s going to turn out,” or “I know this is the right decision” – none of those is an accurate representation of the world, the state of your knowledge, or your ability to guarantee anything.
The more accurate your representation of the world, the better your decisions will be as a result. That does not mean that I shift all the way to saying, you should not even try because you are not sure. In fact, just the opposite.
We have imperfect information; the information that goes into any decision we make is incomplete, but it is also not zero. Our job as decision makers should be singular and focused: how educated can we make our guess? The more educated the guess, the more accurate our representation of the world will be, and the closer we get to what is objectively true of the world, the better our decisions will be. We can do that with anything. When we are deciding, we want to think about two things: What do I already know that can inform this decision? And what can I find out? That really unlocks a lot in decision-making, as we build better models of what is objectively true of the world.
Q8: What is the benefit of teaching young people decision-making skills and dispositions?
Joe: Our decisions are the single most consequential way that we can improve or undermine our own lives and the lives of those we care about. Decision Education equips students with essential, life-long skills needed to make better decisions. Drawing on decades of research, Decision Education teaches students to see decisions as opportunities, manage themselves and decision situations, follow a structured decision-making process, resist cognitive biases, apply probabilistic thinking, and make and manage good habits. The quality of our decision-making can be improved through specific learning and reflective practice. The opportunity is to teach students at a time and in a structure that impacts them most – as part of their educational process.
Decision Education brings together a series of skills in an organized and systematic way that teachers and students can embrace, learn, and apply with powerful results.
No matter a student’s interests, goals, or values, improving their decision-making has a vital, cumulative, and positive effect that is a clear way to help them reach their full potential. Better decisions lead to better lives and a better society. Teaching students Decision Education empowers them to pursue their very best possible life.