CLEARER THINKING

with Spencer Greenberg
the podcast about ideas that matter

Episode 213: Two things shape the course of your life: luck and your decisions (with Annie Duke)

June 6, 2024

Should people spend more time becoming better decision-makers? What are the main things that determine how our lives turn out? What's wrong with pro / con lists? When should we deviate from making decisions based on expected value calculations? What kinds of uncertainty might we encounter in the decision-making process? Are explicit decision calculations self-defeating? How similar is intuitive decision-making to decision-making that's based on calculations? How useful are heuristics? How can we know which decisions are significant enough to warrant calculations? What makes a decision hard? What's the omission / commission bias? What lessons can we learn from monkeys and pedestals? Should decision-making strategies be taught in primary and secondary schools?

Annie Duke is an author, speaker, and consultant in the decision-making space, as well as Special Partner focused on Decision Science at First Round Capital Partners, a seed stage venture fund. Her latest book, Quit: The Power of Knowing When to Walk Away, was released in 2022 from Portfolio, a Penguin Random House imprint. Her previous book, Thinking in Bets, is a national bestseller. As a former professional poker player, she has won more than $4 million in tournament poker, has won a World Series of Poker bracelet, and is the only woman to have won the World Series of Poker Tournament of Champions and the NBC National Poker Heads-Up Championship. She is the co-founder of The Alliance for Decision Education, a non-profit whose mission is to improve lives by empowering students through decision skills education. Connect with her on Facebook, Twitter, YouTube, LinkedIn or via her website, AnnieDuke.com; or subscribe to her newsletter on Substack.

JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you joined us today. In this episode, Spencer speaks with Annie Duke about good decision-making as the key to individual and societal success, short-term versus long-term decision strategies, and bringing decision-making curricula to primary and secondary school classrooms.

SPENCER: Annie, welcome.

ANNIE: Thank you for having me, Spencer.

SPENCER: So why is it important that people spend time becoming better decision-makers?

ANNIE: I guess, to me it's obvious, but what I've sort of discovered through my work is that it's not necessarily obvious. But the way that I would put it is that when we think about how our lives turn out, there are only two things that determine how our lives turn out. One is luck, and we have absolutely no control over luck. So you'll hear people say, "Oh, I make my own luck." But you can't actually make your own luck because luck is an exogenous force that just acts upon you. So there's luck; that's one thing that determines how your life turns out. And then the other is the quality of your decisions. And that's it. Those are the two things. So actually, if we go back to when people say, "I make my own luck," what they really mean is, "I make decisions that reduce the chances that luck is going to have a bad influence on the way that my life turns out." So for anybody who actually wants to improve the quality of their outcomes, improve the chances that they have a good life versus a bad life, the only thing that you should be focusing on is the quality of your decisions because that's the thing that you have some control over.

SPENCER: If you look up simple decision-making advice, one of the most common things you'll see is, "Oh, go make a pros-and-cons list." What do you think of that advice?

ANNIE: I personally hate pros-and-cons lists. I like the idea that it's asking people to actually think through what are the positives, what's the upside to a particular decision, what are the negatives or the downside. I suppose that that might be better than not thinking about it at all. But the problem with the pros-and-cons list is that it tends to amplify bias. We need to realize that when we start a decision-making process, we're not coming in neutral, no matter how much we'd like to believe that we're coming in neutral. And Spencer, I don't know, maybe you've felt this in your own decisions before, where you can kind of sense that when you're thinking about a decision, you're rooting for it to go one way or the other. So a good decision process is supposed to help us to see the decision more neutrally as opposed to helping us get to the desired decision that we already would like to have; this is where a pros-and-cons list can go kind of wrong. The first problem is that a pros-and-cons list is flat. And what I mean by flat is that it doesn't really have any way to distinguish the magnitude of the pro versus the magnitude of the con. So you're creating just a list of pros, and all of those pros are basically taking up the same space on the pro side. And you're making a list of cons, and all of those cons are taking up the same space. So as you're making that list, you could say, "I have a chance that I could die," and "I might get a rash," and those are taking up the same space because it's a flat list. And obviously, those two things shouldn't take up the same space on the con side. Nor should "I might die" and "I could have a really tasty burger," which would be a pro, take up the same space. So if it's a flat list, it's really hard to actually navigate the decision in a way that lets you understand what the weight of each of those pros and cons is supposed to be. And then, because we're losing that idea of magnitude, the list is very easily manipulated. And when I say manipulated, I don't necessarily mean that you, Spencer, would consciously be trying to create a list that was going to get you to where you wanted to go. But bias is unconscious, and so you will do that. So if you were making a pros-and-cons list for moving to Austin, Texas, and that was a place that you really wanted to move to, you could imagine expanding the pros side of things: just putting down a lot of pros and maybe reducing the cons, in the sense that you can always think about having a con that could be split out into five different things or maybe collapsed into one category. And vice versa, if you really didn't want to move to Austin, you might do the reverse. And then when you're looking at the pros-and-cons list, you're now going to get yourself to a decision that you want to get to. So, I'm just generally not a particular fan of them because I think that they amplify cognitive bias.

SPENCER: Do you have a way of tweaking them that you think addresses this, or do you think we should just throw out the whole concept?

ANNIE: You can definitely tweak it because if you actually run a good decision process, it's going to include something that's akin to pros and cons, which is just thinking about the upside and thinking about the downside. So the upside is the things that you can gain, and the downside is the costs that you might bear. So you can see that that maps relatively well onto pros and cons. But in order to run a good decision, you just have to go beyond that list. You have to look at, for any of the things that you could gain, what's the magnitude of the things that you could gain on average and what's the probability that you're actually gaining those things? And likewise, on the downside, what's the probability that you'll bear certain costs and what's the magnitude of those costs? So you always need to include some sort of probability: what's the chance that that's going to happen? That's not included in a pros-and-cons list, and it's another problem with it. What are the chances that the thing is going to happen, and how much is it going to help me gain ground toward my goal or lose ground toward my goal? Let's say that you were trying to decide whether to climb Mount Everest, and you were thinking about what the upside was. That would be things like (just as a simple example) accomplishing something that very few people in the world have accomplished, feeling really good about myself because of my grit, whatever those things are. And then we might have costs that we might bear, which might be serious injury or death. And if we decide to do that, as many people have, when you're entering into that decision, basically what you're saying is this downside outcome, which is death, has a low enough probability in comparison to the things that I'm going to gain that I feel that on average this is a good choice for me, that I'm going to get more out of it than I'm going to lose, because the probability of this very bad thing happening, which is death, is low enough to make it worthwhile to try to grab the stuff that's good. But those probabilities can change. So if you're now, for example, on summit day and a blizzard rolls in, now the probability of death has probably become too high. And it will no longer balance out whatever the good things are that you might gain in terms of the way that you feel about yourself or how you perceive that others might perceive you — that sense of accomplishment that would come from summiting. That's why we need to understand so clearly, not just, "Is this good or bad," but "How good or bad is it," and "What's the probability of this good or bad thing happening given the decision that I might make?" Because otherwise, we can't figure out whether it's worthwhile or not.

SPENCER: Sounds like you're describing doing an expected value calculation. Is that accurate?

ANNIE: That's exactly what I'm describing. Now obviously when you're thinking about expected value, there are special cases where you have to take into account risk. But mostly your decisions are going to be based on expected value. Where you're going to bring in risk has to do with things like bet sizing. Once you determine that something is positive expectancy, you're going to worry about bet sizing. And then also in these tail situations, which is, "Can I actually afford to lose?" So the Mount Everest example is a good example of that, where death is something that you can't recover from. So, we would be thinking about risk a little bit in that particular situation. But just as you said, expected value is exactly what you're thinking about, which is a weighted average. It's probability times payoff. And if we really want to dig into detail, it's, for any given outcome, what is the payoff, positive or negative, times the probability, and then we add those together. So just super simply, if you're flipping a coin and you're going to pay me $2 for heads and I have to pay you $1 for tails, I'm trying to think about whether I want to take that bet. Well, I know that when I win, I'm going to get $2, and that's going to happen 50% of the time. So if I multiply 50% times $2, that's the positive side: the upside is $1. But then when I lose, I'm going to have to give you $1. And that's going to happen 50% of the time. And 50% of $1 is 50 cents. So that's sort of the gross downside. And then I can just subtract 50 cents from $1. And what I find there is that my expected value is 50 cents for every dollar that I'm risking, which obviously is a bet that I should take all the time.
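
To make the weighted-average arithmetic explicit, here is a minimal Python sketch using the probabilities and payoffs from the coin-flip example above (the numbers come from the conversation; the code itself is just an illustration):

```python
# Expected value of the coin-flip bet described above:
# win $2 on heads (probability 0.5), lose $1 on tails (probability 0.5).
outcomes = [
    (0.5, +2.0),  # heads: I get paid $2
    (0.5, -1.0),  # tails: I pay out $1
]

expected_value = sum(p * payoff for p, payoff in outcomes)
print(expected_value)  # 0.5 -> on average I net +$0.50 per flip on a $1 risk
```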

SPENCER: One of the things that's so mathematically appealing about expected value is, for small gambles, you can prove that you really do best if you always do the expected-value-maximizing thing. But you kind of brought in this idea of risk. So how does that come in here? When should we actually deviate from the expected value?

ANNIE: It has a little bit to do with bet sizing, but really what it comes down to is risk of ruin. So what you need to think about as you're thinking about the expected value is, what is the spread of possible outcomes? You can have two situations that have identical expected value, but one is going to have more volatility or variance. Think about this: we can do the thing where you're going to pay me $2 or I'm going to give you $1. So notice now I have a chance that I could lose $1. So if we were only betting once, I might lose my dollar. Even though the expected value is 50 cents, I could lose $1, I could get $2. There's some volatility there. But what if you just paid me 50 cents every time we flipped? The expected value there is going to be identical. In both cases, my expected value on every flip is 50 cents, but one has zero risk, and into the other I've now added some risk. So when you're just paying me 50 cents, I don't have any risk. So there I don't ever have to worry about how much I am betting each time. But if it's two to one, then I have to worry about it because of that chance that I could lose that dollar. So a question that we want to ask ourselves is, can we basically withstand the volatility? So let's imagine that all the money that I had in the world was a thousand dollars; that's all that I had in the world. Even though you're going to pay me $2,000 for every $1,000 that I risk, would I bet my whole thousand dollars? And the answer is no, you shouldn't do that because there's volatility. So it's not just, what do I expect to win on this thousand dollars? It's also, is there a chance that I could lose it? And that's where we have to go beyond expectancy, because it has to do with risk of ruin. So there are different ways to handle that. The simplest way would be the Kelly criterion. Are you familiar with that?
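
As a rough illustration (my own construction, not from the episode), here is a small Python sketch of the two bets being compared: both average out to about $0.50 per flip, but only the two-to-one version has any variance, and therefore any risk of ruin if you stake too much of your bankroll on each flip:

```python
import random

def two_to_one_flip() -> float:
    """Win $2 on heads, lose $1 on tails: EV = +$0.50, but volatile."""
    return 2.0 if random.random() < 0.5 else -1.0

def sure_fifty_cents() -> float:
    """Get paid $0.50 on every flip: same EV, zero volatility."""
    return 0.5

def average_result(bet, n: int = 100_000) -> float:
    return sum(bet() for _ in range(n)) / n

print(average_result(two_to_one_flip))   # ~0.5, with swings between -1 and +2
print(average_result(sure_fifty_cents))  # exactly 0.5 every time
```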

SPENCER: I am. Yeah.

ANNIE: So the Kelly criterion is the simplest way to handle it, which is to bet what your edge is. So the way that we calculate edge is, on that two-to-one bet that we're talking about, I'm going to make 50 cents on every dollar, so my edge is 50%. So in that particular case, Kelly would tell you to bet half of what all of your money is. So if all the money I have is a thousand dollars, I would bet $500. And the idea there is that, on average, it's going to take me two flips to win. So, on average, I'm going to win every two flips. And so that would allow me to withstand the volatility. Now I will tell you, very few people bet what would be called full Kelly (which is what I just gave you), betting exactly what your edge is. And most people will do something like half Kelly or quarter Kelly, which gives you a margin of safety, because even with a coin, even though on average I'm going to win every two times, we can figure out the math pretty easily. So let's say that I'm going to win if it lands heads. If you flip the coin, it's going to land tails 50% of the time, and then the chance it lands tails twice in a row is 25%, in other words, 50% of 50%. And the chance that it lands tails three times in a row is 12.5%, and so on and so forth. So the fact is that a 25% chance of going broke is pretty high. So what a lot of people will do is, instead of betting full Kelly, they'll bet half Kelly, which would be to bet 25% of your bankroll. Some people will bet quarter Kelly, which would have you bet 12.5% of your bankroll. There are other ways to deal with risk. That would be kind of the simplest way to think about it.
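
For reference, the textbook Kelly formula is edge divided by odds: for a bet that pays b per dollar risked and wins with probability p, the fraction of bankroll to bet is (b*p - (1 - p)) / b. Here is a minimal Python sketch (my own, not from the episode); note that for the two-to-one coin this formula gives 25% of the bankroll rather than 50%, so the "bet your edge" description above is best read as a rough shorthand:

```python
def kelly_fraction(p: float, b: float) -> float:
    """Textbook Kelly criterion: bet the fraction (b*p - q) / b of your
    bankroll, where p is the win probability, q = 1 - p, and b is the
    payout per $1 risked."""
    q = 1.0 - p
    return (b * p - q) / b

full_kelly = kelly_fraction(p=0.5, b=2.0)
print(full_kelly)      # 0.25 -> 25% of the bankroll for the two-to-one coin
print(full_kelly / 2)  # half Kelly: 12.5%
print(full_kelly / 4)  # quarter Kelly: 6.25%
```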

SPENCER: So let me see if I can summarize this briefly for our listeners. If you're making lots of small bets and you know that you're going to get to stay in the game, like you're not going to lose so much that you're kicked out of the game, then you just want to maximize your expected value. That applies to gambling games, but in theory, it would apply to sort of any life decision. Do you agree with that so far?

ANNIE: I totally agree with that so far.

SPENCER: But then, if you're making larger bets or there's a lot of volatility in the outcomes, you could lose everything and then kind of be kicked out of the game. And this could happen in gambling, where you just lose all the money you have. Or in life, you die or something tragic happens where you're kind of kicked out of the game. And in these situations, we switch our analysis and we then think about the risk of ruin. Kelly is sort of maybe the simplest way to deal with this, but in practice, maybe it's a bit too aggressive. My understanding is that Kelly basically says maximize the long-term growth rate, whereas expected value says maximize the average amount on each bet. Is that right?

ANNIE: Yeah, that's right. — I'm going to carry you around to explain things behind me. It'd be very helpful for me. — One thing that I would add to what you said is that when we're making these small bets where there isn't a lot of risk of ruin, we can be less precise about what the expected value is. So one of the places that people will sometimes get hung up is that they'll have sort of figured out that the expected value is positive, but then they want to know exactly how positive it is. So one thing that we need to remember is that we were just talking about a coin where we know that it's going to land 50/50 and we're talking about a situation where I know what the payoffs are because I set them. You're going to give me $2, I'm going to give you $1. Too bad, Spencer [laughs], in the long run, you're going to lose all your money to me. I set it up that way. But in reality, we have to remember that uncertainty in our decisions isn't just coming from that probabilistic part, which is what we just talked about. You don't know for sure whether it's going to land heads or tails on the next flip. We just know it's going to happen 50% of the time that it will land one or the other. But we then have to go back a layer, which is that we often don't know what the probability of the different outcomes is for sure. We have some sense of what they are, but we don't know for sure. So things generally aren't very coin-like in that sense. And the other thing is that we're often making some guesses at what we actually think the gains or losses might be because we also don't know those for sure. So what will happen to people is that in those situations, they get very hung up in their decision-making. We've all heard "paralysis by analysis," where you can see these people sort of white-knuckling the decision because they don't know for sure exactly what those gains or losses are, or what the probabilities of those things are. Whether they're aware that that's the problem or not, they want to know for sure before they decide. But to your point, for the smaller decisions, once you sort of determine that it's positive expectancy, it's okay to just go ahead because those decisions are going to be lower stakes — the risk of ruin is very low. And when we're making lower-stakes decisions, we have a lot more tolerance for introducing error into the decision. Error, obviously, means increasing the probability you get an outcome that you don't like. But if there's not a lot of risk of ruin, if it's not going to have a big long-term impact, you'd prefer to spend your time on the decisions that matter more and just go faster on the smaller ones.

SPENCER: You mentioned two different types of uncertainty. I want to unpack that a bit more because I think it's a really interesting topic. So you've got a coin being flipped, and you know that there's a 50% chance it's going to land heads. That's one type of uncertainty. But then, you could have a situation where you have a coin, and you know it's weighted, you know it's not 50/50, but you're not sure what the probability is. That's a different type of uncertainty. And then you might even add a third type of uncertainty, which is, "Okay, I'm playing a game against someone. I think they're flipping a coin but I'm not even sure what the game is exactly that I'm involved in." And often real life maybe is like that, where it's not even that you have numbers and you're not sure what they are — like probabilities that you're just guessing — it's that you're not even sure you have the right model for the situation.

ANNIE: Yeah. We can break it down into two types of uncertainty in the academic sense. Aleatory is the first one, which is, "I know that the coin is going to land heads 50% of the time and tails 50% of the time, but I don't know which one it will be on the next flip." So aleatory is what we would think about as luck. That goes back to the beginning of our discussion. There's luck and there's the quality of decisions. Luck is an aleatory uncertainty. But then the other that we've now added in is called epistemic uncertainty, which is that there's just stuff that we don't know. There's some very small universe of stuff that we know; it's tiny. And there's a very large universe of things that we don't know. Of those things that we don't know, as you just pointed out, sometimes we know we don't know them, and other times we don't know that we don't know them. So that's a pretty broad universe. So, most of the decisions that we're making are at least partially behind the veil of ignorance, which just means that there's a whole bunch of stuff we don't know when we make decisions. Obviously that's hard. If we're in a world where we're only dealing with aleatory uncertainty, then we can make these types of calculations and be quite certain about what the expected value of any option that we have to choose is. But those situations, as you just pointed out, are rare. And normally, we have some epistemic uncertainty; there's just stuff that we don't know. And that makes the decision-making quite hard to navigate.
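
As a small illustration of the distinction (my own construction, not from the episode): with purely aleatory uncertainty the coin's bias is known and only the individual flips are random, while with epistemic uncertainty layered on top, the bias itself is unknown and has to be estimated, with error, from what we observe:

```python
import random

def flip(bias: float) -> bool:
    """One flip of a coin that lands heads with probability `bias`."""
    return random.random() < bias

# Aleatory uncertainty only: we know the coin is fair; we just don't know
# how the next flip will land.
fair_flips = [flip(0.5) for _ in range(10)]

# Epistemic + aleatory: the coin is weighted, but we don't know by how much.
# The true bias is hidden from the decision-maker, who can only estimate it.
true_bias = random.uniform(0.2, 0.8)              # unknown in real life
observed = [flip(true_bias) for _ in range(50)]
estimated_bias = sum(observed) / len(observed)    # best guess, with error
print(round(estimated_bias, 2))
```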

SPENCER: Some people argue that trying to do something like an expected value calculation, while you can prove kind of theoretically that that will be the right thing to do in certain kinds of situations, that when you get into real-life scenarios where there's so much uncertainty — there's even uncertainty about the uncertainty and there's unknown unknowns, etc. — that it may actually be self-defeating to try to do the calculation. So I'm wondering where you fall on that. Should we be trying to do calculations, or in real-life scenarios, is that often not actually desirable?

ANNIE: I actually have a very strong opinion on this. It doesn't mean that my opinion is correct, I just want to say that, but my opinion is strong. And it is that you should always be trying to do expected value calculations, at least in situations that matter. Again, for small decisions, I don't really care if you just wing it. But for anything that has any heft to it, that matters, you should do it. And the main reason why I think that is because you're doing it anyway. It's impossible to make a decision without some sort of forecast of the future. So if you're forecasting the future anyway, which is an expected value calculation, we ought to make that forecast explicit because then we can examine it like an object. Other people can look at our reasoning and maybe help to point out flaws or to give us some of the information that they may have that we don't have. And that will relieve some of the epistemic uncertainty. So, I just have this very strong opinion that if you're doing this implicitly anyway, then you ought to do it explicitly because it's going to help us along, not just in terms of the quality of the decision at the time, but particularly in terms of our look back to understand what our rationale was for making the decision in the first place. As we get new information from the world ex post, it's going to help us to close the feedback loop better, which is going to help us to make better decisions. So let me just explain what I mean by "you're doing it anyway." Let's take a super simple decision. You're looking at a restaurant menu. It's a restaurant you've never been to. You're looking at the descriptions of the dishes. Whatever dish you choose reflects a forecast of the expected value of each dish given what your values and your tastes are. You may value ordering the thing that's the most healthy. So whatever you order on that menu, you're gonna say has the highest expected value in relation to that value, which is "I want food that's healthy." Maybe you want the food that is the tastiest, regardless of health. Then whatever you choose, given that you want something that's really tasty and given that Spencer has a different idea of what's tasty than Annie does, you're choosing the dish with the highest expected value, at least what your best guess of that expected value is for you. And maybe you want the thing that is the healthiest but also the tastiest. Then we're going to have the intersection of those two sets, and that's what you're going to choose to order off the menu. And that's true for big decisions, too: what job are you going to take; frankly, who are you going to marry; where are you going to live? These are all expected value calculations. It's just that, kind of to your point, because we sense that they're hard and that we can't come up with a right answer in the "two plus two equals four" sense, we feel that therefore we ought to just give up. But when you're making the decision, you're doing it regardless. So you should be doing it explicitly.

SPENCER: When you say that we're doing it regardless, I wonder how similar our intuitive decision-making is to an expected value calculation. In other words, if we could somehow monitor what's going on in our brains when we kind of just go with our gut, is it really the expected value? Or is it doing something else? Is it doing some kind of pattern matching or recalling the nearest example and making a decision based on that?

ANNIE: Our gut is definitely doing pattern matching and recalling the most recent example, and it's going wrong a lot. So if we knew objectively what the highest expected value option was, our gut is not always going to choose that option. In fact, often, depending on the type of decision you're making, it rarely does. But that doesn't mean that it's not doing an expected value calculation; it's just that it's doing a poor one. So when you're choosing an option, you're choosing the thing that you think you're going to like the best or that's going to turn out the best for you in relation to whatever your goals are. So you're basically choosing the thing that you think, in the best of worlds, is going to help you to gain the most ground toward achieving your goal. Sometimes we're in a situation where we have to choose the option that's going to cause us to lose the least ground, which stinks, but that happens. But regardless, that's the definition of what an expected value calculation is. So I don't want to confuse doing a good expected value calculation with what your gut is doing, which is a bad, inaccurate expected value calculation; but that doesn't mean it's not an expected value calculation at all. What's happening is that your gut is under the influence of a lot more cognitive biases that cause us to make bad forecasts of the future, so when we're thinking about what's going to cause us to gain the most ground toward the goal that we have, we're going to have a lot of error in that calculation if we're leaving it implicit.

[promo]

SPENCER: I wonder how you feel about heuristics for decisions. Take, for example, a situation where a friend reaches out to me and asks if I can help them with something important. It feels to me like what my brain does there is not so much an expected value calculation but rather just, "I'm the sort of person that aspires to always help my friends when they need it, so unless there's some emergency or something horrible going on, I'm just going to say yes." It's like I'm applying some kind of default heuristic rather than an expected value calculation. Do you think that there's room for decision-making heuristics that we rely on in certain situations instead of trying to do expected value, even when it's an important thing?

ANNIE: Okay, I'm going to sound like a broken record. You just stated that the way that you want to think about yourself is as someone who helps friends. So when your friend calls you, you may have a heuristic which helps you to make the decision faster, but that heuristic is in reference to expected value, which is that you want to do things that help you to gain the most ground toward this goal that you have. In this case, you've stated what it is: "I want to perceive myself as someone who helps my friends."

SPENCER: Well, sorry, let me rephrase. It's not just how I want to perceive myself, I want to actually be that sort of person who helps my friends.

ANNIE: Fair, fair. I think about things in terms of what's your self-narrative. So yes, you want to be the person who helps your friends. So that heuristic that you're applying allows you to take a shortcut toward achieving that goal. It's still expected value nonetheless; you're just taking a shortcut to the expected value calculation. So you can sort of think about it as not going longhand. Let's think about heuristics. Heuristics are actually really good things. They are rules of thumb that help us speed up our decision-making. We use heuristics every single day. Our visual system uses a lot of shortcuts, uses a lot of heuristics. One of the ways that it does that, that I think people are familiar with, is distance cues. So let's say I'm standing really close to a person. That person is going to cast a very large image on my retina. Because that person is really up close to me, there's going to be this very big image on my retina. In fact, that image will be bigger than if I'm looking at an elephant very far in the distance. So the elephant is going to cast a smaller image on the back of my retina than the person who's way close to me. Yet we perceive the elephant to be bigger than the person who's standing in front of us, even though the image that the elephant is casting is smaller. Why is it that we do that? Well, because we have cues for distance. There are all sorts of things that have to do with perspective that allow us to have a quick and dirty calculation of how far someone or something is away from us. Now, that's not going to be a perfectly accurate calculation, but it's going to be kind of good enough for us to figure out that the elephant is bigger than the person who's right close to us. That's a heuristic. It's a heuristic that our visual system uses, and we use it all the time. Now, our cognitive system also uses heuristics, shortcuts. You actually mentioned one of them, which is called the availability bias or the recency bias. They are two different biases; I could interpret what you said either way. Recency is that if something has happened recently, I think that it's more likely to happen in the future. Availability bias would be: the easier something is to recall, the more vivid it is in my memory, the more frequently I'll think it occurs. That's a shortcut; it's a heuristic. And like distance cues, it works pretty well most of the time. So, imagine that you're a hominid and you're evolving in a very, very small social group in a very, very small territory. Things that you come across a lot, which means that you're going to be able to recall them more easily from memory because there are just more examples of them in your memory (you've seen them more often), you are going to judge to be more frequent in your territory. And that's actually going to get you to the right answer most of the time. Here's where we have a problem with heuristics: because they are rules of thumb, because they're shortcuts, they can break down in edge cases. With the visual system, for example, if I take away distance cues, I will be a very bad judge of how big an item is in comparison to another thing. A classic example of that would be what's called the moon illusion: if the moon is straight above me in the sky, it looks smaller than if it's on the horizon. They think a big reason for that is that the moon, no matter what, is casting the same size image on the back of your retina, but it's a pretty small one because the moon is very far away.
When it's on the horizon, I have lots and lots and lots of distance cues that tell me, "Holy crap, that thing must be really big because it's really, really, really far away from me." But when you're looking straight up into the sky, you lose those distance cues, so it tricks your brain into thinking that the moon is smaller when it's up in the sky with no distance cues than when it's on the horizon. We've all seen a lot of visual illusions where we're basically tricking our visual system. What we're really doing is tricking those shortcuts or tricking those heuristics, and that's true for cognitive shortcuts as well. So with the availability bias, for example, nowadays we live in a global world. So, we hear about a lot of things that we actually wouldn't encounter ourselves, like shark attacks, for example. We hear lots and lots and lots about shark attacks. And so people will judge shark attacks to be much more frequent than they actually are. People don't actually die from sharks very much. In fact, people die more often from coconuts than sharks. But we don't think that, because the heuristics that serve us in a lot of cases, and certainly in the cases that we were evolving in, don't necessarily work when we start to get some of these edge cases. And that's where we introduce a lot of errors into decision-making.

SPENCER: Nice. That's a really good explanation.

ANNIE: Thanks.

SPENCER: One thing you mentioned briefly earlier is that not all decisions are worth investing time in. How do you think about which ones you should really go deep on versus which ones you should just kind of go through quickly and do what seems best?

ANNIE: That's such a good question. So, all decision-making involves two parts: sorting and picking. Sorting is the heavy lifting of decision-making, which is sorting things into options that are good versus options that are bad. Picking is the process of picking among the options that are good. So, we can think about looking at a restaurant menu. It's a perfect example of sorting and picking. I'm sure that you've experienced this yourself, Spencer. You look at the menu, and what do you do? You sort it. These are things that look like I might like them, and these are things that I know I wouldn't like. So now you've sorted the menu into good options and bad options. And then from there, it's a picking process of the things that I like. Which thing do I want to pick? So, we have to understand that before we get into, "What can we spend a lot of time on, or what can't we?" Because the thing that determines that is what is your bar for sorting something into a good enough option. We're going to have a different bar for that depending on how much tolerance we have for introducing error into the decision.

SPENCER: Are you saying, though, that sorting is most of the time in decision-making?

ANNIE: Not necessarily, because it depends on what the bar is.

SPENCER: I see.

ANNIE: So if you're trying to hire a CFO, you should spend a lot of time on the sorting because sorting is going to get you to options that you feel are good enough. But if you're trying to find an intern, you probably shouldn't spend a lot of time on the sorting. You should sort pretty quickly into good enough because the bar should be relatively low for that. And then, you are going into the picking process, and the picking process is always fast. So, the sorting process is sometimes fast and sometimes slow, depending on the decision itself: what type of decision you're making, whether it's high stakes or low stakes. The picking process should always be fast.

SPENCER: Even with something like picking whether to leave your partner or stay with them, wouldn't that be a slow picking process?

ANNIE: Well, no, because it's still fast because, by definition, once you get to picking, you either have one option that's a winner — that's an easy pick — or two options that are identical.

SPENCER: Okay, so maybe I'm not fully understanding the full process of sorting then. Because I just sort of thought sorting was getting down to a smaller set of options, but it sounds like it's more than that. Could you unpack a little bit more what sorting involves?

ANNIE: Sorting is getting down to options where if you chose them, you would be fine with it.

SPENCER: Ah, I see. I see. So all the options are ones you'd be fine with at that point.

ANNIE: Right. When looking at the menu, for me, I eliminate anything with eggplant in it because I happen not to like it. I'm also vegan, so I can sort out all the options that have animal products in them. Now I have some set of options that satisfy my requirements. And from there, it's just picking. This might help, Spencer. You can think about this as that for any option under your consideration, you can ask yourself, "If this were the only option I had available, would I be okay with it?" And if the answer is yes, you're sorting it into an option that you would be okay picking. So when we're looking at the menu, you're trying to choose among three entrees. If you ask yourself, "If this were the only entree that I had, if I walked in and this is what they served me, would I be okay with it?" The answer is yes. For the second one, the answer is yes. For the third one, the answer is yes. So the sorting process is then done because all the options satisfy whatever the criteria are for something you'd be willing to order off the menu.

SPENCER: So how do we link this into whether decisions are worth spending time on or worth just being quick?

ANNIE: The way we link that is sometimes the bar for something being a good enough option is going to be quite high, and sometimes it's going to be quite low. And what's going to determine that are two things: impact and optionality. Let's start with impact. Impact is, how bad is it going to be if I get an outcome that I don't like? And we need to add into that, not this second, but in the long run. Because when we get a bad outcome in the moment, we process that much more intensely than what the actual long-term impact of having a bad outcome is. I'll go through this with you, Spencer. So you're at the restaurant, you have three things that you like, you end up picking one of them, and you got it. Let's say you're trying to choose between a chicken dish and a fish dish, and you end up choosing the chicken dish. We're eating a meal together, and you have the chicken, and it's yucky. You really don't like it. You're very sad. You're super sad that you ordered the chicken because it's like dry, really bad chicken. Okay, so you've gotten a bad outcome from ordering the chicken. So my question to you is, if I catch up with you a year later and I say, "Hey, Spencer, how's your year been? Has it been good or bad?"

SPENCER: The first thing I'm going to say is that the chicken I had a year ago is horrible.

ANNIE: That's exactly right. That's my question for you. In answering that question of, how happy have you been during the last year, how much does that one meal we had together where the chicken was yucky matter?

SPENCER: Right. So basically, by putting it in the context of the next year, you kind of see this is really insignificant.

ANNIE: Right. So then we can just shorten the time span. So if I see you in a month, does the bad chicken matter to your happiness over the course of a month?

SPENCER: I hope not. It must be some pretty bad chicken.

ANNIE: And then what about if I see you a week later, after you've had 21 more meals?

SPENCER: Yeah, probably not even in a week.

ANNIE: And then let me ask you this, Spencer. I'm not saying you necessarily do this, but have you ever been to a restaurant with someone who literally just can't choose?

SPENCER: Yeah, at least for five minutes.

ANNIE: They're like asking everybody what they're going to have. They're calling over the waitstaff like, "What do you recommend?" They're looking on Yelp so they can see all the pictures of the dishes. Now do you see what a colossal waste of time that is?

SPENCER: Yeah, absolutely. There's a story that comes to mind for me, which is I was at a bookstore back in the day with a lawyer friend who made some ridiculous hourly rate. And she had this $30 gift certificate that someone had given her. And she spent 90 minutes of really unpleasant time trying to figure out what to spend it on. And at some point I was like, "You realize that given the value of your time, you have lost so much money in the process of trying to figure out how to use this gift certificate, and you've also stressed yourself out in the process. There's no way this was worth it. You would have been better off just buying the first thing that you saw that seemed at all appealing."

ANNIE: Yeah, absolutely. So let's think about what's going on there with your friend or the person you have gone to the restaurant with or whatever. Here we have a very low impact decision. We've already figured that out. So why is it taking them so long to choose? It goes back to what we talked about, which is that there are two forms of uncertainty. There's luck. So, the chef could have a bad day, whoever was cooking the chicken could have just mistimed it or something, and then the chicken's like super overcooked or whatever. Those are things that you literally have no control over. And then there's just stuff that you don't know. You haven't been to the restaurant before, so you don't actually know whether the chicken is prepared in a way that you like on the best of days. If the chicken is prepared like the best chicken that they've ever prepared, you actually just don't know if you're going to like it. So when we're trying to choose in this particular case, particularly in a case where you're going to get the feedback really quickly — the food's going to come, you're going to taste it, you're going to get the answer right away — we are so loss averse. Meaning, we're so worried about that downside outcome, that we're trying to choose among these three options (remember we're picking now) but we're still trying to sort when we should be picking. In other words, we're trying to somehow distinguish between three options on a menu that we have never had, where there's no possibility we can guarantee what the outcome is going to be. But somehow, we want to be able to know for sure. And so, what then ends up happening (what happened to your friend with the gift certificate) is that we have an illusion that we could somehow know for sure what the difference between the options is, which is going to be small. And we can't, because we don't have the cognitive acuity for that. And then, we forget that it's not worth our time anyway, that that time could be spent doing something that's going to get us a lot more gain somewhere else. Instead of moaning over the menu for 15 minutes, I can talk to my friend Spencer. And that's actually, long-term, going to make me happier. But we get so caught up in wanting to parse out these small differences. But the point is that once something satisfies the only-option test, it means that the options are similar enough that it's stupid to try to parse them out. So one of the big decision-making ideas that I really, really love is to keep in mind that when a decision is really hard, that actually means it's easy. And what I mean by that is that, generally, what makes a decision hard is that we have more than one option that seems really good to us, and we're trying to choose between the two of them. But that means it's actually really easy, because the fact that we have two options that we're having difficulty choosing between means that the options are so similar that you can just flip a coin. And once you sort of have that unlocked, it really makes a difference to how you approach those types of decisions.

SPENCER: That's a really fascinating point and I think often overlooked. I suspect that people are afraid on some level that, "Yes, right now it seems like they're equally good, but that's just because I haven't thought about it for an additional five hours or I haven't gotten that critical piece of information that tells me which is better." And then that leads to that kind of obsessive loop.

ANNIE: Yeah, exactly. We just have this worry. Well, think about it, Spencer. You order the bad chicken. You get the chicken. What's the first thing you say? "I made a mistake." But that's the wrong use of the word mistake. What you really mean is, "Man, I wish I were omniscient and I had a time machine. Then I would have known this chicken was going to be bad, and I would have chosen something different." But you're not omniscient and you don't have a time machine.

SPENCER: This is like a good segue into the idea of resulting. What's resulting?

ANNIE: Well, exactly right. So the reason why you want to spend that extra five hours or that extra 10 hours or whatever your friend was doing in the bookstore is because when we get a bad result, we forget that there was an influence of luck on the outcome. And we tie the quality of the result back to the quality of the decision. So when I'm looking at the menu, I only know what I know. I know I'm a vegan, so I need something without animal products in it. I know I hate eggplant. I know I really love broccoli or cauliflower or whatever. And oh, they have a cauliflower steak, and that looks pretty good and I ordered it, and it turns out that it's terrible. So maybe they didn't put on the menu that there was eggplant all over it. I don't know, but it turns out to be bad. That doesn't mean that my decision was bad. I was going off the information that I had. I like cauliflower. I've had cauliflower steaks in the past. That's really good. The description of the sauce seemed great. But when I got it, it was terrible. How could you say that my decision was bad? I don't know. This is like one of the biggest problems in decision-making that really hangs us up, I think, in two ways. One is it causes this paralysis in our decision-making because we're so afraid of getting a bad result and then thinking that we made a mistake, which is really sort of classic loss aversion from Daniel Kahneman. But then the other thing is that when we're trying to close the feedback loops, when we're trying to figure out, "What types of decisions should we make in the future?" It's going to screw up our forecast. Because if we're tying the quality of the outcome to the quality of the decision too tightly, a couple of things are going to happen. The first is that there are going to be decisions where we get bad outcomes, where we think we made a mistake in the decision, but we didn't. We just got a bad outcome. Maybe the outcome was going to occur 5% of the time, and generally speaking, that means we're going to observe it 5% of the time. I don't want to believe that I made a bad decision because I got a 5% outcome, particularly if I foresaw it. Particularly if it was included in my decision-making. So that's one thing: we're going to think that decisions were bad that weren't. The other thing — and this is really bad — is we're going to think that decisions were good that weren't. In other words, we're going to assign good decisions to our own skill when actually we just got a lucky outcome. So there's a really good example of that that someone recently told me. There was a company that was a startup and it had a suite of products. One of the suites of products was gummy multivitamins. They had just launched this particular product, which happened to be very heavy in vitamin D. So they launched it and sales are kind of going, and they're doing stuff to try to market it and whatever. As this happens, as they're launching the product, the pandemic hits. Obviously, the pandemic is out of their control. It's a matter of luck. But one thing that people start to think during the pandemic is that you should be macro-dosing vitamin D because somehow this is going to help you protect yourself from COVID. So, their sales go through the roof because people are really taking a lot of vitamin D. Now, of course, they attributed this to, "We have a great product. We're marketing this really well. We're super geniuses," because they're getting this great outcome. 
And they're tying it back to the quality of their decision-making and the quality of their product. So, they actually end up discontinuing other product lines in order to really focus on this product, because their forecast is that this is going to continue, because they're not thinking really deeply about what the exogenous factors are on the outcome that they happen to be observing. So of course, once the pandemic becomes less acute and people get vaccinated and people start going back out to their lives and nobody's in lockdown anymore, guess what happens to the sales of that product? They plummet. So the thing that we need to remember is that resulting is not a small mistake. It's a very big mistake because it impacts how you allocate your resources in the future. Because how you allocate your resources, which is a decision, is made according to your forecast of which of the different allocation options has the highest expected value. So it really matters.

[promo]

SPENCER: I've heard that in some poker circles, when they're discussing what they should have done in different hands, they ban discussion of how it turned out for them.

ANNIE: Well, for smart poker players, they do, for sure. Not everybody [laughs], not everybody. But yeah, when I would describe a poker hand, I would stop at each point. It's actually not even that I wouldn't discuss the final outcome of the hand. I wouldn't even discuss the outcome that occurred on that particular street or betting round. So if I were describing a hand to you, I might say, "So this person was in the one seat. (I'm just identifying positionally where people are sitting.) They had this many chips. (I'll tell you some things about the person, like if they play a lot of hands or if they're very aggressive or those kinds of things. So I'll give you some information about them.) They raised, I looked down. I had Ace-Queen. What do you think I should have done?" Now, obviously, Spencer, I'm telling you something that happened in the past, so I know what I did. But what I did is an outcome that I don't want you to know about. So, notice that what I'm doing is I'm putting you into my situation at the time that I had to make the decision. In other words, as we think about the two forms of uncertainty, what do I know? I don't know what his cards are. So if I'm going to describe the hand to you, I want you to tell me in the same epistemic state that I was in, where you don't know what the cards are. The other thing is that I don't know what he's going to do in response. I don't know what anybody after me is going to do in response either. That's not in my control. So when I ask your opinion, I want you to be in that state as well. And so I have to not tell you that because otherwise, if I tell you any of that stuff, you're going to naturally fit the narrative to what actually ended up happening in the hand. And that means I'm not going to get a quality opinion from you.

SPENCER: One thing I find really interesting about this perspective is it's really clear how it avoids bias and how it helps us think better. But people might say, "Well, in a lot of real-world situations, doesn't the outcome teach you something about whether it was a good decision?" Because a lot of times, it's not that easy to tell just from the facts about the decision whether it was good, and the outcome kind of provides evidence of whether it was good or not. What do you think about that?

ANNIE: What you're really getting at is that figuring out whether a decision was good or not is quite difficult, so doesn't it make sense, then, that the outcome tells you something? And the answer is, only under certain circumstances. The outcome certainly tells you more and more and more the more skill there is in what you're doing. In other words, the greater the influence of skill on the outcome, the more that the outcome itself (one outcome) is going to give you some good information. As an example, if we're playing chess and you beat me at chess, and we go talk to a friend of ours, and we say, "Hey, Spencer and I just played a game of chess, and Spencer won." And we ask them who was the better player. Now, they didn't watch the game. They don't know any of the moves you made or any of the moves I made. They can actually work backwards from the outcome and say that you were the better player. And the reason for that is that the outcome of the game of chess that we play is so influenced by skill. There was very little luck.

SPENCER: I'm pretty sure that there's almost no dose of LSD high enough that I could beat Magnus Carlsen at chess.

ANNIE: Right, exactly, exactly. So, there's some luck involved. If you were playing Magnus Carlsen, he could get food poisoning and be unable to complete the game. So there's some luck, but the skill gap is going to be pretty high. And so, in that particular case, the outcome is actually quite informative. The other way that outcomes can be quite informative is in the long run. So if I'm playing poker and I win a single hand of poker, it doesn't really tell you very much about what the quality of my decision-making in that single hand was. But if I play enough poker, the outcome in the aggregate over time will start to tell you something about how good I am at poker. In our coin-flipping example, if we flip enough coins, we can start to understand, separate from the math and solely from the outcomes, whether it was a good decision for me to take this bet from you. But we don't know after one flip. So if we flip once and someone doesn't know what the terms of the bet are and I come out ahead, they actually don't know if I made a good decision yet. We have to flip the coin many times for them to actually know whether Spencer or Annie made the better decision by betting. That's actually much more what life's decisions look like. Let me give you an example, maybe, of this mistake. So in the 2015 Super Bowl, the Seattle Seahawks — Pete Carroll was the coach — were playing the New England Patriots. They get down to the last 26 seconds of the game, and the Seahawks are trailing by four points. Obviously, that means they need to score a touchdown in order to win. It's the last 26 seconds of the game, as I said, and the Seahawks are on the one-yard line of the Patriots, so they have to move the ball one yard in order to score. Now, there's an issue, which is that while the Seahawks have three downs to do this, because it's second down — so they've got second, third, and fourth down to try to move the ball across the one-yard line — they have a clock management problem. With 26 seconds left, obviously that's not a lot of time to try to get three plays off, and Pete Carroll happens to only have one timeout here. This is a very, very famous play in which everybody assumed that Pete Carroll was going to call for Russell Wilson, the quarterback, to hand the ball off to Marshawn Lynch, who was a great running back, and that Marshawn Lynch would then plow through the Patriots' defense and score. That's kind of what everybody thought the play was going to be. And instead, Pete Carroll called for a pass play. The ball was intercepted by Malcolm Butler, and the game ended. When I say that Pete Carroll was vilified for this, when I say that there were headlines that actually used the word "idiot" in them, when I tell you that not only was the general consensus that it was the worst play call in Super Bowl history, but USA Today actually said worst play call in NFL history, probably, intuitively, that kind of feels right to you. Like, "Look at this idiot guy. Obviously, he was supposed to hand the ball to Marshawn Lynch. He called the pass play; it was intercepted. He lost the game. He deserved all of those headlines." Kind of what you're saying here is that in these situations where it's hard to know, doesn't the outcome give us some information? And the answer is, some — it always gives you some because, in the aggregate, it's going to help you — but not enough. 
Because once we walk through that decision, what we find out is that it actually wasn't a bad decision. And we can start in a simple place. It's completely math-free. And I'll ask you this, Spencer. Imagine that Pete Carroll called the exact same play. So he calls a pass play, and the ball is caught for the game-winning touchdown. What do you think the headlines look like the next day?

SPENCER: They probably say, "Oh, he did the unexpected thing, and it was genius."

ANNIE: Right. So, now we're getting into the problem with that idea of how much information the outcome is actually holding. Because in this particular case, it's only one iteration. And what we can see is that our view of the quality of the decision is so incredibly different depending on whether it's the game-winning touchdown or whether it's intercepted. And in fact, if it's just incomplete, nobody's saying that's the worst play call in Super Bowl history. It's only under this very specific circumstance where the ball is intercepted that we're going to say it's the worst play call in Super Bowl history. So incomplete, it's like, "Whatever, nobody really cares." If it's completed for the touchdown, which is a great outcome, people think, "He's a genius, and he's going to the Hall of Fame, and he's better than Belichick." So here we can already start to see a problem.

But then we can actually walk through some simple math. It sounds complicated, but once you sort of understand it, the thought process is simple. So, you have three downs but only 26 seconds to get three plays off, and you only have one timeout. And as you know, in football, if you run the ball, and the runner doesn't score, and they don't get out of bounds, the clock keeps running. And if the clock keeps running, you're gonna have to use your timeout. And then if you hand the ball off again, and have it run, that's your last play. But when you pass the ball and it's incomplete, the clock stops. So by passing the ball on either of the first two of those downs, you can actually get two running plays, plus a pass play. Whereas if you just run the ball, you only get two plays. So the question is, "Alright, against the Patriots' defense, I think we prefer to get three plays instead of two. In order to do that, we need to run a pass play. So what does it cost us? What's the cost of getting the third play?" And the cost of getting the third play is going to be the interception rate. How often are you going to observe this interception that we actually happen to observe? And the answer is somewhere around 1% of the time. At which point, it's like, how can you possibly think that was a bad play? You have a 1% chance of an interception, and what you're getting is three chances against the best defense in the league instead of two. But how are you supposed to know that from the outcome itself? You would have to watch that play get run over and over and over again. And then you could start to observe how incredibly rare the interception is. And in fact, if we could actually live in a simulation, what we could do is run it 10,000 times where you call the pass play first, and 10,000 times where you just hand it off to Marshawn Lynch twice, and see which one has the higher win probability, which the pass-first plan would. But we don't get 10,000 runs, and then we end up making this mistake, which is we think that the one interception we actually observed is what matters.
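To make the 10,000-run thought experiment concrete, here is a minimal Monte Carlo sketch in Python. The 50% run-success rate and the roughly 1% interception rate are the figures from the conversation; the pass-touchdown rate and the simplified two-plays-versus-three-plays clock logic are assumptions made purely for illustration.

import random

# Illustrative placeholders, not real NFL statistics.
P_RUN_TD = 0.50     # a handoff from the one-yard line scores about half the time
P_INTERCEPT = 0.01  # the pass is intercepted roughly 1% of the time
P_PASS_TD = 0.40    # assumed chance the pass is caught for the touchdown

def run_run() -> bool:
    """Hand off twice; the clock and the single timeout limit you to two plays."""
    return any(random.random() < P_RUN_TD for _ in range(2))

def pass_then_run_run() -> bool:
    """Pass first: an interception ends the game, an incompletion stops the
    clock and leaves two running plays."""
    r = random.random()
    if r < P_INTERCEPT:
        return False              # intercepted, game over
    if r < P_INTERCEPT + P_PASS_TD:
        return True               # caught for the game-winning touchdown
    return run_run()              # incomplete, two running plays still available

def win_rate(strategy, trials=10_000) -> float:
    return sum(strategy() for _ in range(trials)) / trials

print("run-run win rate:   ", win_rate(run_run))
print("pass-first win rate:", win_rate(pass_then_run_run))

With these made-up numbers the pass-first plan wins more often; shifting the assumed pass-touchdown rate moves that gap around, which is exactly why a single observed interception says so little about which call was better.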

SPENCER: Sounds like in 99% of worlds that play would have either been considered good or at least unremarked upon because it would just be followed up by a different play. Whereas he happened to be in the 1% world where it seems like a horrible decision. And it's also interesting because his choice was being pitted against an "obvious right choice." Whereas, if there hadn't been an obvious right choice, then maybe he wouldn't be so maligned for it.

ANNIE: Yeah. That's a really interesting point. I've mentioned loss aversion already, which is a concept from Daniel Kahneman and Amos Tversky, which basically says that we process losses more intensely than we process wins of equal size. So you can think of it this way: winning $100 at blackjack feels about as good to you as losing $50 feels bad to you. What happens is that we tend to become loss averse, meaning that we're making choices that are trying to avoid that bad feeling of losing. Okay, so we've already sort of set up that idea, but now we can set up another idea, which is that we have a bias toward the status quo, we like to keep things as they are, and combined with that, we have something called the omission-commission bias. So what's omission-commission bias? Well, omission we can think of as not deciding, and commission is committing an act. The simplest way to think about it is that if you're walking along the street and you observe somebody being attacked, and you didn't do anything about it, that would be an omission. But if you're attacking somebody, that's a commission. Now, it turns out that if I stick with the status quo, I think of it as an omission. So if I have a job and I just keep going to the job, I think about it as not making a decision. So if I'm pondering quitting, for example, and I happen not to quit, I'm omitting to quit. So, I sort of feel like I'm not really making a decision. Whereas if I quit and I switch to another job, that's a commission. It turns out, if we tie this back to loss aversion, we're much more tolerant of losses that result from the status quo than of losses that result from a commission, in other words, from veering in some active way from the status quo. So the simplest example of this would be the trolley problem. The trolley is going along a particular track. If it continues along that track, there are five workers on the track, and those workers will die. But there's a side track: if you pull a lever, you can divert the trolley onto it. And on that track, there's one worker, and obviously that worker will die. But instead of five people dying, there's going to be one person who dies. And the question is, should you pull that lever? And a lot of people won't pull the lever. And it has to do with this omission-commission problem and the way that we think about losses. So, we're going to feel much more intensely the loss that comes from the commission of pulling the lever and killing the one person than we are from just allowing the trolley to go along its way, even though more people are going to die. That's how strong this bias is.

So let's bring it back to Pete Carroll. The status quo — as you just said, there's a preferred option — is you hand it off to the running back. That's the status quo. So now, Spencer, I'm going to ask you for another thought experiment that's going to come back to this omission-commission bias. Let's imagine that Pete Carroll has Russell Wilson hand the ball off to Marshawn Lynch, and Marshawn Lynch doesn't score, which is going to happen 50% of the time. So then he calls his timeout. And again, just like everybody wants him to, just as the status quo dictates, he has Russell Wilson hand the ball off to Marshawn Lynch again to try to get through the Patriots' line, and they fail to score. What do you think the headlines look like the next day?

SPENCER: Terrible luck?

ANNIE: "Patriots' defense held." Does that sound right?

SPENCER: Yeah.

ANNIE: "Clutch: the Patriots' defense in a clutch moment." Do any of those headlines say Pete Carroll's an idiot? No. Do they say it was the worst play call in Super Bowl history?

SPENCER: Surely not.

ANNIE: Right. That's where we can see this omission-commission bias coming in with loss aversion. The way that we, as observers, are thinking about the intensity of that loss is different depending on whether we're sticking with the status quo or veering from it. So when we veer from the status quo, we feel those losses much more intensely. We're going to be more loss averse. In other words, we're going to worry more about the downside before we're willing to switch. So now we can think about the history of the NFL. Why did it take so long for people to adopt what was clear math, clear expectancy, around going for it on fourth down? Well, because if you went for it on fourth down when that wasn't the status quo, you as a coach were gonna get fired because of loss aversion.

I've talked to a lot of people who are thinking about quitting their job, and it's super clear that they're really miserable in their work, and you say to them, "Well, why aren't you quitting?" And they say, "Well, because what if I hate the new job too?" There you can see that sticking with the job is just an omission. It's just sticking with the status quo. And I'm willing to tolerate a near certainty that I'm going to continue to be unhappy rather than risk switching to something new, where I could possibly be unhappy, but it's going to feel so much worse because it was a switch. The way to get people out of that actually is to circle back to the beginning of the conversation, which is expected value. I've actually done this with people before where I've said, "Imagine it's a year from now, you're still in your current job. What's the probability you're happy?" And they'll say 0% because they know they don't like it. Okay. So let's imagine you go out and get a new job. What's the probability you're gonna be happy in that job in a year? And they'll usually say something like 50-50, because that's what people answer when they don't know, but fine. And I say, "Well, is a 50% chance of happiness greater than zero?" And they just look at you like, "What? Oh, yes, I didn't think about it that way." And then they're willing to make the switch.
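To put rough numbers on loss aversion, here is a minimal sketch of the Kahneman-Tversky value function in Python. The parameter values are commonly cited estimates from their later work, and the dollar amounts echo the blackjack example; nothing here is quoted from the episode, it is only an illustration.

ALPHA = 0.88    # diminishing sensitivity for gains
BETA = 0.88     # diminishing sensitivity for losses
LAMBDA = 2.25   # loss-aversion coefficient: losses loom roughly twice as large

def subjective_value(x: float) -> float:
    """Felt value of a gain or loss of size x (prospect-theory value function)."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * (-x) ** BETA

# The blackjack intuition: a $100 win and a $50 loss land in the same ballpark.
print(subjective_value(100))   # about 57.5
print(subjective_value(-50))   # about -70.4

# The job question is just a plain expected-value comparison that cuts through it.
p_happy_if_stay, p_happy_if_switch = 0.0, 0.5
print(p_happy_if_switch > p_happy_if_stay)   # True: a 50% chance beats zero

Because a loss is weighted at roughly twice a same-sized gain, the downside of an active switch (a commission) feels disproportionately heavy, which is why the bare "is 50% greater than zero?" question is often what finally gets people to move.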

SPENCER: That's a great story. I think it really illustrates it nicely. Before we wrap up, there are actually two other topics I wanted to talk to you about. One is the California High-Speed Rail and what we can learn from that because I think it's such a cool story. Can you tell us that?

ANNIE: Sure. So there's been a little theme sort of hidden in what we've been talking about, which is quitting. So one of the biases that we have when we think about shortcuts, heuristics, those kinds of things is that as adults — I'm not talking about children here, who need someone to help them develop some grit — we tend to be too gritty. And we actually don't like to quit things. Now, part of the reason, which we can relate back to what we were just talking about, has to do with status quo bias and omission-commission bias. If we process the losses from switching as greater and more intense than the losses from sticking with something, that's going to stop us from quitting things. So, I'm going to be less likely to quit my job because I'm worried about the new job not working out. So we have this broad issue, which is that we tend to quit things too late, and several biases feed it. Omission-commission bias is one. Status quo bias is another. And there's a third, called the sunk cost fallacy, which is that we tend to take into account what we've already put into something when trying to decide whether we should continue to put more resources into it. So in the simplest sense, let's imagine we're looking at a stock that's trading at 40, and we decide that it's not a buy. So we wouldn't buy it at 40. That's us coming to the decision fresh. If you already own the stock, if you bought the stock at 50 and it's now trading at 40, you'll have a very strong tendency to hold the stock, even when, if you were just looking at the stock fresh, you would not buy it today. And the reason that you're holding it has to do with the sunk cost fallacy and that feeling of "I want to get my money back." If I sell it now, I can't get my money back. And that's true for time and effort and attention. So someone won't quit their job. Why? "I've already put so much into it, so much time and effort. And I've learned the ropes and all these things." But those are resources that have already been spent. And what should matter is, knowing what you know today, would you start this today? And if the answer is no, then you shouldn't continue just because you've already put something into it.

This brings us around to the California bullet train, which is a good example of these problems of quitting. So for those who aren't familiar, the California bullet train is a project in California, obviously, which is meant to connect San Francisco and Silicon Valley to the north, and Los Angeles and San Diego to the south, with high-speed rail. The kind of high-speed rail that we see in Japan and Europe and whatnot. This was proposed mainly because the Central Valley, which is the part of California that lies between San Francisco and LA, is economically depressed. And the big economic powerhouses of the state are obviously LA and San Diego, and Silicon Valley and San Francisco. So they want people in the Central Valley to be able to participate in that prosperity by connecting them to those areas in a way that's pretty fast, obviously, because it's high-speed rail. And then the other thing is that both of those areas have very congested housing markets. The idea is that if you can allow people to commute pretty easily from the Central Valley to those areas, that will relieve the housing market. That all sounded pretty reasonable. They floated a $9 billion bond in 2010 against a total budget of $33 billion.
The voters in California approved the bond in order to build this rail, and construction got underway. They also approved a track between Madera and Fresno. Just so you get a sense of where that is, that's right in the Central Valley, just about dead center between LA and San Francisco. They break ground in 2015. 2018 rolls around, and the engineers all of a sudden say, "Oh, we have a problem." So this is eight years into the project. At this point, they've spent $7.2 billion. And they say, "Oh, we have a problem. There are two mountain ranges." One is the Diablo Range, to the south of San Francisco. It's quite rugged and big, and that's why it's actually really hard to get from San Francisco to the south. And then the other is the Tehachapi Range, which is to the north of LA, separating LA from Bakersfield, also quite difficult to get across, which is part of the reason why people in Bakersfield don't work in LA. Okay, so somehow, these mountain ranges that have existed for millions of years all of a sudden came to their attention in 2018, at which point they revised the budget to $88 billion. But they say there's a lot of uncertainty around that because they actually don't know if they can safely blast through the mountain ranges in a seismically active area. So, that seems pretty ridiculous.

In 2019, having figured out that the revised but very uncertain budget is $88 billion, it goes to Governor Newsom, who now has a choice. He can quit the project. He can say, "Look, you said this would cost $33 billion. You actually said at the time that it was going to be completed in 2020. That seems unlikely since you've only built one section of track. And you said that it would be operating and we'd start to generate an operating surplus that would self-fund the rest of the line. All of that seems unlikely, plus the budget has now more than doubled. Hey, maybe we should either stop the project, or we could do something like an engineering feasibility study to even see if it's worth continuing." But again, because we don't like to quit things because of sunk costs, because no politician wants to say, "We're stopping even though we've quote-unquote wasted $7.2 billion in taxpayer money," Newsom says, "We're gonna keep going with the project." But the part of the project that he approves has nothing to do with the mountains. He approves two more sections of track to be built, one between San Francisco and Silicon Valley, which is not traversing any mountains, obviously, and the other between Bakersfield and Merced, which is to the north of that mountain range. So again, it's on flatland. So that's kind of where the state of play is. The budget has now been revised to well over $115 billion. And nobody's saying, "Hey, we should stop. It had an original budget of $33 billion. The budget has now quadrupled. Why are we continuing?" But once you've put something into it, nobody wants to stop, even though it's pretty clear that nobody would start this project today. And that's really why we have to be really aware of these types of biases, and particularly this bias against stopping things, because otherwise, we end up mired in these projects.
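The "knowing what you know today, would you start this today?" test from the stock example can be written down as a one-line rule. The prices below are the ones from the conversation; the idea that you carry your own fresh estimate of what the stock is worth is an assumption added for illustration.

def should_hold(price_today: float, fresh_value_estimate: float) -> bool:
    """The fresh-eyes rule: hold only if you would buy at today's price.
    Note what is deliberately not an argument: what you originally paid."""
    return fresh_value_estimate > price_today

# Bought at 50, now trading at 40, and a fresh look says it's not a buy:
print(should_hold(price_today=40, fresh_value_estimate=35))   # False, so sell

The same rule applies to the bullet train: the $7.2 billion already spent does not appear anywhere in the function.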

SPENCER: And how does that relate to monkeys juggling fire?

ANNIE: That's a good question, Spencer. Alright, so Astro Teller, who is the CEO at X, which is Google's in-house innovation hub — he's actually called the Captain of Moonshots — really thinks very deeply about quitting. And the reason is that, as I just said, X is doing moonshots. These are highly uncertain projects that will have a very large payoff if they work, but the probability that they work is very small. So, recognizing how mired you can get in these types of projects, like the California bullet train, he's obsessed with trying to make sure that you quit as quickly as possible. In other words, as soon as you get the signals that something is not actually going to turn out on average the way that you had hoped, you really need to stop. And what he knows is that we have the intuition that when we get those signals, when the budget has quadrupled, when someone tells us that we can't safely blast through the mountains, we will actually stop the project. We think that at the beginning, but that intuition is actually wrong. As we can see with the California bullet train, and as I'm sure people are thinking about in their own lives, with relationships they've stayed in too long, or jobs they've stayed in too long, or projects they've stuck with too long, looking back, you can always see that the signals were there, but you were very good at ignoring them. So he's obsessed with this idea of how we actually make sure that we quit as soon as possible once we have seen signals that tell us we ought to walk away, because he knows we're going to be bad at it.

So, he has a mental model, which I think is really useful, called monkeys and pedestals, and it goes like this. Imagine that you've decided you're going to quit your job, Spencer. And the way you're going to make money is by training a monkey to juggle flaming torches while standing on a pedestal in Times Square. And if you did that and you put your hat out, obviously people would throw a lot of money in your hat because that would be pretty cool to see a monkey juggling flaming torches. There are two pieces of that act. The first is the monkey juggling the flaming torches, and the second is building the pedestal. So there are two questions: can you build the pedestal, and can you actually train this monkey to juggle the flaming torches? Astro Teller's insight is such a simple one: if you're going to do that, you need to start by seeing if you can train the monkey. You should never start by seeing if you can build the pedestal. And there are a few reasons for that. The first is that there's no point in having the pedestal if you can't get the monkey to juggle the flaming torches, because the monkey juggling the flaming torches here is both the bigger unknown and the bottleneck. If your act is to get a monkey to juggle flaming torches, that's the bottleneck. You have to figure out if you can solve that. Otherwise, there's literally no point in having the pedestal. So why would you build it? That's kind of insight number one. Insight number two is that if you build the pedestal, it's going to give you the feeling that you've made progress when you actually haven't. And the reason you haven't made any progress is that you already know you can build it. The thing you don't know, the big unknown, is the monkey. But the pedestal, you already know you can build. So you should never start with something where you're just creating the illusion of progress because you already know you can do it.
And then the third insight is that if you build the pedestal, which may be unnecessary and is creating the illusion of progress, the fact that you've built that pedestal is going to make it harder for you to quit when you figure out that you can't train the monkey or that the monkey training is much more difficult than you thought it was. Why wouldn't you be able to quit? Well, because of sunk costs. "I put so much time and effort into building this beautiful pedestal. And if I abandon this now, what will people think of me? And I'll have wasted all of that time and effort and all the money that I put into the pedestal, and I don't want to do that." And so you won't quit because you want to "protect" or not waste the pedestal that you've built or the resources that you've put into building that pedestal. That's actually the really dangerous piece of it. Let's think about this in terms of the California bullet train. They started off by building a section of track between Madera and Fresno. That's on flatland. We already know that we can build a track on flatland; we've been doing it forever. They've certainly done it successfully in Europe. And so by building there, you're actually not really making any progress. You're certainly not solving the bottleneck, which is can you get through those mountains. Now, having already done that, and having spent $7.2 billion building this pedestal, when they were faced with a monkey that might actually not be solvable, might be intractable, they did not stop. That's what Astro Teller realizes. They didn't stop. Instead, they just went and approved building two more pedestals: Bakersfield to Merced, San Francisco to Silicon Valley, all on flatland, instead of actually trying to figure out if the monkey is intractable. Now, if they had applied monkeys and pedestals to the problem in the first place, they would have said, "Well, what are the monkeys here?" And there's an obvious monkey, the Diablo Range, and another obvious monkey, the Tehachapi mountains. And clearly, you would have started with that part of the problem, which would have meant you would have said, "Hey, whoa, Nelly, let's not do anything until we actually do an engineering feasibility study on those mountains." But what we can see is that while it seems so obvious when you learn monkeys and pedestals that you ought to go monkeys first, in practice, we really don't do it. California didn't do it because they wanted to start making progress right away. But we also don't do it in our own project planning because what does everybody do when they're planning a project? They all say the same thing: what's the low-hanging fruit? The low-hanging fruit by definition is a pedestal. You should never attack any low-hanging fruit until you've gone to the top of the tree first, until you know that that top of the tree is available to you. But we all want those easy wins and what we don't realize is that those easy wins create this accumulation of debris that makes it really hard to walk away. And what we'll do instead is we'll shift our thesis. We'll get that thesis creep or mission creep that allows us to keep going, even though we're not even doing the thing that we first intended to do.

SPENCER: I love that story. It's so great. Final question for you before we wrap up. Tell us about the Alliance for Decision Education and what your theory of change is with that.

ANNIE: Oh, my gosh, thank you so much. So the Alliance for Decision Education is a foundation, an organization that I co-founded along with Eric Brooks. The idea here is this: going way back to the beginning of our conversation, there are only two things that determine how someone's life turns out: the quality of their decisions and luck. And again, you can't do anything about luck. And luck includes things like the circumstances of your birth. I was a lot better off being born when I was born than being born in 1650. Clearly, I don't have control over that bit. But I have a lot of control over my decisions. And so the idea behind the Alliance is that we spend a lot of time in K through 12 teaching people to memorize facts, teaching people things like trigonometry, which is not actually a particularly helpful form of mathematics unless you're specifically going to be an engineer of some sort, or you're going to sail. And even then, we have computers that can calculate things like cosine and sine for you. But we don't really teach kids in K through 12 how to make good decisions. We don't teach them what a decision is. We don't teach them about agency. We don't do much to teach them to think probabilistically. Statistics and probability are usually only taught as an elective, and only then somewhere around 12th grade. And for most people, and I guess I would ask this of you, did you have a class in K through 12 where someone was actually teaching you how to make decisions? Like, what is a decision? How would you construct a good decision? Was that happening for you in school?

SPENCER: Yeah, I don't think we had a class that ever directly addressed that. At most, it would be very, very indirect.

ANNIE: What we feel is that we should be teaching kids decision-making because it's the whole thing. Better decisions are going to lead to better individual lives. The better decisions I make, the more likely I am to have good outcomes in my life. And better lives are going to lead to a better society. So if we want to think about how we create a better society, how we improve individuals' outcomes, teaching better decision-making, we really strongly believe, is the road to doing that. And so, we are trying to create an educational movement that is going to bring decision education into every K through 12 classroom in this country, and hopefully, worldwide. We really want world domination here. We have our sights set on at least USA domination at the moment because we think it's really going to matter. That's the thing that's really going to matter. And we should be teaching this instead of trigonometry. You can take trigonometry as the elective later. We should be teaching statistics and probability starting in kindergarten. We should be teaching kids what a decision is and how you might think about making a decision. Every kid should come out of high school knowing what expected value is, being able to think probabilistically, being able to understand what their own values are. How do I think about what I value? What are my preferences? What are my goals? How would I consider different options that would allow me to actually achieve those goals? What kinds of decisions are going to cause me to lose ground toward those goals? How do I balance what I want in the short term with what's good for me in the long run? These are all really crucial questions to becoming a better decision-maker, and we just don't help kids with them at all. So that's really what we're trying to do with the organization. We're happy to say that we have a stellar advisory board as well as a stellar board. We have three Nobel laureates on our academic advisory board, including Daniel Kahneman and Richard Thaler. We have a fabulous board, including retired Admiral Jan Tighe; Andrew Berry, who's the GM of the Browns; and Simon Hallett, who's a giant in the investment world and also owns Plymouth Argyle, a football club in England. There are more people to name, but we've got a lot of really great people behind us who really believe in this mission and are trying to help us.

SPENCER: Annie, thank you so much for coming on.

ANNIE: Well, thank you so much for having me.

[outro]

JOSH: A listener asks: "What is your take on hypnosis and whether or not it's a good tool for mental health and personal development?"

SPENCER: So I'm far from an expert on hypnosis. One thing I would say is that there's stage hypnosis, and then there's hypnosis you would do with a hypnotist in a one-on-one setting, and I think they're quite different. Stage hypnosis, as far as I've been able to figure out, seems to really be about techniques for selecting people who will be compliant. So there are a lot of methods, which I think they use quite successfully, to pick the right audience members to come up on stage and behave the ways they're asked to behave. And second, there's a way of putting people into a comfortable state that allows them to do things that normally they might be really embarrassed or really anxious to do, but they're able to do them more comfortably — which is quite a skill, but it's not exactly the skill that it purports to be, I think, right? It's not like someone really is hypnotized and thinks they're a rabbit or something. It's like someone is made so at ease by the hypnotist that they're able to act like a rabbit on stage and not be humiliated and not be visibly anxious to the audience, right? Which is itself quite impressive. But also choosing the right people to do that is part of the trick. Then there's hypnosis which is done one-on-one, and I think there are some pretty impressive results there around pain, where some studies suggest that hypnosis may help people deal with pain, like during a surgery. There are some quite remarkable examples where people do it instead of anesthesia, and they claim, at least, that they experience far less pain, and so I think that's really interesting. I think, insofar as it's real, one way to think about it is that it's putting your mind into a different state that you're not used to getting into, maybe in some ways akin to a meditative state, and when you're in a somewhat different mental state, certain things may become easier that might be more difficult normally. And so if you think about it that way, it's not so super mysterious. It's like, yeah, your brain can be in many different states. There are some states where maybe certain things are easier and certain things are harder, and you're more aware of some things and less aware of other things, and so that's kind of the frame I have on it.
