October 23, 2025
What does rationality mean when life won’t fit a spreadsheet? If models demand one common scale, what happens to values that can’t be compared? Are we optimizing choices, or narrowing them to what’s easy to count? When do toy problems stop teaching us about real ones? Can preferences be “mapped” if the act of asking reshapes them? When is precision a disguise for guesswork? What standard should judge error when the world is fuzzy by design? If we want better decisions, should we start by choosing better frames? How do fast intuitions and slow reflection share the work when stakes are high? When should we pause because the first answer felt too easy? How can diverse perspectives expose what one mind won’t see? How do we weigh the uncountable without pretending it’s all commensurate? What does a life well chosen look like beyond being error-free?
Barry Schwartz is an emeritus professor of psychology at Swarthmore College and a visiting professor at the Haas School of Business at Berkeley. He has spent fifty years thinking and writing about the interaction between economics, psychology, and morality. He has written several books that address aspects of this interaction, including The Battle for Human Nature, The Costs of Living, The Paradox of Choice, Practical Wisdom (with Kenneth Sharpe), Why We Work, and most recently, Choose Wisely (with Richard Schuldenfrei). Schwartz has spoken four times at the TED conference, and his TED talks have been viewed by more than 25 million people.
SPENCER: Barry, welcome.
BARRY: It's a pleasure to be with you. I'm looking forward to it.
SPENCER: Now, our listeners have a lot of interest in rationality and biases, but something they may not realize is that in order to study biases, you have to have a model of what a rational actor would do. Do we have a good model of what a rational actor would do?
BARRY: If you take out the word good, the answer is yes.
SPENCER: But the word "good" seems pretty important.
BARRY: The word "good" is very important. In fact, half a century of research has been done on the mistakes we make: errors, heuristics, biases. The research behind Daniel Kahneman's hugely successful book won him a Nobel Prize; Richard Thaler's Nobel Prize, too, is all for documenting stupid human tricks, as David Letterman might call them. All of it is predicated on the view that we know what it means to do it right, and the model for doing it right is what economists call rational choice theory. It's very particular. It's very specific. All of these things that are called errors are errors because they deviate from rational choice theory in one way or another.
SPENCER: Could you give us an example where a model is made of what a rational actor would do?
BARRY: Yes, imagine trying to decide where to eat dinner. Suppose there are three restaurants that you're considering. Create a little Excel spreadsheet and create the columns for restaurants A, B, and C, and then below that, the things you care about: quality of the food, the atmosphere, the service, as many things as you like. Below that, how important are each of these things? You might think that the quality of the food is most important and the atmosphere is substantially less important, and so on. Then you have to assign each restaurant an evaluation. How good is this restaurant, let's say, on a 10-point scale? How you come up with that number is something of a mystery, but you come up with a number. This is a seven, this is an eight, this is a nine when it comes to the quality of the food. Finally, you've got to ask, how likely is it that if I go to this restaurant, the food will be as good as I think it is? That is to say, we live in an uncertain world, and there are no sure things. You've now created a spreadsheet with the attributes that matter, an evaluation of how good each alternative is with respect to each attribute, and an assessment of how likely it is that you'll get what you're hoping for. Then you just push a button, and Excel does the math, and out comes the answer to the question, what restaurant should I eat at?
SPENCER: So essentially, that would be doing an expected value calculation.
BARRY: In effect, an expected value calculation, except that it's not really expected value; it's expected utility, since it's a subjective quantity that you're trying to maximize, whereas, if you're making investments, you might be trying to maximize expected value: how much money will I have at the end of three years with one investment strategy or another?
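To make the arithmetic concrete, here is a minimal sketch of the spreadsheet Barry describes, in Python rather than Excel. Everything in it is an invented assumption for illustration: the attributes, the importance weights, the 10-point ratings, and the probabilities.

```python
# Expected-utility version of the restaurant spreadsheet.
# For each restaurant: sum over attributes of
#   (importance weight) x (rating) x (probability the rating holds).

weights = {"food": 0.6, "atmosphere": 0.25, "service": 0.15}  # importance of each attribute

# ratings[restaurant][attribute] = (rating on a 10-point scale,
#                                   probability it lives up to that rating)
ratings = {
    "A": {"food": (9, 0.80), "atmosphere": (6, 0.90), "service": (7, 0.90)},
    "B": {"food": (8, 0.90), "atmosphere": (8, 0.80), "service": (6, 0.90)},
    "C": {"food": (7, 0.95), "atmosphere": (7, 0.90), "service": (9, 0.85)},
}

def expected_utility(restaurant: str) -> float:
    return sum(w * ratings[restaurant][attr][0] * ratings[restaurant][attr][1]
               for attr, w in weights.items())

for r in ratings:
    print(r, round(expected_utility(r), 2))
print("The formula says eat at:", max(ratings, key=expected_utility))
```

The mystery Barry flags lives entirely in the inputs; where the nines, the weights, and the probabilities come from is exactly what the formula takes for granted.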
SPENCER: So the rational actor, just to clarify in that model, is essentially aware of all the possible options, the probabilities of all those options. They're aware of their own utility function, essentially, how much they value every single option. And then they're doing this complicated calculation to decide what the optimal choice is.
BARRY: It's more than that. They're also aware of how to assign value to each of the alternatives they're considering. Who said this restaurant's a nine, this one's an eight, and this one's a seven? What do those numbers mean? We know what they mean when you're counting dollar bills. We know what they mean when you're at the blackjack table in a casino, but what do they mean when you're evaluating restaurants? Nonetheless, you do the best you can. The argument that rational choice theory makes is that this is the best we can do. If we appreciate that the world is uncertain, if we appreciate that different options have different value, we should be as rigorous and quantitative as we can in calculating which of the options before us will provide us with the greatest satisfaction, with the greatest utility. The research that's been done, all the research on heuristics and biases, essentially gives people options that are already very constrained. Do you want to make this bet, or do you want to make that bet? Do you want to save a certain number of lives for sure, or maybe all lives with a certain probability? The options are given to you. They're not options that you have to create or extract from the world.
SPENCER: Is that because in those scenarios, we can say what the rational actor would do, because we've already constrained the space so much? We can say, oh, the rational actor would choose this option, not that option.
BARRY: Exactly, and there's nothing weird about this. The way science proceeds is you study something in the simplest situation you can possibly create, get a real understanding, and then you gradually complexify. You don't study genetics by studying human beings; you study genetics by studying fruit flies, and you build a model of how genetics works. You keep on adding bells and whistles to the model to capture the incredible additional complexity that complex organisms have. That makes perfectly good sense, but there's an assumption built into it, which is that as you complexify things, the basic stuff you found doesn't change. You're building, right? You don't have to redo the foundation every time you add another floor. You're building on the foundation. Similarly, it makes sense for rational choice theorists to say this is the way that you should choose a job, where to go to college, whom to marry, and whether to have children, in the same way you would choose whether to double down or not at the blackjack table in a casino. All that's different is a matter of degree, a matter of complexity.
SPENCER: So it's a bit like when physicists are making a toy model in physics. They say, "Yeah, okay, well, real life is so complicated. Let's imagine this toy model. We've got an electron over here and an electron over there. What would happen?" But you need to make sure that whatever you're studying in the toy model will then apply in the real situation.
BARRY: No, that's exactly right. But the thing that people maybe sometimes don't appreciate is that people are not subatomic particles or merely biological systems. So when you increase complexity, there's a reasonable chance that people will treat complex situations in a way that's fundamentally different from the way they treat simple ones. When you're trying to decide where to go to college (I've just had several grandchildren go through this, and how they went about making their choices completely mystified me), how many options are on the table? 2, 100, 500? How do you decide which options you should take seriously and why? When you say to yourself, "I want to go to the place that offers the best education," what does that mean? How do you put a number on the quality of education that different institutions offer, such that you can say Harvard offers an education that's two units better than Princeton? In order to use rational choice theory, you have to take incredibly complex phenomena and reduce them to very simple, easily quantified components and pretend that you haven't fundamentally changed the situation by doing that. The argument in our book is that you have fundamentally changed the situation, and in the process of doing it, you have substituted, to use a slogan, counting for thinking. Instead of thinking your way through, "Why am I going to college?" you're counting: "What is the U.S. News and World Report's rating of the colleges? What percentage of kids who graduate from this college get into medical school?" The more you can quantify, the more "rational" you are.
SPENCER: Now, to make sure I understand your critique, are you saying that in these simplified scenarios, they're mismodeling the rational agent, or are you saying there's nothing wrong with the way they're modeling the rational agent in these simplified scenarios? It's more that these simplified scenarios don't generalize properly to the real world?
BARRY: The latter. That is to say, rational choice theory has its place, but that place isn't every place, and there's research that shows that people are so overwhelmed by the quantified components of a complicated situation that whatever is quantified dominates decisions. Some things can be quantified more easily than others: I can't quantify the quality of education, but I can quantify the percentage of pre-meds who get into medical school. What that's going to do is get you to weigh the success of kids getting into medical school more heavily than you weigh the quality of education, because you can't quantify that nearly as precisely and with nearly as much confidence. The result is that counting substitutes for thinking, not only because it's increasingly what you do, but also because it increasingly dominates the decisions you make.
SPENCER: Do we treat numerical information differently? Do we take it more seriously?
BARRY: I don't think we know the answer to that question, but my suspicion is that, living in an age of science, a claim seems inherently more respectable, more serious, and more likely to be right if we can attach a number to it. The ineffability of the lives we actually live is sort of secondary to the rigorous, clearly defined categories that scientists use to make sense of the world. We should try to be as scientific as we can. And of course, the more highly educated we are, the more true this is.
SPENCER: What would the world look like if people behaved like the rational agents in these models?
BARRY: Let me say why we wrote the book. The name of the book is Choose Wisely; it's got a subtitle that I won't go into, and it is largely a critique of rational choice theory and the presentation of a skeleton of an alternative. There are two reasons we wrote it. One, we think it's astonishing that 50 years of research has been done on the mistakes people make without any of these researchers questioning whether the standard being used to call these things mistakes is an appropriate standard. The word "rational," which has lots of honorifics attached to it, has really been deformed by rational choice theory into just a tiny skeleton of what it ought to be. Rationality requires judgment. It requires openness. It requires a willingness to change. It requires acknowledging that as you do research into a decision, the things that matter to you may change, and the importance of these various things may change, and so on. There's a kind of openness. It involves thinking both about the short term and about the long term, and figuring out how to integrate them, and all of that is invisible to rational choice theory. The second reason is our worry that if the world is made up of people who think this is the right way to make decisions, we will, in effect, lose our collective capacity to think in a more complex and true-to-life way. It's not automatic that we're going to think about things in the right way. If we try to wedge everything we do into some version of rational choice theory, our ability to think wisely will get more and more degraded. So that's why we think it's dangerous, not just wrong.
SPENCER: This reminds me of the idea that suppose we know that quantum mechanics equations are more accurate than Newtonian mechanics. It doesn't mean that a physicist should try to use quantum mechanics for everything. Sometimes, they are going to do a better job just using the simpler Newtonian mechanics, even if it's less accurate, because for whatever they're working on, it's accurate enough, and it's actually just a better decision tool. I wonder if there's an analog here, where in theory, if you were a perfectly rational agent with infinite computation time and you could consider every single possibility to an unlimited degree, then maybe the kind of spreadsheet model would perform well. But given the limitations of humans, trying to imitate the spreadsheet model might actually lead to worse outcomes. What do you think about that?
BARRY: I don't think it's just about complexity, because in the computer age, you can make your spreadsheet as complex as you want. We no longer suffer from the kind of limits that we suffered from two generations ago. The problem is that not everything can be quantified. That's one problem. Another problem is that not every value can be compared to every other value on the same scale, and rational choice theory assumes that they can be. You assign a nine to a school for its academic quality, an eight to a school for the ability of kids to get into medical school, and a six and a half for the social life. Now, in what sense can you compare the quality of social life to the academic quality? The numbers are an invention that says there is really only a single scale, which is how much I value it, and anything can be evaluated on that scale. If that's not true, when you push the button to have Excel compute the answer, it's not going to be able to. It assumes that there's a common scale, right? That's what the number system is. I don't think it's just about complexity. I think it really is a significant distortion of the kinds of challenges we face in negotiating through our everyday lives. Also, the categories that we divide the world into are not always so discrete. In the beginning of the book, we talk about this example: suppose you're trying to decide whether to take your family to the museum or to the beach on a summer day, and you calculate that the museum will be more fun collectively than the beach if it's a rainy day, but if it's a sunny day, the beach is going to win out. You check the weather and there's a 20% chance of rain. What should you do? You can calculate that and figure out what your expected satisfaction will be at the beach and what your expected satisfaction will be at the museum. But when you hear 20% chance of rain, what exactly does that mean? Is it going to be a downpour? Is it going to rain all day? Is it going to be a shower? Are you going to the beach to spend the day playing in the sand and going into the water? Or do you like walking on the beach on cloudy days? You need to know a lot more about what counts as rain in order to know how much to reduce the possible satisfaction of going to the beach if it should turn out to rain. You have to assign a number, and a number loses all of that fuzziness of the boundaries between rainy and not rainy that the real world offers us. I don't think it's just complexity. It is really an assumption that the world comes in discrete packages that we can identify and quantify, and then it's just about the question of doing the math. That's not the world I live in.
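Here is the museum-or-beach calculation done the way the theory prescribes, with invented satisfaction numbers. Everything Barry just raised about what "rain" means has to be collapsed into a single discrete branch before the arithmetic can run.

```python
# Expected satisfaction of each outing under a 20% chance of rain.
# The utilities are invented; "rain" is treated as one discrete event,
# which is precisely the simplification Barry is objecting to.

p_rain = 0.2
satisfaction = {
    "beach":  {"rain": 2, "sun": 9},
    "museum": {"rain": 7, "sun": 6},
}

for option, s in satisfaction.items():
    eu = p_rain * s["rain"] + (1 - p_rain) * s["sun"]
    print(f"{option}: expected satisfaction = {eu:.1f}")
# beach: 7.6, museum: 6.2 -> the formula says go to the beach.
```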
SPENCER: My understanding is that the way rational choice theory tries to get around the issue of value is to say: if you had a rational agent, you could just present the agent two choices, and it would always be able to tell you which one of the two is better. If you did this enough times, it would imply the existence of a utility function which assigns a value to everything. Then you could say, maybe it's really difficult for humans to describe how much they care about one thing versus another, but if you gave a person a lot of pairs of choices and they just had to pick between them, there'd be this implicit assignment of how important every single thing is, right?
BARRY: So in effect, what you're doing is taking a bunch of discrete comparisons and turning them into a continuum of utility value. But who decides which pairs of things to give the person you're asking? Can the person generate new things? Is it possible that, as the person hears the choices you're presenting, entirely new possibilities occur to them? The preference structure is changed by the very fact that you are trying to elicit the preference structure. All of that is also invisible to rational choice theory; preferences are, this is the slogan, exogenous. You come into the situation with whatever preferences you have, and our job is to see if we can map them. The notion that the very process can alter preferences is invisible. The notion that the way we frame the decision we're facing (short term versus long term, many options versus a smaller set of options) shapes what we choose is equally invisible. Every decision we make is within a frame. We can't possibly consider every possible option. You wake up on a Saturday morning, and it's a beautiful day, and you have nothing specifically that you have to do, and you ask yourself, "What should I do today?" Many of us have this experience. Most of us probably wish we had it more often. But how do you begin to list the things you might do on a sunny spring Saturday? It will be Sunday before you get even halfway started on generating the list. So you need to put barricades up and put some things inside them and most things outside them, and then assess the things that you've put inside the frame. Then the question becomes, are there better and worse ways to create frames? From the point of view of rational choice theory, any framing is a distortion. Ideally, you want to be able to think about options unframed. What we try to suggest is that most of what rationality entails is being able to choose good frames within which to identify and evaluate various possible options.
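A minimal sketch of the elicitation idea Spencer describes: observe enough pairwise choices and back out an implied ranking. This crude version just counts wins; real elicitation procedures fit a statistical model such as Bradley-Terry, and every choice listed below is invented.

```python
from collections import defaultdict

# Observed pairwise choices: (chosen, rejected). Invented data.
choices = [
    ("hiking", "museum"), ("hiking", "beach"), ("beach", "museum"),
    ("museum", "errands"), ("beach", "errands"), ("hiking", "errands"),
]

score = defaultdict(int)
for winner, loser in choices:
    score[winner] += 1
    score[loser] -= 1

# The implied "utility scale": it assumes the menu of pairs was the right
# one, and that asking the questions didn't reshape the preferences,
# which are the two assumptions Barry is questioning.
for option in sorted(score, key=score.get, reverse=True):
    print(option, score[option])
```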
SPENCER: So what's an example of a good frame or helpful frame?
BARRY: Let me give you a real-life example of what the nature of the problem is. Michael Pollan, a very famous journalist and professor, wrote an article in The New York Times Magazine some years ago in which he was trying to figure out: what is the cost of a pound of beef? He bought a little heifer, and then he tracked it as it got fed and got bigger and bigger and eventually went to market. He could calculate how much it cost to buy the heifer, how much it cost to sustain the heifer, and what he got per pound at the end of the process. This, he said, is the cost of a pound of beef. It's framed very narrowly: the inputs are the heifer and its food; the output is what you pay in the supermarket. The question he asks is, is this all there is to the cost of a pound of beef? What about the fact that cows are fed corn because they grow faster and bigger, and the corn farmers are subsidized by our taxes? That subsidy, what economists would call an externality, is part of the cost of a pound of beef, but it's not reflected in the supermarket price. Cows have trouble digesting corn, so they get fed antibiotics to keep them healthy. Maybe that contaminates the meat; let's assume it doesn't, but what it certainly does is cause the development of drug-resistant bacteria, because of all the antibiotics that are being fed to cows. What is the cost in days of work missed, doctor bills paid, hospital visits paid as a result of these now relatively ineffective antibiotics? How much does that cost, and how is that reflected in the price of a pound of beef? You fertilize the field with petroleum products, which means that we need an adequate and certain supply of oil products, which influences our international policy. How much of that should be factored into the cost of a pound of beef? We know that the right answer isn't that everything you can imagine should be included in some way in the cost of a pound of beef. But we also should know that the right answer isn't just the things that are reflected in the supermarket price. A wise policymaker and a wise decider is someone who consistently draws the frames broadly enough that non-obvious aspects of the decision are somehow included, but not so broadly that you're paralyzed into inaction. This is true of the individual decisions that we make, but it's even more true when it comes to making policy decisions, economic policy, social policy decisions, where there are always costs and benefits, and your job is to figure out whether the benefits exceed the costs, by how much, and what should count as a cost and what should count as a benefit. You will not answer any of those questions simply by attaching numbers to outcomes; you have to make judgments about what should properly be considered a cost and what should properly be considered a benefit. The problem we're facing is that we are reducing the inherent complexity of the decisions we make into something that can be done formulaically, and that's a bad thing.
SPENCER: How does your perspective relate to this idea of bounded rationality, where sometimes, when people are critiquing the whole world of studying rationality, they say, "Look, real beings are limited. They can't consider an infinite number of things," so they have to focus on some subset of those things. They don't have unlimited time to think about things, so they have to, at some point, stop thinking, and so on. Some of those critiques seem related to what you're describing, but I think what you're saying is also somewhat different from that. In the paradigm of bounded rationality, they say, to study rationality, you have to also think about the computational limits. Therefore, what's rational with computational limits might look different than what's rational to an unlimited being.
BARRY: I think that it is certainly true that we can't think about everything, and we can't think about everything in its full complexity, but my concern is that the limits that rational choice theory sets are just the wrong limits. They have us focused on the easiest parts of the decision and the least consequential parts of a decision, and they leave all of the hard parts off screen, as if some genie is going to solve them for us. So if you want to call it mechanical choice theory, that's fine, but when you call it rational choice theory, you are implying that this is the way rational people should act.
SPENCER: So what are the easy parts? Deciding between a finite set of pre-specified options?
BARRY: And where the features that are relevant are easily quantified. It's easy to decide what bets to make at a casino. It is and should be formulaic. It's not so easy to decide how to play poker, because it isn't just about probabilities. You have to understand the people you're playing against and try to, in effect, see through the backs of their cards. So that adds some complexity, but you can still calculate expected values. You know that staying in on some hands is bad, because chasing when the odds are so far against you is a losing proposition, and that on other hands, you almost certainly have the best hand at the table. You can calculate that just by knowing the odds of various cards in a deck. But that doesn't make you a winner at poker. That's a precondition. If you can't do that, you'll never be a winner at poker. If you can do that, you at least have a chance to be a winner at poker, because especially as that kind of calculation gets more and more automatic, you can devote yourself to trying to read the other people at the table. And my worry about rational choice theory is that it really is a classic, you know, drunk-at-the-lamppost joke. We treat what we can measure as important because we can measure it, not because it's actually what's most important.
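The formulaic layer of poker that Barry grants is essentially pot-odds arithmetic, as in this sketch; the hand and the stakes are invented.

```python
# Flush draw with one card to come: 9 outs among 46 unseen cards.
outs, unseen = 9, 46
p_hit = outs / unseen                      # about 0.20

pot, cost_to_call = 100, 20                # the pot is paying 5-to-1
ev_call = p_hit * pot - (1 - p_hit) * cost_to_call
print(f"P(hit) = {p_hit:.2f}, EV of calling = {ev_call:+.2f}")  # positive, so call
```

Knowing this is, as Barry says, a precondition for winning, not what separates winners from losers.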
SPENCER: And real life has vastly more complexity than a game of poker, because we don't even know what we're optimizing for. We don't even know what our options are, and so on.
BARRY: And most of the things we care about are like the quality of a liberal arts education. I care a lot about that. I spent my whole life teaching in a liberal arts institution. But quantifying it, comparing Swarthmore, where I taught, to Williams or Amherst, is a fool's errand. I can compare them in various ways, but not in ways that enable me to end up saying Swarthmore is a better place to go to school than Amherst or Williams. That's just foolishness. And US News makes a lot of money making just such claims.
SPENCER: Do you think that there's a fundamental incomparability between different features, or do you think that there is actually some principled way to say, well, this thing is better than this other thing, at least for this person?
BARRY: The word we use in the book is incommensurability. Different kinds of goods cannot be directly compared because the scales of value we use to assess them are different. That doesn't mean that you can't find a way to attach numbers on a common scale. It's just that when you do that, you're fooling yourself into thinking that you aren't, in a significant way, distorting the comparison that has to be made. I think incommensurability is the norm, is the rule, not the exception. If you use rational choice theory, you have to assume exactly the opposite. You just have to assume there is a common scale. Think about the example I gave of Pollan and the cow. The idea here is not to attach a value to the taxes that you pay and to our foreign policy and to our susceptibility to illnesses that are harder to cure. It's not that each of these has a dollar value. Each of these has value, and comparing being in the hospital for two weeks to the cost of a pound of filet mignon is loony. That's fundamental. Without assuming that there is a common underlying scale, rational choice theory can't get off the ground. My worry, the reason why I think it is dangerous, is that the more we get used to using a common scale, the more we will find it natural and reasonable to do so, and all kinds of really important distinctions that we make in our everyday life will start to disintegrate. We'll use dollar value as the real bottom line, the real foundation for evaluating all of the things that we have to evaluate in our lives.
SPENCER: Because that is the thing that's easiest to turn something into numerically.
BARRY: Yes, we're used to doing it, and we have a real zero point. It's not like a rating scale. Who knows when you say rated on a scale from one to ten what that means? Dollars start at zero; they go up to Elon Musk. So we have a pretty good idea about what that scale represents, and using it as an anchor, because it feels so solid, gets us to turn every aspect of a decision into effectively a financial contributor, even when the financial aspect of it is the least significant.
SPENCER: Yeah, I think economists would say, even if it seems incomparable to something else, there's a way to turn it into dollars. So for example, let's say you're talking about health, right? You might think, how do you compare health versus money? They could say, well, you could ask people how much they would pay to avoid getting some disease, right? Or they could ask how much society should invest in preventing this number of people from getting this chronic disease, et cetera. So it sounds like you think that there's something about that way of thinking that has an error in it.
BARRY: I mean, it's a substantive error, because you take people's estimates more seriously than you should; you're asking people questions that nobody knows how to answer. You can ask me anything, and I'll give you an answer, but that doesn't mean that the answer means anything. I think it's also morally consequential, because it gets people to think about the value of life, the value of well-being, in monetary terms. You have a very demanding job, and you're trying to organize an evening of drinking beer with friends, and you know your consulting fee is $300 an hour. So you sit down and you say, is it really worth $900 for me to spend an evening chatting with friends over beer? That's three hours of consulting I could be doing. The problem isn't that you decide that your friends are not worth $900. The problem is using that scale to assess what your friends are worth. I think people think they can resist that; that it is just a shorthand, just a convenience. But I think more and more, as you use that scale to be rational in deciding how to allocate your time, you start thinking about all of the things you value in monetary terms, and that really flattens the world we live in, in what I think is a pretty destructive way.
SPENCER: What about policymakers? Some people argue, okay, maybe for an individual person it is destructive to think in terms of one scale, like money, all the time, but policymakers are constantly making decisions where, effectively, money is being traded against human lives. Take an example where you could require that every car have tons of safety features that they don't currently have. People would have to pay more money for those cars, but fewer people would die. So effectively, regulation on cars is making a direct trade-off between money and lives.
BARRY: Yeah, that's correct, and I think you have to. You can't avoid it, but at least you should be mindful when you make those trade-offs that what you are doing is a fundamental distortion of how we live. Sometimes the world simply forces you to tell yourself and the world a lie. When you stop realizing that it's a lie, then important distinctions that you normally make in everyday life stop being made. So you're absolutely right. You can't impose regulations on car building that turn every car into a Sherman tank, so that no lives are lost in accidents but we all die from global warming. So you can't do that. So the question is, what's reasonable? And then that leads you to ask, how can I calculate reasonableness? How many lives saved per dollar is a reasonable requirement in the design of car safety? These are hard questions that people ask, and I'm glad that people ask these questions, because I would certainly much rather that money be invested in things that are going to be effective than things that are not, and this is a way of assessing their effectiveness compared to each other. But if that's all you do, then the distinction between the value of a life and the value of a car starts to get increasingly less clear. You actually start thinking about the value of lives in terms of what it costs you to preserve them. And I'm trying as hard as I can to resist that, rather than giving it the honorific that this is the rational way to make policy.
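The comparison regulators actually run, in its barest form, looks something like the sketch below: dollars per expected life saved across candidate rules. The safety features and all the figures are invented; the ranking is genuinely useful, which is exactly how the habit Barry worries about takes hold.

```python
# Cost per life saved for hypothetical car-safety mandates, assuming a
# fleet of one million new cars. All figures are made up for illustration.

candidates = {
    "automatic braking":     {"cost_per_car": 400, "lives_saved": 150},
    "side-curtain airbags":  {"cost_per_car": 250, "lives_saved": 60},
    "stronger roof pillars": {"cost_per_car": 120, "lives_saved": 15},
}

FLEET = 1_000_000
for name, c in candidates.items():
    cost_per_life = c["cost_per_car"] * FLEET / c["lives_saved"]
    print(f"{name}: ${cost_per_life:,.0f} per life saved")
```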
SPENCER: Let me describe how I think about these trade-offs, and see if you disagree with it. Okay, I think you might disagree. I agree with you that there are many different things that we could value and that there's no objective way to trade them off against each other. Should I be honest about this thing even though it hurts someone's feelings? Those are two things you might care about, being honest and not hurting people's feelings, and there are going to be fundamental trade-offs. I don't think there's an objective answer as to which is better, but I do think that, for a given person, if the person deeply reflects and is introspective, they will find that they themselves do place different values on those things, but that trade-off will not necessarily respect the rules of rational choice theory. For example, they might find that being really honest for most of their life actually changes the ratio of how much they value honesty versus not causing harm. So it's not necessarily this objective, fixed thing that never changes. What do you think about that perspective?
BARRY: I think you're right. I wrote a book with a colleague about a decade ago called Practical Wisdom. It was an attempt to bring Aristotle into the 21st century, and the answer to questions like "How honest should I be with my friend?", we argue, is almost always: it depends. Your friend is getting dressed to go to a very fancy wedding and calls you to say, "Come on over. I want you to tell me how you think I look." So you go over and knock on the door, she opens the door, and you look, and what you think is she doesn't look very good. The question is, what do you say? When we pose this to our students, they think the answer is a no-brainer. If this is your friend, you have to be honest: unmitigated, no-exceptions honesty. We tell them, well, if you go through life like that, you're going to live your life alone. The questions that you should be asking, we suggest, are: Does she have an alternative? Is my telling her that the dress is unflattering going to be in any conceivable way constructive? Is there a plan B? What kind of self-confidence does she have? What effect is my telling her she doesn't look good going to have on her confidence that she can judge on her own how she looks every time she gets ready to go out into the world? Is it worth it to undermine her self-confidence because she clearly thinks she looks great, and you're about to tell her that she's got this one wrong? The answer to these questions is it depends on the friend, and you need to know the person. With that caveat, I think the judgment you make is going to be person- and situation-specific. I think that's the way we actually try to get through life when we're acting at our best, but it's hard. You're going to get it wrong a lot of the time. It is anything but formulaic, but I think it's not only the best that we can do; it's the best that we should do. People are different, and situations are different, and we shouldn't try to bury the differences. We should try somehow to take account of them, with the understanding that we're going to make mistakes. Does that sound like at least a first cousin to what you were describing?
SPENCER: Yeah, no, I think that's just adding nuance to what I said, in my opinion, where you're basically saying there are also important differences in the situation itself that the whole thing will turn on. You have to kind of combine your own values and understand them with a deep understanding of the situation. What does that choice really mean? I think you're also pointing out that there might be more values at stake than you realize. You might think, Oh, this is just honesty versus this person feeling bad. But maybe there are deeper implications, like about their self-esteem and things like that.
BARRY: And again, you eventually have to say something. If it takes you two minutes to answer the question, you've already answered it. Not only do you have to think about all these things, but you have to do it quickly, because there is something in the way you answer the question, and not just the content of what you say, that's going to communicate. That's why we get it wrong so much. Maintaining close relations with people we love is not easy, and when it comes to social policy, even though you're making decisions for people you don't know, let alone don't love, there is a kind of balancing of soft values that have to do with leading a flourishing life, having some confidence that you know what tomorrow will bring, being able to have some confidence that tomorrow will look in many ways like today looked, all of that kind of stuff, against the sort of hard, easily objectified consequences of putting bumper guards on cars, requiring seat belts, and requiring steering wheels to be made of relatively soft material, and so on. You have to use your understanding of the complexity of people in making policy, even when it's people you don't know and never will.
SPENCER: One thing this conversation makes me wonder is this: you're describing ways that these simple scenarios used in rational choice theory don't reflect the real world; they don't reflect real-world decision-making. But do you think that they illustrate genuine biases? In other words, okay, most decisions in real life aren't like that, but sometimes we might have a decision that happens to be that constrained. Do you think that they've demonstrated that, in fact, people behave irrationally in those constrained situations?
BARRY: Absolutely. Daniel Kahneman was a friend of mine, and I have unbelievable respect for the work that he and Amos Tversky did. It changed the course of my career. It completely changed the focus of what I thought about and wrote about. I can't overstate the significance of that work, and I think in this age of replication crisis, virtually all of it has held up. It has been heavily scrutinized, and there have been many, many replications. You can take this stuff to the bank, I would say, and all of these findings are, in fact, descriptions of errors, but they are errors relative to a standard. If we had scrutinized the standard to the same degree that we have scrutinized people's consistency with that standard, we might have made more progress in coming up with an idea of what the standard ought to be. My guess is that the so-called biases that were uncovered would be less obvious and less dramatic, because the standard would be less quantitative and less formal. It's really striking when people make mistakes, because the objectively right thing to do is so clear, at least according to rational choice theory. When the right thing to do is more ambiguous, then the deviation from it is also going to be more ambiguous, less sexy, less dramatic. But I don't think any of that research needs to be retracted or modified in its implications. It is really important, serious work, and it has huge implications when it comes to policymaking, because it turns out policymakers are not immune to these kinds of biases. Big mistakes get made because, you know, they're people too. We all make these mistakes.
SPENCER: I wonder about the fact that we make these mistakes in these relatively simple setups where we're given all the options. There are relatively few moving parts and relatively few values at stake. Are we even more biased on everything else? You would expect if we're screwing up the simple cases, wouldn't that mean that the real cases are even more hopeless?
BARRY: I think the biases are likely to be different, but you're right that when you add complexity, you're more likely to get mistakes unless the kinds of things that we talk about in our book are the kinds of things that human beings are built to do well, whereas this calculation is what we're not built to do well. It might be natural for us to use the context to decide how to talk to our friend, have a difficult conversation with our friend, when to have a difficult conversation with our friend, how much, if at all, to pull our punches, and so on. That may be part of what we are as human beings, and just growing up in a world with other people teaches us those sorts of skills. It's doing the math that's hard because now what you're asking people to do is abstract away from all of these contextual cues that we use as if they're not there, and just formalize the problem and solve it. It's conceivable that we wouldn't screw up more, that the tests people are given are the worst possible ways to assess how rational they are because they are the ones that we are least prepared to do well.
SPENCER: My understanding is that Gigerenzer has made arguments in these directions, saying if you take some of these situations where we show cognitive biases, if you reframe them in more naturalistic terms, for example, instead of giving base rates of something, you give counts, right? How many people were like this? How many people like that? Suddenly people's performance improves a lot, or in the card selection task, if you reframe it as you're checking people's IDs at a bar, suddenly people get way better because it's somehow put into a context that is natural for the human brain.
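A sketch of the reframing Spencer mentions, using the classic mammography numbers (illustrative here, not authoritative): the same Bayesian answer computed once as abstract probabilities and once as the natural frequencies Gigerenzer showed people handle far better.

```python
base_rate, sensitivity, false_pos = 0.01, 0.80, 0.096  # illustrative numbers

# Probability framing: Bayes' rule, which most people find opaque.
p_pos = base_rate * sensitivity + (1 - base_rate) * false_pos
print(f"P(disease | positive test) = {base_rate * sensitivity / p_pos:.3f}")  # ~0.078

# Frequency framing: out of 1,000 people, 10 are sick and 8 of them test
# positive, while about 95 of the 990 healthy people also test positive.
n = 1000
sick_pos = base_rate * n * sensitivity
healthy_pos = (1 - base_rate) * n * false_pos
print(f"{sick_pos:.0f} of {sick_pos + healthy_pos:.0f} positives are truly sick")
```

Stated as counts, the answer (roughly 8 real cases out of 103 positives) becomes visible without anyone invoking a formula.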
BARRY: That's right, but what that suggests to me is that it's not like suddenly you can see modus tollens, the logical rule about how you check to disconfirm a conditional claim, if X then Y. It's that you make it less necessary for people to understand that rule, because there are other cues in the enriched situation that get people to the right answer without appreciating that there is a rule that tells them how to get to the right answer. I think Gigerenzer is right that when you add context of a certain kind, even though you're complexifying the situation, you're making it easier for people because you're fitting into their cognitive apparatus much better.
SPENCER: It seems to me that in these complex situations, it's often hard to know what humans are really doing, because a lot of it is coming from our intuition, which has been honed through our entire lives, associative learning and things like that. Sometimes we can see evidence that people are following some simple heuristics that might get them to a pretty good solution a lot of the time, even though it's far from optimal.
BARRY: I think that's right, and Gigerenzer is to be credited for having studied and put in the best possible light a whole set of heuristics that he argues, I think correctly, get people to the right answer most of the time. You're quite right that we use these heuristics without knowing that we're using them. When you're an outfielder going for a fly ball, you certainly aren't doing geometry to figure out the arc and where the ball is going to land, nor are you consciously following the heuristic to keep the angle between you and the ball constant. But it turns out that if you were to do that, you would find yourself parked in just the right place to catch the ball. Now, what's happened is that you play enough baseball that somehow this rule, to keep the angle between you and the ball constant as the ball goes up and then comes down, is learned through experience without our ever being able to articulate it. If somebody said, "Oh, so what you're doing is keeping the angle between the ball and you constant," you'd go, "What, are you crazy?" So then, what are you doing? "I don't know. I just know where the ball is going to come down, so that's where I plant myself." There's no question. In Kahneman's framework in Thinking, Fast and Slow — the system one and system two, the automatic system and the deliberate system — the automatic system is one we do not have access to. If I asked you, "How did you know how fast that car was going?" you'd go, "I don't have a clue. I've just seen cars moving, and I knew." Your visual system does all kinds of computations completely outside your awareness, and it gives you the answer, and most of the time, it gives you the right answer. That's not what we think of as thinking. We think of thinking as the stuff that we are doing consciously and deliberately, and it's effortful and slow, and the rest of the stuff, whatever it is, doesn't count as thinking. What Kahneman is trying to get us to appreciate is that they're both thinking, and they interact, and a lot of the time the decision is made before we even start "thinking" about it, because the automatic processes have basically generated an answer to the question. I think it's true — Freud was right — that much of what people are doing in their minds, they are doing without awareness. Where he was wrong is to think that this was motivated, that we were keeping things out of awareness because it was too painful for us to acknowledge them. No, it's just the way the machinery is built, and the trick is to cultivate the kind of judgment in people so that this automatic stuff that is done without awareness is mostly going in the right direction, rather than to suppress it and push everything into a rational choice framework or to ignore it.
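A toy simulation of the fly-ball rule Barry paraphrases. The version modeled here is the "optical acceleration cancellation" variant from the perception literature, a close cousin of the keep-the-angle-constant rule he mentions: run so that the tangent of the ball's elevation angle grows at a constant rate. The physics is idealized (no air drag) and the numbers are invented; the point is that a purely visual rule, with no trajectory geometry, parks the fielder where the ball lands.

```python
g = 9.8
vx, vz = 18.0, 22.0     # ball's horizontal and vertical launch speed (m/s)
start = 95.0            # fielder's starting distance from the batter (m)
k = vz / start          # the constant rate at which tan(elevation) is allowed to grow
dt = 0.01

t, fielder = 0.0, start
while True:
    t += dt
    height = vz * t - 0.5 * g * t * t
    if height <= 0:                      # the ball has come down
        break
    ball_x = vx * t
    fielder = ball_x + height / (k * t)  # the spot where tan(angle) equals k * t

landing = vx * (2 * vz / g)              # where the ball actually lands
print(f"ball lands at {landing:.1f} m; fielder ends at {fielder:.1f} m")
```

The two printed numbers agree to within the simulation step, even though the simulated fielder never computes an arc.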
SPENCER: The other day, just as a little experiment, I tried meta-analyzing myself as I was making a decision, because I was just curious: what do I actually do when I'm making a complex decision? I noticed that I kept jumping back and forth between my system one, the fast, automatic, intuitive thinking, and my system two, the slow, analytic thinking, over and over again. There were two options in front of me. I wasn't sure which to choose. I then paused, and my system one generated another option I hadn't thought about. I analyzed that for a little bit with my system two. Then I realized that there was another factor I hadn't considered. I checked in with my system one, like, "Well, how much do I care about that factor?" My system one was like, "Oh, you don't actually care that much," so then I was able to discard it. It was an interesting interplay, back and forth, eventually converging to some sort of solution.
BARRY: Now that may be an accurate account of how you actually made the decision, but from within the framework of the research tradition, system one is the enemy, even though it mostly gets you to the right place; if it didn't, we wouldn't continue to exist as a species. On that view, mostly what you want to do is suppress it, because it will often get you to the wrong place, whereas the reflective system will always get you to the right place. The advice I give people, because I'm supposedly an expert in decision-making, is don't just do something, sit there, because the first answer you get, the automatic answer, may not be the right answer. Give it time to percolate a little bit, reflect on it, turn it around, and see it from different angles, and maybe you'll end up doing what your first impulse was, but at least now you'll have better reasons for doing it. The process you're describing seems to me to be a perfectly sensible process, but it's one that, as I say, rational choice theory doesn't honor, and that most people doing research regard as making you more susceptible to error than you otherwise would be.
SPENCER: I remember when Daniel Kahneman did an adversarial collaboration with Gary Klein, because Klein had written all these papers about how great expert intuition is, and Kahneman had written all these papers about how poor our intuition is, right? And they were like, well, how do we mesh this together? They worked together and basically realized that, in fact, they don't really disagree, because I think Kahneman would say it's not that he thinks intuition is bad; it's that he's cataloging when intuition is bad, as opposed to saying that it's actually bad. Actually, it works a lot of the time.
BARRY: It does work a lot of the time. I really appreciated that adversarial collaboration. I'm a big fan of Klein's. For people who don't know his work, he studies real-life decision-making, often in very consequential situations. My favorite domain is his studies of how firefighters figure out how to fight fires. He is very open to having his hypotheses disconfirmed, and most of us are not. His model was that training teaches you which two or three hypotheses to take seriously, and then you go to the scene and evaluate the evidence to test these hypotheses against each other and choose a course of action to put out the fire. When he confronted experienced firefighters with this model, they looked at him like he'd lost his mind. They said, "What? What alternatives?" There weren't any alternatives, and what they were telling him, though they weren't fully articulate about this, is that they know what the right answer is by assessing the scene, because they've had all this experience. It's much more like the sort of neural-net-type model where the answer is computed based on the rich set of connections that you've developed by having fought lots of fires. After the fact, they said, "We do what you're talking about. After the fire is out, we survey the scene and we ask, is there something that we didn't notice that we should have noticed? Can we do it more efficiently, more effectively, faster the next time if we have a fire that looks like this one?" They do the hypothesis generation, but it's postdiction. After they've put the fire out, they do very careful analysis so that the next time, the network that is generating the so-called obvious answer will be even better tuned than it was before. It's interesting to see this kind of model in AI, which is now driving everybody either crazy with enthusiasm or crazy with fear, or both. AI made enormous progress by abandoning the notion that you can use rules to tell computers what to do. Instead, they created simulations of our best understanding of how neurons work in the brain to create networks that learn from experience, and the experience is just crawling all over the web and gathering information. That turns out to be the right answer to the question. You want to teach a computer to play Go? Let it play Go and get feedback. The same with chess, and with writing your essays, so you never have to write an essay again in college. It's done not by figuring out the correct rules and encoding them into the program, but by putting a structure into the program that enables it to learn, not a set of rules, but a set of outputs that are appropriate to the situation, computed in response to a challenge from the environment, like a question from a user. It transformed the power of AI, and the insight isn't new. The insight has been around for a long time. What is new is the computing power and computing speed, so that you can actually train a network to do complicated things. You know, 25 years ago, people had a similar model, but there wasn't the kind of computing power that would enable that model to actually give an answer before the world ended.
SPENCER: What you said about firefighters, and also mentioning chess, reminds me that I recently saw an interview with Magnus Carlsen, the phenomenal chess player. He said something that really surprised me: if he had access to a chess-playing computer (and these days, chess-playing computers are much better than even the best human), and all it told him was whether there was a deep move he should take the time to consider, but not what the move was; if it just said, "Hey, on this move, you should really spend some time thinking," it would significantly improve his play. In other words, on most moves that he makes, his intuition tells him the right move almost right away. He just needs to know, "Oh, wait, on this move right now, I need to spend 10 or 20 minutes thinking, because there's some other move that I might miss."
BARRY: Right. So this is his version of, "Don't just do something, sit there."
SPENCER: But even knowing that he needs to sit there is a deep problem.
BARRY: That's right. Absolutely. I'm not a chess player, but I'm a bridge player, and bridge is a very complicated card game. It's amazing to watch experts, real world-class players play, because you would think that because they're so expert, they would do everything quickly, and in fact, they don't. It's not because they don't know how to do the calculations. The first thing they have to figure out is what needs to be calculated. What's the relevant information? How reliable is that information? They take forever before playing the first card in the hand. Once the first trick or two has been played, they finish the hand like lightning because their hypotheses have been confirmed or disconfirmed, and everybody knows what everybody has. The hand ends quickly, but they take much longer before they play the first card than somebody at my level or someone even worse than me takes. I think this reflects a sense that they should think about everything they possibly can while they still have the flexibility to act on what they come up with, and later in the hand, it becomes more automatic. The sense I get with chess, knowing almost nothing, is that openings and endgames are pretty mechanical, and it's the middle game where the computer program is most likely to be telling Carlsen, "Wait a minute, this is a time to think."
SPENCER: You mentioned near the beginning of this discussion that you had a kind of sketch of an idea of what it might look like to try to say, "Okay, in this broader paradigm where we don't necessarily know the options and we don't know how to value things, it's hard to compare values, and we can't assign probabilities." We would still want to be able to say if people are performing well or badly, right? We would still want some normative thing to compare them against. I'd be curious to hear your thoughts on that approach.
BARRY: Well, yeah, and I have to be honest with you: the alternative we sketch does not produce the kind of clear-cut satisfaction, this is right and this is wrong, that something like rational choice theory does. What we identify are things that we think ought to be components of rationality. You need to be reflective both about what the world is offering you and about what your aspirations are, not merely your preferences, your aspirations, the short term and the long term. You need to be thoughtful about how this particular decision fits into the other decisions that you're making, and perhaps into other domains of your life. You need to appreciate that there is often no substitute for judgment. There is no mechanical way to figure out the right thing to do. You make judgments in the best way you can, and you're going to be wrong some of the time. You need to be open to what you were talking about, how system one introduced a new option that you hadn't thought about. In rational choice theory type experiments, that's not possible. You're given the options; choose. In real life, as you're thinking about the options that you were given, a new option occurs to you, and the more you think about it, the more options occur to you. That openness to the experience of thinking through a problem, thinking through a decision, is essential. And mostly you need what? Well, two other things. One is you need what I wrote a whole book about, namely wisdom, which is appreciating that the answer to almost any question of any complexity is, "It depends." That's unsatisfying, but it gets you to think about what the right answer depends upon. I think that enriches your understanding of the situation, of the decision, of the longer-range implications of the decision, and of how the decision fits into your life as a whole. That's really what we want. Someone who makes one rational decision after another by the lights of rational choice theory is not necessarily going to be someone who looks back on life with a big smile and says, this is a life well-lived. "This is a life that was relatively free of errors and biases" is not the same as "this is a life well-lived." We want to enrich our understanding of what it means to be rational so that more of us can say at the end of a life, "This was a life well-lived. This is a life in which other people's lives were made better, even if only a little bit," and so on. Our worry in writing this book is that rational choice theory forecloses too much of that. The hope is that the picture we present, which is a very incomplete picture, will encourage people to flesh out what it really means to be rational, instead of forcing their understanding of that word into the very narrow preference-maximizing framework that we inherit from economics. That's our hope.
SPENCER: When I think about going to someone who's wise to get their help with the decision, it's possible they could give you some information you didn't know. It's possible that they could say, "Here's what I would do," but a lot of times it feels like what you're looking for is a new way of looking at the problem. Once they say something that gets you to look at the problem differently, it makes the problem easier and helps you see what's important, or simplifies it.
BARRY: Exactly, and to use the language of research on decision-making, what you're describing is that they help you reframe the decision. You didn't realize that you were framing it in one of many possible ways. You know the old story about the three bricklayers you encounter walking in Rome. You ask them what they are doing, and the first one says, "I'm laying bricks for a wall." The second one says, "I'm helping lay bricks to make a wall," and the third one says, "I'm building a cathedral." They're all right; they're all describing what they're doing. But the perspective of the third one embeds the particular activities in a much larger framework that adds enormous meaning and significance to what he's doing, and maybe makes him do it with greater care and enthusiasm, because he's building a cathedral, and without him there would be no cathedral. So being able to reframe is incredibly important. We can do it on our own, but it is amazing how much easier it is to talk to someone else who automatically frames the problem differently and will provide you with insights that you might get to on your own but probably wouldn't. This is an argument, in my mind, for why it's important to have organizations made up of diverse people. Set aside issues of justice and equity; just in terms of efficacy, it doesn't help much to have 12 people shouting the same thing instead of two. It might help a lot to have people with different perspectives tackling a common problem, because we all have blinders on, and the best you can hope for is that my blinders and your blinders let different things through. That's more likely to happen if the people you're working with are diverse than if they're just like you. I think everybody knows that, and people who argue for diversity appreciate it, but the argument has been reduced to something unidimensional: make sure there are women and non-whites in your workplace, and forget whether there's a benefit to it; it's just the right thing to do. That's an easy argument to attack, as we're seeing these days.
SPENCER: Well, a lot of people hold equality, diversity, and justice as fundamental values in themselves.
BARRY: Absolutely, but there's a trade-off, because it's not necessarily fair to the people who don't get the job, who are as competent or as promising as the people who do, but happen to be white males. So you're saying, in effect, "I value justice and equality, and therefore some people, at least in this transitional time, will have to suffer so that we can move a little closer to justice and equality." It raises the argument to a different level, I think, when you suggest that diversity actually makes the workplace better. Universities started making these arguments when affirmative action was banned: they were admitting diverse classes because it improved the education of all the students, not because they thought it was their moral responsibility to have diverse student bodies, but because it was their moral responsibility to provide the best education they could, and this was a way to achieve that. That's a different argument, and for a while they got away with it in the face of a recalcitrant, retrogressive Supreme Court. So I think asking someone else is often a very good way to shatter the frame you're operating within and to suggest a substitute that gets you to see everything differently.
SPENCER: Barry, before we wrap up, why don't we end on one example, where you can talk us through how your approach to decision-making might have looked in a real situation?
BARRY: Okay, I'll choose an example that is somewhat controversial, but that I think everybody will relate to: the response to the COVID pandemic. When COVID first hit, there were all kinds of things we didn't know. How contagious was it? How did it get transmitted from one person to another? How serious was it? Did it affect certain population subgroups more than others? Was there any chance we could develop a vaccine? Was there any chance we could develop something to treat it? Was it really fatal for a large number of people? All of these were unknowns, not totally unknown, because you could relate it to other viruses, but pretty damn unknown. So the first reaction was to assume the worst and basically shut societies down. A lot of things were suggested that turned out to be foolish, like wiping down your packages before putting them in the pantry, and a lot of other practices had consequences that were not considered by the people trying to make medical decisions. For example, if you shut down society, what happens to the economy, and how long does the economy suffer? If communities are wrecked because nobody can make money anymore, how long does it take to rebuild them, and what are the psychological consequences of having these communities deteriorate? If you keep kids out of school for a year, maybe they can make it up, but maybe they can't. Some kids from affluent families, where school is supplemented at home with hours of enrichment, will be fine, but other kids won't have that, so the gap we already see based on socioeconomic status will only get bigger, and those kids may never catch up. So how much should we care about the consequences for education and development, which could wreck an entire generation, in assessing how safe we should try to make society against rampant spread? Now, people did the best they could. They were not mindless about the economic and educational effects. They did the best they could in an emergency to try to save as many lives as possible. But looking back, I think if they faced the same circumstances again, they would probably make different decisions, because they probably underestimated the consequences for education. We don't know that yet; we'll have to wait until the kids who were six and seven then graduate from high school. But my guess is that it has long-term effects, and it's hard to figure out what the long-term effects are on the economy and on the nature of the workplace. When people spend a year or a year and a half working remotely, does that change the way they work when they come back? I was teaching a class in the business school at Berkeley, and COVID made all classes remote. Then COVID sort of ended, and people went back into the classroom, and the morale of the student body was completely different. It did not recover. It was a pain to go to class when you could just watch it asynchronously. The esprit de corps among the students really got shattered. Maybe it will come back, I don't know, but no one anticipated that this would be a possibly permanent, not temporary, setback.
SPENCER: It sounds like you're suggesting that the way the problem was framed focused on certain aspects of what's valuable but didn't consider other important ones. Is that right?
BARRY: Yes, and there were people considering them. But how do you quantify them? You can talk about how many people are going to die based on your best understanding of the biology, but how can you talk about the effects on communities, on first responders, on little kids, with the same rigor with which you can specify the effects on physical health? Those considerations got a back seat because the first priority was to save lives, and again, that's completely understandable, and different countries followed different strategies because it was massively uncertain what the right thing to do was. So yes, if the problem had been framed differently from the start, and if the people in positions of decision authority had been trained to see these issues framed broadly rather than narrowly, we might have followed a different path, and that might have worked out better. I want to emphasize that this is not meant to say that people did stupid things or made errors that could have been avoided. People made errors, in my view, in retrospect; I don't know how they would have been avoided, given how little we knew and how desperate things seemed at the time these decisions were being made. What it is is a criticism of the notion that you can make rational decisions by thinking about the problems you face narrowly rather than broadly.
SPENCER: Right, and I think you're also suggesting that there may have been more focus on what we could quantify than on what we couldn't: we can measure that thing, so let's optimize for that thing.
BARRY: There is certainly some of that. And I don't want to say that how many people die is equivalent to how much math you learn in second grade. Those are very different, qualitatively different outcomes, and you shouldn't treat them as equivalent or even tradable against one another, but I think people mostly assumed or hoped that whatever the effects were on the education and social development of these kids, they would be reversible, whereas death is not reversible. So as I say, they did the best they could.
SPENCER: It did seem, at times, not everywhere and not with everyone, that the focus wasn't even on minimizing death but on minimizing infection, which is a different thing altogether.
BARRY: It's true, but that was because there was no treatment that anyone had any confidence in. Once you got sick, it was just a roll of the dice. All you could do was keep people comfortable; it seemed like the disease was going to take its course, and people felt pretty powerless until a moderately effective drug was developed that made the symptoms, the course of the disease, less serious for most people. But we didn't know that when this all started; it seemed like basically a crapshoot death sentence. We knew that the older you were, the more likely you were to die, and the less healthy you were, the more likely you were to die. But some young, healthy people died too.
SPENCER: It does seem there was also an aspect of overconfidence: so much information at the beginning turned out not to be correct. I don't think that was because anyone was acting nefariously; it was just a complicated situation that was constantly evolving. But people got anchored on that information. For example, it was a long time before people really understood how the virus spread; you mentioned people wiping down surfaces. There were a lot of misconceptions. I guess the meta point is that we were overly certain: we thought we could estimate probabilities when, in fact, we had very little ability to estimate them.
BARRY: The other thing that's really unfortunate is that it got politicized so quickly. Once that happened, separating wheat from chaff became close to impossible.
SPENCER: Because then it's not even a search for truth anymore.
BARRY: No, you don't know if you're searching for the truth or if someone's doing something else, and that didn't have to happen. It didn't happen everywhere, but it sure as hell happened here in the US.
SPENCER: Barry, thanks so much for coming on. Great to chat with you, and for anyone who's interested, you can check out Barry's new book. It's called Choose Wisely: Rationality, Ethics, and the Art of Decision-Making.
BARRY: Let me just say, my co-author and longtime friend is Richard Schuldenfrei, with whom I taught at Swarthmore for 40 years.
SPENCER: Great. Barry, thanks.
BARRY: Thank you so much. You asked terrific questions. I hope you found my answers responsive.
SPENCER: I did. Thank you.