November 27, 2025
Are stock prices set by cash flows or crowd vibes? Why do bubbles last if "smart money" can short them? What should retail traders learn from GameStop and zero-commission options? When does momentum make sense, and when does it burn you? Why don't obvious mispricings get fixed, and what actually stops arbitrage? Will AI help us think clearer, or supercharge manipulation and personalized pricing? Where should regulators draw the line on gamified trading and price discrimination? Do tariffs feel good because they keep others out, even if we pay more? What does the "winner's curse" mean for auctions, IPOs, and everyday deals? How much of what we want is copied from other people, and why does that matter for markets?
Alex Imas is the Roger L. and Rachel M. Goetz Professor of Behavioral Science, Economics and Applied AI and a Vasilou Faculty Scholar at the University of Chicago Booth School of Business, where he has taught Negotiations and Behavioral Economics. Alex studies behavioral economics with a focus on cognition and mental representation in dynamic decision-making. His research explores topics related to choice under uncertainty, applied AI, discrimination, and how people learn from information. Professor Imas’ work utilizes a variety of methods, including lab experiments, field experiments, analysis of observational data and theoretical modeling. His research has been published in the American Economic Review, Journal of Finance, Proceedings of the National Academy of Sciences, Quarterly Journal of Economics, and Management Science, among others.
SPENCER: Alex, welcome.
ALEX: Hey, Spencer, good to be here.
SPENCER: So what determines the price of financial assets, like stocks? Is it deep information on the intrinsic value of assets, or is it just vibes?
ALEX: The theory of financial markets has a very clear answer. It's the discounted value of each company's future cash flows. People in the market have some information about it. They bid on these assets. Some people are selling, some people are buying. The ultimate price, via the invisible hand, is the intrinsic value of the company.
SPENCER: Just to unpack that. You're talking about future cash flow. If I buy a stock and I project that in one year, it's going to pay a million dollars to shareholders, and then the next year, another million, you take all those cash flows into the infinite future, add them up, discount them back, and that's the worth of the company today.
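[Editor's note: the discounted-cash-flow arithmetic Spencer sketches can be written out in a few lines. This is an illustrative sketch with made-up numbers (a $1M payout each year for ten years, an assumed 8% discount rate), not figures from the episode:]

```python
def present_value(cash_flows, rate):
    """Sum each future cash flow, discounted back to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Ten years of $1M payouts, discounted at an assumed 8% per year.
flows = [1_000_000] * 10
print(round(present_value(flows, 0.08)))  # ~6,710,081: well below the $10M undiscounted sum
```

Discounting is what makes a dollar next year worth less than a dollar today; push the payouts further into the future, or raise the rate, and the same cash flows are worth less now.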
ALEX: That's what determines the price. That's the basic asset pricing model. But there's another model that many of your audience is probably familiar with, by John Maynard Keynes, where the stock market is a beauty contest. For those who don't know this colorful story, back when he was writing, there were beauty contests in the newspaper. Essentially, the contest involved a bunch of pictures of usually women, and you had to pick the picture of the woman who you thought everybody else would also pick. You had to think about who is not necessarily the most attractive, but who do I think everybody else thinks is most attractive? He said that the stock market was just like that. It's not really about the fundamental value of the company; it's about what I think other people are thinking is the stock that everybody should buy.
SPENCER: It gets recursive because if I'm trying to model what everyone else is thinking, but they're trying to model what everyone else is thinking, everyone is just kind of, you get this recursive thing where, okay, but what is it all based on at the end of the day?
ALEX: So it's kind of like Tom Schelling has this idea of Schelling points. It's like all of our decisions are based on what I think everybody else is going to do. So I have to look around the world and think, "Ah, that's a focal point that I think other people are going to notice too." And if they notice it, they'll know that I notice it. So that's where I should go, or that's the stock I should buy, and that's a really different model of the stock market.
SPENCER: So what would a real example be, where people would coalesce around one aspect that has nothing to do with the intrinsic worth of the company, but it's something salient?
ALEX: GameStop, that's a good example. So GameStop was this little brick-and-mortar shop that was selling video games, not doing very well, and somebody on Reddit or some other forum said, "Hey, it really sucks that this childhood store that we all love is going into the tank because these private equity guys are trying to destroy it. Why don't we all just buy it? What's going to happen?" And guess what that is? That's a Schelling point. That's a focal point. All of a sudden, everybody else starts looking at that article, thinking other people are looking at it, other people are going to buy it, I should buy it. And as many of you remember (you've probably seen not only the news articles but also the Hollywood movie Dumb Money about GameStop), the price not only popped, it stayed high. It's still high. So it's not like this little blip that went up and all of a sudden it's gone. This sort of thing that is completely divorced from the fundamental value of the company can sustain prices for a very long time.
SPENCER: Yeah, because in this standard economic perspective, you could say, "Of course there's going to be deviations from perfect rational markets sometimes, right? But they should of course correct." Once actors realize that there's a deviation, they'll either buy or sell to correct the price, and then everything returns to the rational agent model.
ALEX: That's the idea. But in theory, it doesn't really have to work like that. If nobody sells, the price doesn't go down, it doesn't correct. People need to actually start thinking, "Oh, actually, the discounted cash flows are really bad. I'm going to act on that information and sell." But if a lot of people's wealth is tied to the fact that we should not be selling, and we're coordinating on the internet very publicly not to sell, maybe the price is not going to correct itself. And it hasn't in many of these cases.
SPENCER: I believe it was Warren Buffett who had a quote, "In the short run, the market is a voting machine, but in the long run, it's a weighing machine." And I think the idea there is, it's trying to say, "Yes, there are these anomalies in the way people value things, but in the long term, if you wait 10 years, it all kind of equalizes." Do you think that's true?
ALEX: I think there's definitely a large grain of truth to that. But the issue is, when is that kind of long-run correction going to actually occur? Maybe it's 10 years. Maybe it's 20 years. In 30 years, let's say the price of GameStop corrects itself. And a bunch of people say, "See, I told you, everything corrects itself." But meanwhile, 30 years have gone by. Countries are born and die within 30 years. It's a long time. So the fact that people don't really know when these prices are going to correct is a big problem for asset pricing.
SPENCER: What's that other famous quote, "The market can stay irrational longer than you can stay solvent."
ALEX: Exactly, exactly. That's a great one. That's the limits of arbitrage. Everybody says, "Well, there are going to be these smart guys who are going to be shorting the stock." But the problem with holding a short position is, if you're out of that position and everybody else is still holding, you go bankrupt.
SPENCER: One thing I've noticed that I found very worrisome is that, imagine you realize things are in a bubble, like their valuations are completely detached from reality. They make no sense. For example, maybe it's the year 2000, and the Internet stocks are getting these crazy valuations that can't possibly be justified based on the amount of money the stocks are going to make. And you're right. But the problem is, how do you actually capitalize on being right? Because if you can't capitalize on it, then you can't put market pressure to make things more rational. What are your options? You could buy options. That's one possibility. The problem with that is, yes, you can make a bet that the market is being irrational, but first of all, there's often a lot of volatility, so the options will cost a lot. And second, you have to time things because you need to say not just that it's irrational, but you have to make an estimate of when it's going to become rational. If you give yourself a really long timeframe, the options are going to be ridiculously priced, and it's not going to be worth it. So you essentially have to be a market timer, which sort of defeats the whole purpose. Or you could short, and arguably, shorting the market is even worse. If you short the market, basically, when it goes up, you lose money. But the problem is, what if it goes up 10x or 50x? Who could sustain losing 50 times their money? It's just a ridiculously unsafe thing to do. So is there really even a mechanism by which the market can become rational in those circumstances?
ALEX: That's the problem with limits of arbitrage. There's a really beautiful paper by Andrei Shleifer and my colleague Robert Vishny, basically showing mathematically that it's really, really hard to do that, and it's actually not a very good correction mechanism to say, "Look, there are going to be people on the other side kind of correcting the market." Here's a nice example. A lot of the formulas we use to price assets are based on the Black-Scholes formula. This is essentially a formula that says, "Let's take the risk of the asset and other sorts of characteristics and calculate the price." This is still used all over the place. The people who created that formula started a fund, Long-Term Capital Management, and they said, "Look, we know what the prices of these assets should be. If we see any mispricing, we're going to short it. We're going to correct the market, and obviously, we're going to make money." Guess what happened to that fund? It went bankrupt, because you just can't time the market like that. And if you can't time the market, who's really "irrational"? I don't like using that word. If you're making money, are you irrational, or do you know something that other people don't? Essentially, they ran out of money, and the fund doesn't exist anymore.
SPENCER: That's an interesting question. Is it really irrational if you're making money on it? The reality is a lot of these bubble-type scenarios, where an asset goes way up in value and is disconnected from the fundamentals of that asset, are kind of like Ponzi schemes or a game of hot potato. It's been going up, so you buy it hoping it will go up, but then you've got to sell it to someone else before it falls. You don't want to be the one holding the bag. The reality is, I think a lot of times, most market players get hurt really badly, even if a few people make out like bandits.
ALEX: Yeah, that's exactly right. When you look at most of these bubbles forming, there's a lot of so-called smart money that tries to get in on it, thinking, "Look, we're smart people. We're going to leave at the right time," and a lot of those people end up holding the bag and going bankrupt. It's really hard to do it. Let's go back to GameStop. Once they did the analysis of who was trading, it wasn't these Wall Street bet folks making a lot of those bets; it was institutional investors. Smart money is getting in on this action. They know that the fundamental value is not the only thing determining the value of the stock. In finance classes, there's only one way to think about the stock market in terms of how each stock is going to be valued. In reality, the people actually working in finance know that John Maynard Keynes was probably right to some extent.
SPENCER: If we think about quantitative investment strategies, historically, some of them have been based on trying to find companies that are misvalued. If you look at the actual worth of the company, it's not appropriate. For example, maybe it's an ETF constructed from different assets, and you can say, "Well, it actually is differing from its true value. So we can buy it until the gap closes." But there are many quantitative strategies that have been momentum-based. Essentially, they're trying to figure out what's moving and assume it's going to keep moving for a little while, and then get out before it stops moving. Isn't this based on some kind of weird psychological phenomenon?
ALEX: Oh, absolutely. What is momentum? It's such a powerful factor in finance that it was incorporated into the standard models of finance because it's such a huge component of asset pricing, and the models just couldn't ignore it. They said, "Look, this is somehow a factor that should be used in asset pricing." When you think about what momentum is, it's psychology.
SPENCER: And everyone is told, "If you're investing, you've got to be fearful when everyone is greedy and greedy when everyone's fearful." But the problem is, we're humans, and so we feel greedy when other people are greedy, we feel fearful when others feel fearful. Unless you're made in a really special way, or maybe you can lash yourself to the mast and prevent your biases from coming into play, chances are you're going to be consumed by many of the same biases as others.
ALEX: This is why things like Robinhood worry me. These are platforms theoretically set up to democratize finance, allowing everybody to trade for free. The problem is, if people are biased, then you're not really democratizing finance. You're democratizing mistakes and getting people to lose a lot of money.
SPENCER: Yeah, and it might even go beyond mistakes to you're essentially telling people, "Hey, here's a sophisticated way to gamble that feels like you're doing the things that the big players are doing," but in reality, you're making not only very risky bets, but very probably bad expected value bets.
ALEX: Yeah. There's a really nice paper by my colleagues at the University of Chicago Booth. What are retail investors buying on Robinhood? They're not buying index funds, they're not buying a nice little package of diversified equities. They're buying crazy options and they're losing tons and tons of money to the institutional investors that are on the other side.
SPENCER: It's democratizing hand grenades. It's not the best strategy. It also reminds me of online gambling, which has become absolutely huge. What I've heard is that one of the most popular ways they make money is by giving these crazy bets called parlays, where they have almost no chance of winning, but the potential amount to win is huge. It reminds me of these crazy options trades on Robinhood. In theory, you could make a tremendous amount of money, but actually your odds are really horrible. That huge potential win is really enticing, and maybe you heard about someone else who had that huge win.
ALEX: Yeah. One difference between these parlays and lotteries is that you know the chance is very low when you buy a lottery ticket. With these options, they're so complicated, you might not even know that the chances are so low. You might think they're moderate. People think, "These are financial markets. The folks on Wall Street make money in financial markets. Now is my chance to get in on the money." But you're not doing the same thing as the folks on Wall Street. You're giving them your wallet. That's what you're doing.
SPENCER: Yeah, with the lottery, unless someone's superstitious, most people accept that, on average, you lose money playing the lottery. This is very well understood. I don't know if people realize that with some of these options trades, especially because there are so many online gurus telling you their secret method, and people think there is a way they could actually beat the system with this.
ALEX: Yeah, I can't tell you how many times I'm talking to somebody and they tell me, "Have you heard about trading on Forex?" Forex is the exchange between currencies, and you're basically betting against the most sophisticated investors out there, and you're literally giving them money. I personally have research on Forex. It is a negative expected value bet. It is like going to the worst kind of casino. There are seminars you can attend to learn how to make money betting on Forex. That's the same thing as having a seminar about how to go to Riverside Casino and make money. You're not going to make money by going to Riverside Casino. It's the same thing.
SPENCER: So what are some of the implications of this idea that the stock market is not just about the true value of assets, but it's about this Keynesian beauty competition?
ALEX: Well, some of the implications are, again, something that we touched on before, that it's really hard to maintain discipline in a market. When I say discipline, I mean having prices somehow tied to the value of the company when you have these forces that are completely divorced from these values pushing the prices up and down. It also makes the market very unpredictable because you're not going to be able to put in all of these variables and say, "This is what we think the situation is going to be." For all of those reasons, this introduces a lot more volatility and potential risk into financial markets once you start thinking about these things on a more macro scale, where you're no longer thinking about the fundamentals of the company or the fundamentals of the economy, and instead you're thinking about what I think everybody else is thinking. All of a sudden, these little, tiny events, if they change enough people's perceptions of what everybody else is going to be viewing, things go boom.
SPENCER: Do you think it actually leads to greater large-scale instability, like a greater chance of market crashes?
ALEX: Yeah. I think the idea is that that's exactly right, that you're going to be seeing fluctuations in markets where you look at the economy, you look at all the fundamentals, and you're like, "Why are things moving up or down? I don't understand." And then you realize that that's not really why they're moving up and down. They're moving up and down because people are thinking that other people are thinking that you should sell or you should buy, and so this makes it very hard to do policy in those sorts of settings.
SPENCER: You also mentioned that it means that assets might be mispriced, but people might wonder, why does that really matter? Why does it matter that the stock market reflects the actual underlying worth of an asset?
ALEX: So that's a great question. It kind of gets into what the role of having a stock market is in the first place. What is the role of finance? The ideal is that finance and financial markets allow capital to be allocated to the places where it has the highest return. Say I'm a company that has invested in my technology. I came up with a great idea. I come to the stock market. Everybody realizes that this thing I've invented is going to pop and be great, so they should buy my company. On the other hand, another company is sitting there, completely a zombie, completely dead. That company should not get any more capital, and that should be reflected in the price. If prices are divorced from these sorts of things, then the whole function of financial markets is really not there.
SPENCER: It seems like there's an underlying assumption there that if a company is making money, that it's providing some useful benefit to society. And obviously you could debate the extent to which that's true, but if we start with that assumption, then we can say, well, "If things are making money, they're doing some benefit to society. Therefore we want financial markets to act in such a way that they provide capital to the things that are able to actually make capital, right, and that things that can't actually make capital don't get capital." Does that seem like the thesis? Yeah, so then, I guess there's a question, to what extent is that really a good assumption that things that are making money are actually doing good for the world?
ALEX: If the assumption of capital markets functioning the way they should is correct, then that's exactly what should be happening. The companies that are making money are being productive. If prices are divorced from companies making money, then companies making money are not necessarily productive for the world. It could be a company that's doing something horrible and completely unproductive. But people think that they can make money by just buying the stock of this company because they saw it on Reddit, and in that case, the whole relationship, and the whole kind of idealized version of financial markets as means of getting capital to companies that are productive goes out the window. You're in a lot of trouble.
SPENCER: The way I look at this, and I'm curious whether you would agree with me or not, is that if a company can make money by harming people, that's a really fundamental problem in society, because many people tend to be willing to make money, and if you're not willing to do it, maybe someone else is. If they're able to make money harming people, people will likely step in and do that. Financial markets can actually be an engine of harm, because they could allocate money to things that are making money by harming people. In a well-regulated system, we have laws to try to make it as hard as possible to make money harming people. Insofar as regulators are able to do that, then what's left over? The ways to make money are either neutral, fine, not a big deal either way, or beneficial. You want the companies getting money to be the ones creating life-saving drugs, not the ones selling addictive drugs that don't actually cure your problem.
ALEX: Exactly. If a company is out there, it's common knowledge that it's not going to be productive and is going to be doing something to harm people, and there's a narrative that develops, either online or in other social circles, that you can still make money investing in that company, it ends up getting a lot of capital to do more and more unproductive or harmful things. This is a big problem. Financial markets no longer serve the purpose of being engines of getting companies to be productive and doing social good. You're getting the opposite of that.
SPENCER: I want to clarify for the listener. It seems like there are two assumptions here that have to work together. The first is that companies, when they make money, are actually doing good things to make that money, or at least not harmful things. If you can make money by poisoning people, then funneling more money to that company will just lead to more harm. That's the first assumption. The second assumption is that capital gets allocated to those companies that can use it productively, that can produce a return from it. If you have the first assumption in place, then by allocating more money to companies that can make more money from it, you effectively benefit the world more. That's the second assumption, which is about efficient allocation, and these bubbles and irrational behavior in the markets can screw up that second assumption. Does that make sense?
ALEX: Yeah. I think to the first assumption, it's really important to highlight the role of a strong state where the government is willing to say, "Look, the capital markets are doing their own thing, and people are buying and selling stocks because they think they can make money off of these stocks. But as politicians and as the state, we don't want people being harmed. That's why people voted for us in the first place. We need to make laws and regulations to keep these companies from doing harm." The first assumption really needs a strong role of the state. Even without this model of Keynesian beauty contests in the stock market, even the normal vanilla stock market model might still get companies that are doing harm, making a lot of money and getting a lot of capital.
SPENCER: Let's shift topics. You co-wrote this book, The Winner's Curse, which is a re-release, but my understanding is it has tons of new content compared to the original. One of the fascinating things you told me before we started recording is that before actually publishing the book, you went and replicated the studies from the book. Tell us about that.
ALEX: The title of the book is Winner's Curse: Behavioral Economic Anomalies, Then and Now. The goal of the book was really to revisit where behavioral economics was when the original columns that The Winner's Curse was based on were published, which was around 1992, and just ask, is this true?
SPENCER: Yes, we often talk about the replication crisis. I also want to point out that over the last few decades, we've been compiling these anomalies where humans seem to deviate from what you'd expect of a perfectly rational agent. You can define all these scenarios; you could say, "What would a perfectly rational agent do in this scenario, assuming they have some utility function they're trying to maximize, assuming they could perfectly process information and work with probabilities perfectly, etc.?" There have been so many of these anomalies that have been collected, and some people think it's gone too far. Some people say, "We've overshot; actually, humans are much more rational than that. We're not such irrational creatures." I think that's part of what this book is responding to, if I'm correct.
ALEX: Yes, part of what it's responding to is to say, "Look, the original set of anomalies were discovered in a very disciplined way." They were discovered by saying, "Here are the assumptions underlying economic models." Let's take these economic models, since we think they're important, and I think most people agree they're important; they're the kind of bedrock on which policy — and we were just talking about financial markets — all of these sorts of things are built. Let's stress test them in a disciplined way by saying, "Actually, this assumption doesn't really agree with how we think people actually behave. Let's set up an experiment to show that people behave as psychologists think they do." I think that's how the original anomalies were discovered. What we found in the book is that replicating them now, all of them hold up.
SPENCER: That's phenomenal, because it's certainly not true of all areas of social science. There have been all kinds of problems with replicability, and seeing this hold up — honestly, it doesn't surprise me so much, because I feel behavioral economics, in my view, has been one of the big bright spots in social science. But it is really cool to see that you replicated it. And how many books actually go to replicate their studies? Very, very few. So kudos to you.
ALEX: You don't have to take our word for it. When you purchase the book, it comes with a bunch of online materials. Those include not only the results from our own replications, but also everything you need to do the replications yourself. You have the exact instructions. You go on one of these online crowdsourcing platforms, upload the instructions. It's pretty inexpensive. You collect the data, analyze it, do it yourself. We really want to show that the bedrock of behavioral economics is very much sound.
SPENCER: Can you give us an example? What sort of a really compelling example from the book?
ALEX: I really like the first chapter, which is what the book's title is about: Winner's Curse. Let's say you go to a bar with a jar of coins, and you say, "All right, the person who bids the highest for this jar of coins gets the money from the coins." You can convert them into dollars. You can Venmo them to the person so they don't have to walk around with coins. But everybody bids. Most people bid below the value just because they're risk averse. Time after time, you could do it yourself. The person who wins the money is paying more for that jar than what's in the jar. That's called a winner's curse; people lose money from winning. The cool thing about it is that, one, you can replicate that the winner loses. But two, it's not just about jars and auctioning coins. The winner's curse was first discovered by oil executives when they found that every time they bid on wells for drilling, if they win, there's less oil in those wells than they estimated, so systematically, they end up losing. This is not just a cute little behavioral phenomenon; it's out there in the world where very sophisticated consumers and firms are suffering from it. That's one of my favorite anomalies that replicates super well, and it's something that firms need to be super aware of when they're deciding how to bid and things like that.
SPENCER: I once "replicated" an example of this with a social experiment at an event I threw. It's a slight variant: there's a certain amount of money, and people can bid on it. But the catch, as everyone was told in advance, is that both the winning bidder and the second-place bidder have to pay.
ALEX: Made a lot of money with that.
SPENCER: Okay, there's $100. How much do you want to pay for it? But remember, if you're the second-place bidder, you still have to pay. Of course, the bidding keeps escalating. For a while, lots of people make bids, but then you usually end up with two people left, one who has bid $70 and one who has bid $71. The $70 person is thinking, "Wait a minute, I better bid higher because I don't want to pay $70 for nothing." Then they bid it all the way up to practically $100. I didn't make them pay at the end; it was just to illustrate a point, but obviously that's a twist on what you're describing. I think the winner's curse is so fascinating because it shows that something can make sense as a one-off: estimating this one thing, on average, I'm accurate. But once you take the broader context into account, you ask, "Wait a minute, the fact that I won this, what does that imply? What does it say that nobody else outbid me?" The fact that you won gives you extra evidence that says, "Hey, maybe I didn't want to win this."
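[Editor's note: Spencer's setup is essentially the classic "dollar auction" thought experiment from game theory (due to Martin Shubik). A minimal sketch of the two-bidder endgame, using illustrative myopic logic rather than anything the speakers specify:]

```python
def dollar_auction(prize=100, step=1):
    """Two myopic bidders. The trailing bidder always prefers raising by
    `step` (a chance to win `prize`) over walking away (a certain loss of
    their standing bid), so bids escalate; this demo stops once the lead
    bid passes the prize itself."""
    leading, trailing = step, 0  # one bidder has opened at $1
    while leading <= prize:
        leading, trailing = leading + step, leading  # trailing bidder raises
    return leading, trailing

lead, trail = dollar_auction()
print(lead, trail)  # together the two bidders have committed more than the prize is worth
```

The escalation never has a natural stopping point: at every step, raising looks better than eating a sure loss, which is how a $100 prize attracts more than $100 in combined bids.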
ALEX: Yeah, exactly. That kind of feeds into what we talked about before. You have to think about what everybody else is doing. Everybody's trying to estimate the value of this jar. Some people are saying it's got $15, some people are saying it's got $13 and they bid $11. Other people are saying it's probably $20, so they'll bid $18. They're still bidding below the value, but $18 is probably going to win you the jar, and there's only $15 in it. You need to take into account that the more people there are in the room bidding, the higher the chances that if you just take your value and bid according to that value without taking the winner's curse into account, you're going to end up losing money.
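[Editor's note: Alex's point, that more bidders make the curse worse, can be checked with a quick Monte Carlo sketch. All the parameters here (a $15 jar, normally distributed estimates, a fixed $2 "shave" off each estimate) are illustrative assumptions, not figures from the episode:]

```python
import random

def avg_winner_profit(n_bidders, true_value=15.0, noise=5.0,
                      shave=2.0, trials=20_000, seed=0):
    """Each bidder bids a noisy estimate of the jar minus a fixed shave;
    the high bidder wins and pays their bid. Returns mean winner profit."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        bids = [rng.gauss(true_value, noise) - shave for _ in range(n_bidders)]
        total += true_value - max(bids)  # winner's profit this round
    return total / trials

for n in (2, 5, 10, 20):
    print(n, round(avg_winner_profit(n), 2))  # losses deepen as bidders are added
```

The mechanism is exactly the one Alex describes: the winning bid is the maximum of many noisy estimates, and the maximum is biased upward, so a shave that protects you against two rivals is nowhere near enough against twenty.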
SPENCER: Do you see any ways that the behavioral economics approach has gone too far, where you're like, "Actually, the rational actor model was right all along?" It turns out that, in this scenario, people weren't actually being so irrational. In fact, maybe they had additional context or additional constraints that we didn't understand at the time. I'll give you a funny example of this. We were trying to replicate some questions related to the sunk cost fallacy, and we were using various simple standard scenarios, asking people, "Okay, imagine you order a meal, and the meal comes and you realize you not only do not like the taste, but you're full, right? Do you keep eating it?" Most people would say yes, and you could just stop there and be like, "Oh yeah, sunk cost fallacy." But then we also included questions asking them to please explain their answer. Why did they say they would keep eating it? It turned out, tons of them were saying, "Well, I wouldn't even be at a restaurant alone, so I'm going to be with another person. It'd be weird if I'm just sitting there not eating my food." You realize they're operating under a different constraint. It's not to say the sunk cost fallacy isn't real; I think it is. But it's like, "Oh, wait, there are extra constraints that simple models don't always take into account."
ALEX: That's an excellent point. I could talk about this for two hours if you let me. We talked about this in the epilogue of the book. The new wave of behavioral economics is doing exactly what you're describing. Rather than saying people are stupid, we ask, "What are the constraints that lead them to behave the way they do?" Let's say people are making a crazy decision about their mortgage. They get some APR, and in the media, they're like, "All these people are taking these crazy APR mortgages. They can't pay their mortgages; they're going bankrupt." Are these irrational, silly people? How were they given the mortgage? What were their constraints when they were filling this out? You look at these documents; I'm a PhD economist, and I wouldn't have been able to figure out what the APR is. The constraint is that I have limited attention. I have a family that wants me to buy a house. I'm being presented with a set of information that makes this super attractive. I signed the paperwork. If somebody told me, "Hey, here is the APR right now. Here's how it's going to change. This is how much you're going to pay in the future," most people would say, "Screw that. That's crazy. I'm not going to take it." The constraints are attention, my social situation, and the sophisticated interplay between me and the companies that are really trying to screw me. Taking all of those things into account will lead to what looks like mistakes. But they're only mistakes if you don't take these constraints into account.
SPENCER: There are a couple in particular I want to run by you because it seems like there are these waves of research where some kind of bias will be shown in an experimental setting, and then other research will come out saying, "Well, no, actually, if you really think about it the right way, people are being rational." An interesting example of this is hyperbolic discounting. It's this general idea that if you're thinking about how much something is worth to you a year from now versus tomorrow versus five years from now, one way to value it would be to say it's equal regardless of when you get it. But we can pretty quickly dismiss that idea because you don't even know if you'll be alive five years from now, so how could it really be worth the same amount to you? You have to have some kind of discount. Then you can show that if the discount function is not exponential, theoretical problems occur. People can prefer a larger amount of money in the future over a smaller amount today, but then reverse that preference as the dates get closer. You want consistency, so you say, "Okay, if the way they discount the future is an exponential function, you avoid these inconsistency problems." But then they do a bunch of experiments and show that this is not the way humans really discount stuff. They use other discount functions, like hyperbolic discounting. "Okay, seeming irrationality, right? Interesting." But then a paper comes out and says, "Well, maybe it's more complicated than that. If you actually assume people don't know the right discount rate, that there's some uncertainty in the discount rate, maybe hyperbolic discounting is rational after all."
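The preference reversal Spencer describes can be made concrete with a minimal sketch (the discount rates are made-up illustrative values). An exponential discounter's choice between $100 sooner and $120 one period later never flips as both dates move into the future; a hyperbolic discounter's choice does:

```python
import math

def present_value(amount, t, disc):
    """Value today of receiving `amount` at time t under discount function `disc`."""
    return amount * disc(t)

def exp_d(t, k=0.5):
    return math.exp(-k * t)      # exponential: constant per-period discount factor

def hyp_d(t, k=1.0):
    return 1.0 / (1.0 + k * t)   # hyperbolic: discounting flattens out over time

for t in (0, 10):
    # the choice: $100 at time t versus $120 at time t + 1
    sooner_exp = present_value(100, t, exp_d) > present_value(120, t + 1, exp_d)
    sooner_hyp = present_value(100, t, hyp_d) > present_value(120, t + 1, hyp_d)
    print(t, sooner_exp, sooner_hyp)
# the exponential chooser prefers the sooner payment at both horizons;
# the hyperbolic chooser prefers sooner at t=0 but flips to later at t=10
```

That flip is the dynamic inconsistency: at a distance you plan to wait for the $120, but when the earlier date arrives you take the $100.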
ALEX: That's a 2019 paper by Xavier Gabaix and David Laibson. I think it's called "Myopia and Discounting," showing that, actually, uncertainty over future tastes generates hyperbolic discounting.
SPENCER: So results like that, I'm wondering, do they change your view on how irrational humans are? Could it be that a bunch of the seeming irrationality is some kind of deeper rationality that we just haven't understood?
ALEX: Look, I spent seven years at Carnegie Mellon University. That's the home of Herb Simon, who came up with this idea of bounded rationality. Bounded rationality is the idea that people are not irrational in the sense that they're doing crazy things that don't make sense. They're rational, but they're constrained in how much attention they can pay to something, how many calculations they can make, how much information they have about the future, even about themselves. You incorporate those sorts of constraints into our standard model, and all of a sudden you get what looks like crazy irrational behavior, but it's actually boundedly rational. I'm a big proponent of that view.
SPENCER: I see. So overall, you would say, "Humans are not that irrational, not as irrational as they seem, potentially."
ALEX: Doing their best. That's my view of humans.
SPENCER: They do their best. It's interesting because while I find papers like that hyperbolic, this kind of one fascinating, I actually tend to lean towards thinking that humans are behaving very, very irrationally. A lot of that, I think, is direct evidence that people seem to do a lot of things, myself included sometimes, for sure, where they just regret it immediately after, and they're just like, "What? Why did I do that? That was dumb. By my own values, clearly, that was not what I should have done. I clearly had enough information to know better." It just seems like a very constant state of the human condition.
ALEX: You kind of have to unpack all of these scenarios a little bit. I'm constantly making mistakes. But why am I making mistakes? A lot of the time I'm making mistakes because I'm thinking about something else that I think is more important. I was making a cup of coffee this morning, and I really needed that coffee. I turned it on, went upstairs, did something for 20 minutes, thinking, "Oh my god, I'm gonna go downstairs and have the best cup of coffee in the world." I go downstairs. There's no water in the coffee pot. Everything's on fire because I've turned the coffee machine on and everything's hot and bubbling. I made that mistake. I should have put the water in, and I know how to put the water in. I did not put the water in. What was I doing? I was planning my day while I was making the coffee. People have limited attention. They are paying attention to things that they find most important. An ex post mistake — Richard and I talk about this all the time — is not necessarily evidence that you were doing something irrational. Obviously, there's room for behavioral economics, and there's a lot of room for behavioral economics, but I think the principled way of thinking of behavioral economics is that it introduces constraints from cognitive psychology into the standard model, rather than saying people are stupid. I think that's a better way of doing things, and it has much more explanatory power than just adding an extra parameter and saying, "Look, this doesn't make any sense, but this is how people behave."
SPENCER: Certainly, it is a much nicer way to do things. It makes people feel less bad about it. How do you make sense of something like astrology? We actually ran a couple of studies on astrology. First, we ran one on astrology sun signs, Aries, Pisces, whatever. We found that they're basically correlated with nothing, which is interesting because you might think they'd be correlated with something due to maybe the time you're born, summer versus winter or something. But if there are effects there, they're just so small that you would need a ridiculously huge sample size even to pick up on the effects. So that was the first thing. We put this out casually. A lot of people got angry at us and said, "Well, that's not real astrology. Real astrology uses full astrological charts." So we thought, "okay, let's try that." We actually recruited 150 astrologers. We gave them lots of information about people. We tried to match them to their astrological charts. Not only were they no better than chance, but the more experienced astrologers were much more confident and also no better than the less experienced ones. So, astrology has such huge levels of belief in it. It seems to correlate with nothing. It seems to be total noise, as far as we can tell. How do you make sense of something like that?
ALEX: How do you think about the broader question of belief systems that are not based on truth, and that people are very confident about and make a lot of decisions based on?
SPENCER: If they were making decisions based on it, I think it'd be easier to explain. It's like, "Oh, it's a fun game, whatever." Many people in astrology don't make decisions based on it; it is a fun game for them. But there's clearly a sizable contingent of people that take it seriously, and it's guiding their choices. They're using it for advice, to understand their lives.
ALEX: This gets at, now we're outside of behavioral economics a little bit. Let's talk about some psychology. I think there's a lot of psychology showing that people need agency in their lives. They need a sense that the world is not this random process, where you're in a machine and have no control over it. You're just making decisions where your effort or whatever you're doing generates some random outcome, and you just have to deal with it. If I'm a Pisces, I believe that this matters, so therefore, on a Tuesday with the full moon, if I do this, things are going to be really great for me. That kind of belief is comforting. It generates positive utility. This belief in something that is definitely not based on any truth, if I believe in it, calms me down, gives me agency, and provides a sense of control over my life. Believing in the system obviously leads to some mistakes, but also believing in that system could generate a better mood than the counterfactual of not believing in the system.
SPENCER: Yeah, and I think you hit on a very important point, which is that it's not just truth that we are maximizing. Even a perfectly rational agent wouldn't maximize just for truth; they have some utility function, of which truth is maybe one part. But I think where it gets a little funny is that this has to happen non-explicitly. It's hard for someone to say, "Well, I know x is true, but I'm going to believe y anyway because it's better for me." That's very hard to do. I don't know if people are capable of doing that. Maybe some are, but if you can't do that, then you sort of have to implicitly value the thing while simultaneously believing it and not believing it, in order to arrive at, "Ah, it's actually better for me to believe in this false thing."
ALEX: Kind of like a cognitive dissonance sort of thing where you're holding two beliefs at the same time and they're conflicting, and you just kind of have to live in this space where you're flipping back and forth. I think people certainly do that. I wanted to also segue into saying I don't think people are hyper-rational, and everything is rational, and everything makes sense. I just want to put a little bit of water in this idea that people are just out there doing crazy stuff all the time. But I think, in going into psychology, we have evidence that people are holding multiple beliefs. I'm from Eastern Europe originally, and the idea of holding two completely contradictory beliefs, that's like the whole system of the government. Under Soviet times, you're supposed to hold the belief that we're a communist nation and everything is democratic. But at the same time, if you say the wrong word, you know where you're going.
SPENCER: That's dark. It does seem to me that belief in humans is much more complicated than the simple "You believe something, you don't believe something." You can kind of know something on one level but be in denial. Maybe someone knows their partner's cheating on them, and they kind of know it, but yet they never allow themselves to have the thought of it. But maybe in some ways, it influences their behavior. So yeah, it just seems like we need a really rich concept of belief to make sense of these phenomena. So jumping to a new topic, AI is becoming a much bigger part of people's lives. People are using ChatGPT and Anthropic Claude all the time. What do you think the effect of this is going to be on people's biases and decision-making? Is this a good thing, or is this going to actually just make us worse thinkers?
ALEX: So, it kind of depends on which world we're living in. If we're in an ideal world where people know what their biases are, and they know that the AI has more information, knows what to do, is super intelligent, we end up in a better world. Because people say, "Look, these mortgage forms are super complicated for me, and it's going to be really tough for me to go through them. I'm going to put it through Claude, and Claude is going to tell me everything that's going to happen. I'm going to make a better decision." In that world, AI completely eliminates biases. But we could live in a different world where AI is actually being used by companies that are trying to take advantage of people. They used to be able to take advantage of us only in a very crude way because they didn't have a lot of information about any individual; they just had to put something out there and see who bites. But AI basically allows companies to build models of individuals given the data that they're freely providing on the internet, and these companies can now exploit biases on a very much larger scale, which leads us to live in a much worse world in terms of biases.
SPENCER: Okay, so it sounds like we're talking about two different types of AI. We've got LLM chatbots where people may be talking to it, maybe asking for advice, that kind of thing. And then we have more AI prediction models, like TikTok trying to predict what video to show you, or Amazon trying to predict what product you're going to buy. And there might be active incentives to create biases or exploit biases. Is that what you're getting at?
ALEX: Yeah, that's exactly right. So I'm classifying both of these things as AI.
SPENCER: Because even with LLMs like ChatGPT and Claude, there's a big problem, I think, of these models reinforcing our own biases. In an ideal world, they might call us out on our misconceptions; in practice, they're trained with human feedback. Literally in the training process, they're shown different examples and people are asked, "Which of these do you like better?" People tend to like things that reinforce their own beliefs, even if those beliefs are biased, right? And so we had this big problem with sycophancy recently, where they had to roll back some of that because it was getting excessive.
ALEX: Yeah, the human part of reinforcement learning, where people are sitting down and saying, "Look, we don't want the model to do this. We don't want the model to do that." Unless that's built into the process, the model will propagate biases; it's going to have biases in it. If you prompt it to say, "Give me the unbiased opinion according to this model" (which absolutely nobody's going to do), it's going to be able to give you that. That's what I mean by being in a fully rational world where people are aware of their biases. If a person is just like, "Which one of these things am I going to like better?" and you put that into an LLM, and the LLM wants to please you, then you're absolutely going to perpetuate biases.
SPENCER: I have custom instructions I set up permanently for my LLMs, and it includes three lines: call out my misconceptions, be brutally honest, and tell me when I'm wrong. I posted these online, saying I think this really helps it be less sycophantic. I found that it is much less likely to compliment me and more likely to disagree with me, which I think is really healthy. But someone else copied this into their LLM, which is a different LLM, and they found that it was actually making fun of them and telling them that they're a fool. I was like, "Okay, maybe it pushed it a little too far." You can see why people don't like instructions like this.
ALEX: Yeah, for sure. We don't really know where things are going to go with AI broadly. But unless there's some sort of state intervention, we do kind of know where one piece of it is going: we're going to have system-wide third-degree price discrimination all over the place.
SPENCER: Yeah, could you elaborate on that? What is price discrimination, and how is AI used for that?
ALEX: Let's say you're in the world 20 years ago. You're trying to buy an Apple iPhone, and for you, the maximum you're willing to pay for it is $2,000; you really like it. For me, it's $1,000; that's how much I like it. There are a bunch of consumers with willingness to pay all over the place, and supply equals demand at one price, let's say $1,200. You go out there, Spencer, and buy an iPhone for $1,200. The difference between your willingness to pay and the price is your consumer surplus. Apple is not giving you a Spencer price; it doesn't know what your price is. All it knows is supply and demand, which allows consumers who value the product at more than the price to get surplus. What is AI doing already in these online settings? It's building a model that knows your willingness to pay. When you go out and buy something, it's going to give you a different price than me, and that is third-degree price discrimination, where people get different prices depending on their willingness to pay, and that eliminates consumer surplus.
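A toy calculation makes the surplus transfer concrete. The willingness-to-pay numbers below are made up for illustration:

```python
# hypothetical willingness to pay for five consumers
wtp = [2000, 1500, 1200, 1000, 800]
market_price = 1200

# one market price: everyone whose value is at least the price buys,
# and keeps the difference as consumer surplus
uniform_surplus = sum(v - market_price for v in wtp if v >= market_price)

# fully personalized pricing: each buyer is charged exactly their value,
# so consumer surplus is zero and the firm captures everything
personalized_surplus = sum(v - v for v in wtp)

print(uniform_surplus, personalized_surplus)  # 1100 0
```

The same goods change hands in both cases; what changes is who keeps the $1,100 of surplus, which is why Alex frames this as a division question rather than an efficiency question.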
SPENCER: Sorry, what does third degree refer to?
ALEX: Third-degree is just the terminology for that type of price discrimination; there are other types as well, and some forms of price discrimination are illegal.
SPENCER: And is that legal in the US?
ALEX: Some versions are legal, some versions are not legal. Take the sort of thing where you're pricing by particular groups, for example. I know I'm not going to put every single variable into my model, but I'm going to put certain things in, and depending on what I put in, it becomes legal. And those certain things might still be super predictive of your willingness to pay.
SPENCER: Right. So it's presumably not allowed to say, "Well, you're a white person, so I'm going to give you a better price." That's surely going to be illegal. But if it's using information about your past purchasing behavior or something, maybe it's allowed to do that.
ALEX: Yep, exactly.
SPENCER: So you kind of pointed at this economic argument a little bit. Obviously, people generally feel it's unfair when they get different prices. That's a common reaction, saying that's not fair. There are cultures where that's not true, where it's very common to haggle for goods. And you can say, "Well, if you have a relationship with the storekeeper, or you can convince the storekeeper you're not willing to pay more than x, maybe they'll give you a discount." So whether it's fundamentally wrong is an interesting question; maybe there's some cultural aspect to that. But you also pointed at the economic issue, which is that there's a certain sense in which the market price, when there's one market price, is optimal. Can you tell us what is the sense in which it's optimal?
ALEX: It's not that there's something in economics that says third-degree price discrimination is "not optimal." There's this idea of Pareto optimality, where an outcome is optimal if there's no way to move to a different system where everybody would be better off and nobody would be worse off. Let's say you move from the one-price version to the full third-degree price discrimination version. That's not a Pareto improvement, because consumers are hurt; and moving back the other way isn't a Pareto improvement either, because the company is hurt.
SPENCER: So is there a fundamental issue that this is a way companies can basically take the consumer surplus, taking value away from the consumer to benefit the company? So it's not that it's a non-optimal way of doing things; it's just that it screws over consumers for the benefit of the company.
ALEX: Exactly. So it's just how surplus is divided. That's not an optimality thing, that's not an efficiency thing, necessarily. That's a normative thing. That means that's us as a society saying, "Look, we don't want this to be happening. We don't want companies to be getting all of this market surplus."
SPENCER: And insofar as companies do get most of the market surplus, or all of it, it really undermines the whole benefit of capitalism. Because the whole point is that companies should be producing things that are valuable to society, and that's why they get rewarded for it.
ALEX: Yeah. Again, I could talk about this for hours. Capitalism is a catch-all term for a lot of different things. There's really, really bad capitalism that you do not want to live under, where one company makes all the products, employs all the people, takes all of the surplus, and pays everybody exactly their cost of effort, and that's it. Nobody is getting any surplus from buying anything because prices are set a certain way. That's still capitalism. I don't think anybody wants to live in that world. The type of capitalism that people want to live under has strong regulation, so there are no monopolies, or very few monopolies, and there's a lot of competition between companies for both workers and products. That's just a different sort of capitalism. And also, you don't have too much of that third-degree price discrimination, so people can keep some of that surplus.
SPENCER: So what do you think the solution is? Should it just be made illegal to do this?
ALEX: I think regulation is the only way that we can ensure that people have consumer surplus. Regulators are always going to be behind tech companies in terms of what they can do, but they could try a little harder.
SPENCER: Do you see other big ways you think AI will have an impact on decision making or biases that people have?
ALEX: All I can say is that it's going to have a huge impact. The direction of that impact, I don't want to make any strong bets. I can see it going either way. It really depends on the regulatory framework and what politicians are willing to do. It depends on how AI companies decide to put these products into the world. It could be that people get burned by these things and see that every time I use it, I get screwed a little bit. I'm going to stop using it. Companies don't want that, so they might actually incorporate safeguards and do more human reinforcement learning after training these models to make them better for decision making. That's certainly a possibility. It's just really hard to predict; the world is moving under our feet.
SPENCER: One thing I've noticed very recently is people using LLMs to argue their side of an issue. I started having people post on my Facebook posts, "You're wrong," and then it's just a list of an argument clearly made by an LLM, where they basically told the LLM, "Prove why this thing is wrong." Of course, if you do that, it will always give arguments on the side that you tell it to give arguments on. It's an interesting kind of dual-use, sort of cognitive bias technology, because you could just as easily say, "Argue all plausible sides of this issue," or you could say, "Argue the side I disagree with," or you could say, "Argue the side I agree with." Depending on which of those uses you do, it could make you less biased or could just massively increase your own bias.
ALEX: So it's actually kind of cool, because I'm an active user on X, or Twitter, or whatever you want to call it. At first, when they put Grok in there, I thought, "This is going to be kind of creepy." But then I started noticing all these people saying crazy stuff online and then saying something like, "Grok, am I right?" Or somebody else coming in and saying, "Grok, is this person right?" And Grok says, "This person's wrong," and because it came from this third-party arbiter, all of a sudden people were actually able to shift their beliefs in the right direction. Now this gets into how often it's going to be right and how often it's going to be wrong. But there's actually evidence by David Rand and some colleagues showing that LLMs do have this potential to debias people, exactly in this way: people have these conspiracy theories, and then they interact with the LLM, and the LLM pushes back in a way where people are not as defensive as they would be arguing against another person, and beliefs actually converge closer to the truth.
SPENCER: Well, it certainly helps if you're discussing something with someone if you feel like it's on your side. If you feel like it's from an outgroup, or you know that this person doesn't have good intentions towards you, it makes it very hard for those to be productive conversations. If someone views the LLM as just sort of this helpful agent and doesn't think of it as trying to do something to them or trying to manipulate them, maybe that opens them up more to hearing counterarguments.
ALEX: Yeah, exactly. When you're arguing against somebody, you kind of have this assumption that they're trying to get you or they're from the other tribe. I don't think that people have that as much with LLMs, so their kind of moat for having an opinion and incorporating information is more open than it would be if they're arguing with somebody from the other side.
SPENCER: Something that's been making this more complicated is that Elon Musk and some others have been claiming the LLMs are biased, and then hence we get Mecha Hitler and other unfortunate things. That could foment distrust of LLMs. Not to say that you should trust LLMs, but it could make it harder for them to act as neutral arbiters. In fact, just the other day, for the first time, I saw an LLM that was explicitly being advertised as a conservative LLM, where they said, "Oh, this is going to be biased in this particular way," rather than trying to market it as unbiased.
ALEX: Something that could happen is that we go from having these broad models, like what Grok is now and what OpenAI has now. Instead, we go into these balkanized models that are trained for specific purposes. There you can get into some wild space where you can have, again, a conservative LLM, or an LLM that's trained on Mein Kampf, or something like that, and these super racist LLMs. People using these LLMs as arbiters for themselves will completely spiral bias into a very different world.
SPENCER: You can certainly see why, if you had certain foundational beliefs about the way the world works, you would want an LLM that supports those as your basis. For example, let's say you're deeply religious, and you want your LLM to reflect the religious values or truths that you think you have. It's not implausible that people will eventually end up splintering to their favorite LLM that reflects their worldview.
ALEX: Yeah, exactly. If we write down a model that people's utility from having their beliefs upheld is higher than anything else, then there will be a market for these balkanized LLMs. That will get us into a pretty weird space.
SPENCER: Shifting topics again. So let's talk about how behavioral economics affects some of the issues of the day. There's been a lot of talk about tariffs, and there's economic analysis of tariffs, which, please correct me if I'm wrong. But as I understand it, generally, tariffs are thought of as inefficient; basically, you're preventing people from buying things, or you're raising costs. From an economic point of view, it's often inefficient. What does behavioral economics add to this conversation?
ALEX: So there are a couple of things that are interesting when thinking about behavioral economics and tariffs. One is kind of simple. Let's say you are a powerful country like the United States, and you have a relationship with the European Union or countries in the European Union, and you say, "Look, I have the upper hand. I'm a powerful country like the US. I want to make some money from our trade. I'm going to impose a tariff." In a one-shot case, if you're just a businessman and you're not really working with people over a repeated period of time, you're going to make some money. The problem with things like tariffs and international relationships is that these things are repeated games. If you're thinking about putting in a tariff, it might seem like the other country can't do anything but accept it. But there could be a time when you need help from that other country, or you need some sort of alliance with that other country sometime down the road against another adversary or something like that. Doing something like this, where you exploit your negotiation position, will make it less likely that that country is going to cooperate with you. There's something called the ultimatum game that we talk about in the book, where essentially you and I are in a scenario: I have $10, you have nothing. I offer you some money from that $10, and you say whether you accept it or reject it. Let's say I offered you one cent. Standard economics says, "Spencer, you should be really happy with that one cent. It's better than zero. You accept that." I get $9.99. What happens in reality? You don't accept it, and actually, in that game, both of us end up with nothing. You would prefer for both of us to end up with nothing than for me to make this kind of unfair offer, and this sort of retaliatory motive is very present, not only in individuals but also in world leaders.
So that's one risk with tariffs: we're going to make some money off of it, but we're really ruining our international relations.
SPENCER: It's interesting, this idea of someone who's a pushover, where people can take advantage of them and they're not going to retaliate, versus people that are, let's say, "irrationally" retaliatory. I'm putting irrationally in quotes here, where you're like, "Well, everyone knows if you mess with that person, they're going to go way over the top and do things that are not even in their own interests." They're going to chase you to the ends of the earth to harm you, even though they're going to end up in jail. It's like, "Okay, so on one hand, that is very irrational to be so retaliatory because, in a local case, it might actually cause more harm to them than it's worth." But on the other hand, nobody who knows about that is going to mess with them. There's this meta way where it's solving a problem for them. There's one thing that people talk about with regards to things like gangs. If you live in a really poor area where there's a lot of danger, joining a gang is a way to sort of buy yourself protection, in a sense, saying, "Don't mess with me because you're going to regret it." Now, maybe it's not worth the cost. There are huge costs to joining a gang, but you're changing the landscape of what is going to be offered to you and what's going to happen to you.
ALEX: Yeah, absolutely. Thinking about gangs, there's a sociology book written about the Chicago gangs. These are completely rational, organized, bureaucratic structures. The state has failed these areas, and what crops up is a way of solving the lawlessness and craziness in these situations. When you're living in these sorts of areas, joining a gang is a rational decision.
SPENCER: And I think gangs can create all kinds of problems, so it's not the way we want society to be.
ALEX: Exactly. That's why we need the state to come in and provide the infrastructure so people don't need to join gangs for safety.
SPENCER: I wonder. So you mentioned the ultimatum game, and there are a bunch of these economic games that are designed to abstract some aspect of human interaction, relationship, or game theory, and then kind of draw conclusions about it. To what extent do they really generalize to real human behavior? Are they too abstracted away where it's like, "Yeah, sure, you can prove that in an ultimatum game. What does that say about real life?" Or do you think it actually captures enough about real human behavior that it's useful?
ALEX: It totally depends on the game. There are some games that are so abstract that it's not clear what we're even learning from them. Take the Dictator Game, for example. The Dictator Game is like the ultimatum game, except the other person just has to accept your offer. So let's say I have $10 and I give you a cent. You have to take the cent, and I get to keep $9.99. The stylized fact from those games is that people still give some money, like three or four dollars. You could interpret that as saying, "Whoa, people are so generous. Everybody's kind. We live in this universe of altruists." I don't think that has a lot of external validity, precisely because of the social context and social constraints we were talking about earlier. You're put in a very specific scenario with a person who knows who you are. On paper, obviously, everybody's anonymous, but when you're in a social scenario, it feels really weird not to give any money. In the real world, you can cross the street. You can not open the envelope with the Salvation Army sticker. You can do a lot of things just to avoid that interaction, and what you're going to see is a lot less giving. That's what the follow-up experiments actually showed: in the Dictator Game, when you introduce these socially complicated aspects, giving goes down to almost zero.
SPENCER: Insofar as these games represent reality, they're really useful: a little microcosm for studying human nature. But as soon as they drift too far from reality, from the things that actually happen in real life, it can go the other way, where you're drawing false inferences about how the human mind works.
ALEX: Yeah. A really good example of that is Vernon Smith's classic market experiments, which are like magic. A group of people is given cards that indicate how much each of them values a product. You get a card; it's a seven. That means you're willing to pay $7 for the product, so if you pay six, you're actually very happy about it. The buyers' values run from one through ten on one side of the room. On the other side of the room are the sellers of the product. They're also given values, which are the minimum they're willing to sell for. You put them in a pit market. From these numbers, by the way, you can trace out supply and demand, so there's an equilibrium price you can calculate. It's like magic. I run this stuff in my class all the time. Jaws drop. You get to the equilibrium price. Beautiful. It seems like standard economic theory is correct. Wow. Here's the twist, from the work Richard Thaler published with Daniel Kahneman and Jack Knetsch. Replace the cards with actual products: mugs, pens, anything else. Do the exact same exchange. It doesn't look very beautiful. What ends up happening is that people who are endowed with a product all of a sudden don't want to sell it for the same price they would be willing to buy it for. If I randomly allocate a bunch of mugs to one side of the room and ask what is the minimum you would need to sell, it turns out it's almost two and a half times higher than what people are willing to pay for it. So trade really breaks down. You're nowhere near that equilibrium price.
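The pit-market mechanics Alex describes can be sketched in a few lines of Python. This is purely illustrative, with made-up value and cost cards rather than data from any actual classroom session: buyers sorted by value form the demand curve, sellers sorted by cost form the supply curve, and the equilibrium price sits where they cross.

```python
# Illustrative sketch of a Vernon Smith-style pit market (made-up numbers).
# Buyer cards give the max each buyer will pay; seller cards give the min
# each seller will accept.

buyer_values = [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
seller_costs = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

# Sort buyers from highest value (demand curve) and sellers from lowest
# cost (supply curve), then pair them off while trade is still profitable.
demand = sorted(buyer_values, reverse=True)
supply = sorted(seller_costs)

trades = 0
while trades < min(len(demand), len(supply)) and demand[trades] >= supply[trades]:
    trades += 1

# Any price between the marginal seller's cost and the marginal buyer's
# value clears the market.
low, high = supply[trades - 1], demand[trades - 1]
print(f"{trades} trades; equilibrium price between ${low} and ${high}")
```

With these cards, five profitable trades occur and the market clears at a price between $5 and $6, which is the "magic" convergence the classroom exercise reproduces.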
SPENCER: Before we wrap up, I want to go back to the tariff question we were just discussing, because standard economic theory says that if you have tariffs, it's going to tend to increase prices. You would think that people would hate that, and they would not support tariffs. So does behavioral economics have something to tell us about this?
ALEX: There's some new work I've done with my colleagues Kristóf Madarász and Heather Sarsons. Earlier we talked about the Dictator Game, where everybody looks super altruistic. What we found instead is that if you run experiments testing how much people value a product when other people want that product but can't have it, that actually increases their valuation of the product a lot. These preferences imply that people value exclusionary policies: policies that keep somebody out of a market they really want to be in, a market you have access to. So what does that imply for tariffs? It implies that people will actually be willing to bear inflation costs as long as they know the tariff is keeping somebody else out. And we have evidence for that. We collected data on people's support for a price increase caused by tariffs, or by stimulus packages, or a bunch of other things, and we correlated it with this basic preference for dominance seeking, which is what we call the preference. It turns out there's a super high correlation. People are much more willing to accept price increases as long as they know the increase is keeping somebody else out of what they have. What this suggests is that people are not miscalculating or doing something stupid. Most of the time they know, at least implicitly, the consequences of tariffs. They're just willing to bear those consequences given what it's going to do to the rest of the world.
SPENCER: Is this related to phenomena where people love to be inside a club that has a long line outside, but when there's no line outside, it's less enticing?
ALEX: That's exactly what we explain using those preferences. There are old papers trying to explain why there are long lines at restaurants. It's an economic puzzle: why don't restaurants add more seats? What we show is that if the restaurant added more seats, fewer people would come, because the reason you want to be in that restaurant is the line.
SPENCER: Something I've been thinking about a lot lately is how much of irrational human behavior actually is social, where you think about it as making decisions for yourself, but you're actually taking into account the behavior of everyone else and the beliefs of everyone else, and in doing so, it could really change things. Earlier in this conversation, we talked about, "Well, if you think other people are going to highly value a company, maybe you should buy it because it might go up because other people value it." But you're not actually thinking for yourself. Similarly, you're taking into account whether other people want to eat at this restaurant, rather than just saying how much am I going to enjoy the food there.
ALEX: In the original paper, we made a strong claim that preferences are largely mimetic, as in, they're based on what I think other people's preferences are. I think we have some decent support for this from the context effects we see on preferences. Dan Ariely, George Loewenstein, and Drazen Prelec wrote this paper on coherent arbitrariness, where you could move demand curves all over the place just by giving people arbitrary signals. So I think we have a decent amount of evidence that this is exactly right: we as economists are really underestimating the effect of social forces on preferences.
SPENCER: Interesting. It reminds me of, I think it was René Girard who had this theory.
ALEX: Yeah, that's right.
SPENCER: Yeah. Maybe you could explain his basic theory, and to what extent do you think it's actually scientifically valid?
ALEX: Yeah. So the original title of the paper, and I'm just telling you and the rest of the audience that the reviewers made us change it, was "Mimetic Dominance and the Preference for Exclusion." Mimetic is from Girard. He developed this mimetic theory of social relations, and we provided empirical evidence that he was largely correct: people's preferences are based on what they think other people desire. That's his social theory. His main point is that you need institutions to tame the very bad implications of people having these sorts of preferences, given the destructive tendencies that follow. What we show in the paper is that he's exactly right. If you take these preferences that we empirically validate to their limit, in terms of what people will want within their society, you really need a state to say, "Look, we don't want scapegoats and all of these things Girard was talking about."
SPENCER: Yeah, it seems very bad for society for everyone to want the same things because of limited quantity, and then, you're going to run out of the thing, and some people are going to get it, and some are not. So if people are copying each other's preferences to a significant extent, you're going to have this sort of herd effect where lots of people are unsatisfied and trying to get the thing, but only some people can get it, and it also might create social friction as well.
ALEX: What you're going to have is people wanting exclusionary policies, saying, "Look, I know that everybody else wants this thing that I want, and I actually think they want it more than I do, which makes me want it even more. So I'm going to elect leaders and enact policies that keep them out or discriminate against them," to make sure that they can't have it. The natural implication of these sorts of preferences is things like protectionism, nationalism, and all of this sort of stuff.
SPENCER: I see. So the idea is that if you actually feel good excluding others from things you have, then you're going to say, "well, let's add more protectionist policies around my country, and then it kind of makes you feel good about yourself."
ALEX: That's exactly what we find.
SPENCER: That's interesting. To what extent can this be explained through just political preferences, left versus right divides versus as an individual difference, something that is about the specific person?
ALEX: You know what? We really didn't find any kind of political differences in the data. It seems like an individual difference. We find that between 30 and 45% of people just have a strong preference that aligns with this idea.
SPENCER: Oh, that's really fascinating, because I think that can be a confounder in some of this kind of research, which is that it just happens at this point in time that this political group is in favor of this thing and might also share this other trait. So I think it's actually, to me, it makes it more compelling that it's not just linked to political views, and it sort of seems to be about the individual.
ALEX: Yeah, so here's the way we measure it. I used to be an amateur artist, and I have these paintings that I never did anything with, so I used them as the product in an experiment, because you need a product to sell. In the experiment, people are told, "How much are you willing to pay for this painting? We're going to make prints of it. It's a very limited edition. There are three other people in your group. How much are you willing to pay if we tell one of those people that they can't buy it, no matter how much they want it? What about if two people can't buy it? How about three?" It turns out there are people whose willingness to pay increases sharply as a function of how many people are excluded, and those are the people who support tariffs.
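The measure Alex describes, willingness to pay (WTP) as a function of how many group members are excluded, can be sketched as a simple slope calculation. The numbers below are hypothetical, not from the actual study; a steep slope is the "dominance seeking" signature.

```python
# Hypothetical sketch of the exclusion measure: WTP for a limited-edition
# print when 0, 1, 2, or 3 of the other group members are barred from buying.

def wtp_slope(wtp_by_excluded):
    """Average change in WTP per additional excluded person."""
    diffs = [b - a for a, b in zip(wtp_by_excluded, wtp_by_excluded[1:])]
    return sum(diffs) / len(diffs)

flat = [20, 20, 21, 21]       # WTP barely moves with exclusion
dominance = [20, 26, 33, 41]  # WTP rises steeply with each exclusion

print(wtp_slope(flat))       # small slope
print(wtp_slope(dominance))  # large slope: the profile that predicts tariff support
```

In the study as described, subjects with a sharply increasing profile like `dominance` are the ones whose preferences correlate with support for tariffs.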
SPENCER: Oh, wow, that's fascinating. And what's the name for that trait again that you mentioned before?
ALEX: Dominance seeking.
SPENCER: Dominance seeking. Are there other aspects to it? Do they seek dominance in other ways?
ALEX: This is basically a research agenda that we recently started, so that's what we're trying to figure out. The tariffs paper is coming out in a couple of weeks, showing that this preference correlates with protectionism. For example, we can give you a scenario with tariffs. In one version, China, or whichever country, is hurt by the tariffs, and these preferences are super correlated with support for them. But in another version, China is not hurt by the tariffs; they find another trading partner. All of a sudden, people don't like tariffs anymore.
SPENCER: Oh, interesting. Because it doesn't really exclude them from getting the thing they want.
ALEX: Exactly. So we find that it explains these sorts of dynamics. Right now we're doing studies on whether these preferences are related to resentment: having something that other people want and can't have, and then losing it. Resentment is a big topic of study in political science, looking at what happened in the Rust Belt, where people used to have something they valued a lot, something many others couldn't have, and then all of a sudden they don't have it either. What happens to their preferences, and how do they lash out? That's future work.
SPENCER: So final question for you, before we finish, what do you want people to keep in mind about rationality? You've spent all this time thinking about this topic, and there's a lot of different perspectives out there.
ALEX: That rationality takes on different faces. Things that seem crazy and irrational often make sense once you take into account people's constraints and the complexity of real-world decision-making.
SPENCER: Alex, thanks so much for coming on.
ALEX: Thank you. Thanks, Spencer, this was great.