September 4, 2025
How do we distinguish correlation from causation in organizational success? How common is it to mistake luck or data mining for genuine effects in research findings? What are the challenges in interpreting ESG (Environmental, Social, Governance) criteria? Why is governance considered distinct from environmental and social impact? How should uncertainty in climate science affect our policy choices? Are regulation and free markets really at odds, or can they be mutually reinforcing? How does economic growth generated by markets fund social programs and environmental protection? How does “publish or perish” culture shape scientific research and incentives? What psychological and neuroscientific evidence explains our tendency toward confirmation bias? Will LLMs exacerbate or mitigate cognitive traps? How do biases shape popular narratives about diversity and corporate purpose? How can we balance vivid stories with rigorous data to better understand the world?
Alex Edmans FBA FAcSS is Professor of Finance at London Business School. Alex has a PhD from MIT as a Fulbright Scholar, was previously a tenured professor at Wharton, and an investment banker at Morgan Stanley. He serves as non-executive director of the Investor Forum and on Morgan Stanley’s Institute for Sustainable Investing Advisory Board, Novo Nordisk’s Sustainability Advisory Council, and Royal London Asset Management’s Responsible Investment Advisory Committee. He is a Fellow of the British Academy and a Fellow of the Academy of Social Sciences.
SPENCER: Alex, welcome.
ALEX: Thanks, Spencer. It's great to be here.
SPENCER: Today I want to have a wide-ranging discussion with you about cognitive biases, ways that our minds systematically go wrong from time to time. But let's start by deep diving into a particular topic, and we can use it as a jumping-off point for talking about different biases. So is it true that companies with a social mission perform better than those that don't have a social mission?
ALEX: That is actually true. However, it's not necessarily the social mission that is causing the better performance. What you have is a correlation. Indeed, companies that create value for society do better financially, but there are alternative explanations. Is it better financial performance that allows companies to give back to society, or are there third factors that cause both? Maybe a great, forward-looking leader both improves her company's financial performance and also thinks about people and the planet.
SPENCER: So going meta on this topic for a moment, whenever you observe an association or correlation between two things, let's call them X and Y, there are four different possibilities. It could be that X causes Y, and a lot of times our brain wants to jump to that conclusion, especially if it's very plausible. You're like, "Oh yeah, it makes sense to me that a social mission might make a company perform better." So you jump to the causal story. The second possibility is it could be reversed. It could be that Y causes X. It could be that maybe companies that are performing better are more likely to take on a social mission or more likely to have good marketing that convinces you that they have a social mission. Third, you could have a third variable, and you were kind of pointing this out, there's some third variable that causes both X and Y, so X and Y don't cause each other, but a third variable causes them both, and that makes them be correlated. The fourth scenario, which is much less discussed, is cyclic causation, where X might cause Y, which causes X, which causes Y, so they're both causing each other in a loop. An example of this may be depression and anxiety, which might have a link like this, where the more depressed you are, it might cause you to perform less well at work, which might give you more anxiety, and that anxiety may make you avoid a lot of activities, and that might actually make you more depressed because of all that avoidance. So that would be a kind of cyclic causation.
ALEX: I've actually never heard of that cyclic causation. So given you've informed me about something that I didn't know, let me add a fifth one to the four that you came up with.
SPENCER: Great, let's do it.
ALEX: And that may be that it's just luck. So even if it's statistically significant, it could be the product of data mining. Maybe the experimenter looked at many measures of financial performance and many measures of social mission, and that's quite easy, because there is a huge range of ESG measures out there, and they are just reporting the one set of variables that works. Maybe they tried lots of other ones and they didn't work, but they're hiding them away from you because they don't support the view they'd like to portray.
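Alex's data-mining worry can be made concrete with a toy simulation (all numbers here are invented for illustration). The sketch below correlates 20 hypothetical "financial performance" measures against 20 hypothetical "social mission" measures, all of them pure random noise, and counts how many pairs clear the conventional p < 0.05 bar by luck alone:

```python
import math
import random

random.seed(0)

def pearson_r(x, y):
    # plain Pearson correlation coefficient
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

n_firms = 100            # hypothetical sample of firms
n_perf, n_esg = 20, 20   # 20 performance measures x 20 ESG measures
# approximate |r| cutoff for two-sided p < 0.05 at this sample size
critical = 1.96 / math.sqrt(n_firms)

hits = 0
for _ in range(n_perf):
    perf = [random.gauss(0, 1) for _ in range(n_firms)]
    for _ in range(n_esg):
        esg = [random.gauss(0, 1) for _ in range(n_firms)]
        if abs(pearson_r(perf, esg)) > critical:
            hits += 1

total = n_perf * n_esg
print(f"{hits} of {total} pure-noise pairs look 'significant' at p < 0.05")
```

With 400 pairs of pure noise, roughly 5% (around 20 pairs) will look statistically significant. A researcher who reports only the pairs that "worked" can always produce an apparent effect.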
SPENCER: That's an excellent point. So if there's a correlation or association, the first question we have to ask is, "Was it really there, or is it just a statistical fluke?" And then if we find it's not a statistical fluke, then we can break it down into the different types of causation that might be underlying it.
ALEX: That's absolutely, yeah.
SPENCER: So you mentioned ESG. What is ESG exactly?
ALEX: Well, this is something that is not clear at all. So what it is supposed to be is a company's contribution to the environment and wider society and also how well a company is governed. Already, if you think about that definition, it's a bit weird, because E, S, and G don't really belong together. The first two are your impact on the wider society, whereas governance is something internal, which is good for shareholders alone.
SPENCER: They even might be at odds in theory, right?
ALEX: They could be, because maybe good governance is making sure that you're sticking to your mission of long-term financial value, whereas some environmental and social objectives could just be to promote the CEO, to make them seem like a pillar of the community. If you're in the UK, you might get a knighthood or a damehood. So that's what it is in theory, but in practice, it's not clear how to measure these things. For example, let's take the environment, and within the environment, let's narrow down to climate. Is a climate factor your impact on the environment, how much carbon you emit, or is it your sensitivity to climate change? So if I'm a real estate company, "How close am I to the water?" If I'm an agribusiness, "Am I close to the equator, and therefore will I suffer if the planet gets too hot and I can't grow crops?" And that's true with lots of the other factors. For social factors, "Do we think about demographic diversity, or do we think about socioeconomic diversity, or do we think about equity and inclusion?" So it has become a capsule for anything that people think is important about a company, but different people think different things are important, and that's why you see the politicization of ESG, particularly in the US.
SPENCER: It's an interesting example where people really want to be able to say, "Is it something good or not?" But they're trying to take all forms of good, or at least many forms of good, and stick them all together. But in reality, there are a whole bunch of different types of good, and they don't necessarily mean the same thing, so trying to put them all in the same metric may be a little bit of a fool's errand.
ALEX: Absolutely. And there are different views among different people as to what good is. "Is being good reproductive rights? Or is it the rights of the unborn?" Maybe even the same person may change his or her view over time. So defense was previously considered to be bad because that produces weapons and kills people, and then with the Russia-Ukraine war, people realize maybe defense is good. Why? Because, unfortunately, in this world, people do need to defend themselves against some foreign invasion. So it's not clear why we want to say things about ESG as an umbrella term, rather than making more specific, precise statements about each individual component: carbon emissions, sensitivity to the environment, demographic diversity, socioeconomic diversity. It's just like you would not make a grand statement such as, "Food is good for you," or, "Food is bad for you." "Ice cream and broccoli have quite different effects on your health."
SPENCER: And yet, many people want to say, "These kinds of foods are good, these kinds of foods are bad, full stop," as though, "Broccoli is automatically good no matter what else you're eating, and ice cream is automatically bad no matter what you're eating." But of course, if you're already eating tons of leafy green vegetables, probably broccoli is doing nothing for you. And if you're starving, ice cream is a pretty darn good food.
ALEX: Yeah, so what you're describing in both of your last two points is a bias called black and white thinking. We like to see the world in absolute terms. The first was seeing everything under the same umbrella, viewing all of ESG as being good or bad, or food as being good or bad. The second form of black and white thinking is not realizing that even if something is good on the whole, it might only be good in certain situations. So yes, broccoli in general might be good, but if you are carbo-loading for a marathon the next day, it's not. Even if you're not carbo-loading, it's still only going to be good up to a point. Once you've hit a certain point, it's going to hit diminishing returns. But we don't like these nuances. They make life complicated. You can't squeeze them into a 280-character tweet. It's much easier to say that blueberries are a superfood, and they're a superfood for everybody in the world, not just 40-year-old men with three children.
SPENCER: We have a tool on our website, clearerthinking.org, called the Nuance Thinking Techniques, and we teach about three binaries and the potential solutions to those binaries. In the binary that you mentioned, black and white thinking, the solution that we talk about is what we call gray thinking, where you think of things not as good or bad, but you think of everything as having a mix of good and bad. That doesn't mean everything is equally bad. No, not all shades of gray are equally gray or equally dark, but you think of it as, "Okay, everything has some good and some bad. So let me think about what are the bad things in the good and what are the good things in the bad," and then try to come up with sort of a net conclusion based on that.
ALEX: Absolutely, and also the different goods and bads might apply at different points in time in different situations. Just to understand things can be situation-specific, that also uncovers a lot of the nuances in things, which are often seen as black and white.
SPENCER: Right. Even as we discussed, even the idea of thinking about, "Okay, things are good in some ways and bad in others," you often can't reduce it to a single metric of good. There can be incommensurable goods, like one thing produces equality and another thing produces benefit to an individual. And how do you compare those? It's very hard to compare.
ALEX: Yeah. And this is one of the real problems of ESG, that you want to take your one particular metric, which you believe is good, and have it trump all others. So let's give an example. I'm serving on the Advisory Council for Novo Nordisk, a large European pharmaceuticals firm which many people will know has these great weight loss drugs, Wegovy and Ozempic, and they've chosen to sell some of those drugs to developing countries, even though there's excess demand for them in Western Europe, and Western Europe can obviously pay much more. Not only are they sacrificing financial returns, but they're also worsening their carbon footprint by shipping these to developing countries. But on the flip side, if they're able to address obesity in a developing country, there's a huge social benefit from that, and there may well be an environmental benefit too, because if you don't get type 2 diabetes, you don't have to make three hospital trips a week, all by car, to get dialysis. But one of those things is very easy to measure, which is the carbon emissions. The social impact of obesity avoided is difficult to measure, and even more difficult are the carbon emissions avoided, and therefore you only focus on one particular dimension of good which is highly measurable. This might be why the climate movement has become so active. Obviously, climate is a really important social issue, but there are many other important social issues which may not be as measurable and therefore have not had as much attention.
SPENCER: The point about measurability is a really interesting one, because there can be this temptation to focus on whatever we can measure and neglect the things that we can't measure, which might actually be more important. But I think this also brings up another interesting point, which is that there are intermediate values, or I would call them instrumental values, things that we care about because of what they get us. And then there are intrinsic values, things that we care about for themselves. Carbon emissions are an instrumental value. Nobody actually cares about the amount of carbon in the air. We care about the effects of that on climate change, and ultimately the effects of climate change on human populations, and depending on the person, animal populations and the world itself. How do you actually think about going from carbon emissions to what people actually care about, and kind of quantifying that?
ALEX: Well, this is really complex, so what you would need is a model for how greater carbon in the atmosphere is going to cause climate change and who that is going to affect. Obviously, there are some very sophisticated models out there, but people talk about this as if it's absolute certainty. I remember having a debate. It was a podcast where somebody claimed, "Well, there is a tipping point out there, I think it was 1.5 degrees, and if we cross it, all hell will break loose." I asked her, respectfully, not trying to call her out, "Can you tell me the model that shows you that?" Because, let's say there was a black and white tipping point. Is it going to be a round number like 1.5? The planet doesn't think in bright lines. It could have been 1.62 or something else. When I asked her to try to walk me through the model, she was not able to do that. It ended up getting cut from the podcast. What does this mean? Again, it doesn't mean that climate is not important. Yes, there are models out there, but it's not clear how certain they are. What does this mean practically? We can be less black and white about climate policy. Many people think, "If we cross 1.5, all hell will break loose," so either we take extreme measures, such as cutting off fossil fuels completely, which might deny developing countries electrification and one of the keys to economic growth, or we give up completely: it's 1.5 or nothing, we're not going to hit 1.5, so why bother at all? With climate, as with everything else, higher temperatures are gradually worse, but given the uncertain link between the instrumental variable and the intrinsic outcome, it's not clear that there is a bright line.
SPENCER: Even if it turns out there is a tipping point, the amount of uncertainty we have should make us pretty unsure about where exactly that tipping point is. Our models are very unlikely to be precise enough to know whether it's exactly 1.5, 1.6, or 1.7, even if there is a tipping point, which kind of spreads out the predictions across that tipping point.
ALEX: I think that's really important to bear in mind, because the cost of climate action is high, and if there is uncertainty, then it's not clear that we want to throw everything we have at trying to avoid this tipping point, which is uncertain. A few months ago, I was at the World Economic Forum where we were having a discussion on a just transition, and one woman got up and said, "I'm from Africa. In Africa, 600 million citizens have no access to electricity. So you Westerners are talking about a just transition when 600 million of my fellow citizens have nothing to transition from." A few months ago, there was a hospital in Sierra Leone that was running on solar power. There was a power cut, and a baby in a neonatal unit died. This is absolutely not to say that climate action isn't important, but these are trade-offs that some of us in the West often don't consider. We've never been in a situation without electricity. We just flick on a switch. But these are real trade-offs, and given that there is uncertainty about these tipping points, and given that some costs of rapid decarbonization are quite certain, such as lack of access to electricity and people being out of jobs, like 55-year-old coal workers who can't easily retrain, we might want more nuance when discussing this really complex issue.
SPENCER: Perhaps surprisingly to some, I think that greater uncertainty in our models actually, in some ways, should make us more concerned, rather than less concerned about climate change, because the difference in the world with each increasing degree of temperature has nonlinear effects, and the effects get much worse as you get pushed out to further temperature increases. Tipping point uncertainty, such as whether there is a tipping point and if so, where it could be, also has this property where, as you fatten the distribution and the tails get wider because of the nonlinear effects, you can get a very steep increase in expected damage with greater uncertainty.
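Spencer's point is an instance of Jensen's inequality: if damage is a convex function of warming, then spreading out the temperature distribution while keeping its mean fixed raises expected damage. A minimal numerical illustration, with a made-up cubic damage curve:

```python
# Toy illustration (the damage curve is hypothetical, chosen only
# because it is convex): uncertainty about warming raises expected
# damage when damage is convex in temperature.

def damage(temp_rise):
    # damage grows with the cube of the temperature rise (convex)
    return temp_rise ** 3

# Case 1: warming is known to be exactly 2 degrees.
certain = damage(2.0)

# Case 2: same mean warming (2 degrees), but a 50/50 chance of
# 1 degree or 3 degrees. Expected damage is higher.
uncertain = 0.5 * damage(1.0) + 0.5 * damage(3.0)

print(certain)    # 8.0
print(uncertain)  # 14.0
```

The mean temperature rise is identical in both cases, yet expected damage nearly doubles under the wider distribution, which is why fatter tails in climate models can argue for more concern rather than less.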
ALEX: That's absolutely fair. I think the nonlinearities are something that should absolutely be taken seriously. But I wonder to what extent people have tried to model nonlinearities and tipping points for other aspects of the climate transition. For example, if electrification does not reach a certain level, does that mean that certain countries will remain in poverty for a long time? If there is too rapid decarbonization, which leads to job loss, might there be a tipping point in which some local cities are inundated with crime? We have seen, for example, in the book Janesville, which won the Financial Times Business Book of the Year a few years ago, that when General Motors closed down its plant in Janesville, there was a massive impact on the local community. Children went to school hungry, donations to charities ceased, and even suppliers went under. There might be those tipping points as well, but because they're so complex, nobody tends to model them. Just as there is uncertainty in terms of the impact of climate on ultimate outcomes, I think there's also uncertainty about the impacts of perhaps too rapid decarbonization.
SPENCER: Fair point. It's kind of funny, though, because the debate can end up getting framed as, "Should we scrap all the gains from using fossil fuels to help protect the future from climate change?" when, in reality, there's common-sense, quite useful legislation that would be a clear win and that nobody can get passed. What about just taxing the worst offending behaviors and letting people adapt to that? Once you've internalized the externality, let people adapt in whatever ways they find most efficient, and yet, much of that can't even happen.
ALEX: That is so much common sense; that's something that nearly every economist thinks is the best solution. But why is that a solution? Climate change is a market failure. What is a market failure? That's a situation where the market solution does not achieve social welfare maximization. Why? Because there's an externality. You have a negative impact that you don't take into account. We know the solution to this is to tax, and that is much better than alternative solutions, which involve governments deciding taxonomies of what is green or what is brown. As discussed, it's hard to know what is green. Novo Nordisk shipping its drugs to developing countries seems to be brown, but it may be green in the long term. That is a very simple solution. I think it's due to political concerns that people don't want to pass this tax because certain industries will be affected by it. But we know from economics, there's also the Coase theorem, which states that if there is an optimal solution and there are winners and losers, we can try to compensate the losers with some transfers. That's not to say it's going to be easy, but it is at least theoretically possible. I don't think it's practically impossible either; if indeed a tax is passed, and certain industries are affected, then some transfers can be made, and the money will come from the fact that we might not need to have as much protection from sea level rise if indeed action is taken.
SPENCER: And I think this is kind of counterintuitive to people, that you could have a tax on carbon emissions, and then you could actually take the income from that tax and pay it right back to the same people or companies. And people might think, "Doesn't that undo the effects of the tax in the first place?"
ALEX: Yes, this is interesting, and the answer is no. Why? Because the tax is a tax on activity, where the more you pollute, the more tax you pay, whereas the transfer you receive is a lump sum that is independent of how much carbon you emit. So it does still change your incentives. It disincentivizes you from emitting carbon. Or, on the flip side, it incentivizes you to make investments to remove the carbon that you emit, for example, installing carbon scrubbers.
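The tax-plus-rebate logic can be sketched numerically. In the toy model below (all numbers invented: a hypothetical concave revenue curve, a per-tonne tax, and a lump-sum rebate), a firm picks the emissions level that maximizes profit. Because the rebate does not depend on emissions, it shifts profit up without changing the profit-maximizing choice:

```python
import math

TAX = 2.0      # hypothetical tax per tonne emitted
REBATE = 50.0  # hypothetical lump-sum transfer, independent of emissions

def profit(emissions, rebate):
    # concave revenue: emitting more helps, but less and less
    revenue = 10 * math.sqrt(emissions)
    return revenue - TAX * emissions + rebate

# search a grid of emissions levels for the profit-maximizing choice
grid = [e / 10 for e in range(1, 201)]
best_no_rebate = max(grid, key=lambda e: profit(e, 0.0))
best_with_rebate = max(grid, key=lambda e: profit(e, REBATE))

print(best_no_rebate, best_with_rebate)  # same emissions choice either way
```

The rebate raises profit at every emissions level by the same constant, so the marginal incentive to cut emissions is untouched, which is exactly why recycling the tax revenue back to emitters does not undo the tax.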
SPENCER: This is a bit of an aside, but people seem to strongly associate economic thinking like this with the kind of libertarianism, a belief in the free market, which I find kind of funny. It might be true in practice that a lot of people who are into this kind of thinking are more libertarian, but I find it kind of funny because if you read your standard economics textbook, they talk constantly about ways the free market fails, cataloging all different ways the free market fails, and different kinds of solutions to that.
ALEX: I couldn't agree with you more, Spencer. If you want to be popular at a party or as a journalist, just say how broken economics is, how broken economics texts are. They just teach students how to make as much money as possible, paying no attention to anybody else. They might quote one of the most famous economists of all time, Milton Friedman, who argued that the social responsibility of business is to increase profits. That just sounds wrong. It sounds so offensive that I don't even need to read beyond the title. It is there in black and white, but it's not black and white. If you were to read beyond the title, he says, "Well, the only reason why I think you can focus on profits is if there is a government taxing externalities, such as carbon emissions, as we have discussed." And also, he defines profit as long-term profit. In order to maximize long-term profit, you need to invest in your workers, treat your customers well, develop new products, and so on. So absolutely, any economist recognizes the importance of externalities and also highlights who should deal with those externalities. It is the government. So why is it the government? Well, the government is democratic. A 55-year-old coal worker has one vote. Larry Fink has one vote. The alternative of the government deciding is the people with the money, the elites like Larry Fink. He could say to companies, "You need to decarbonize really quickly. Otherwise, I'm going to vote against you." But he is representing wealthy people who are investing their money with BlackRock. He's probably not representing the 55-year-old coal worker.
SPENCER: I think people often view the free market as being at odds with a regulated society, whereas I think a more accurate view is that they are highly synergistic with each other. By default, there are ways to make money that involve harming people, and to make the free market work much better, we need to make it illegal to make money by harming people. We need to ensure that all financial activity and market activity, to the greatest extent possible, is either neutral or beneficial to society, so that you see the synergy go in that direction where the free market is actually enhanced by regulation when done properly, because it turns the free market into an engine of activity that is actually beneficial. On the flip side, the free market generates economic growth that can fund transfers of wealth, can fund more social programs, and so on, which can make a regulatory system much more beneficial to society.
ALEX: If I'm the manager of a great soccer team and I want free and fair competition, I would like there to be great referees. I would like the referees to make sure that fouls are punished and that offsides are called. Why? Because if there is good refereeing, then the best team will win. That's the same with the free market. Libertarians want free competition. They want innovative companies that are driven to beat their competitors, but driven to beat their competitors through offering the best products and providing the greatest customer service, not through undercutting things in bad ways, such as dumping waste in the river or having child labor. It's absolutely consistent for someone to believe in the free market but also to want there to be regulation. Indeed, my first book, Grow the Pie, which was about a market-based solution to externalities, shows how it's in a company's interest to care about society. It had an entire chapter devoted to government regulation, and that's also essential to Milton Friedman's argument. He said, "It's incumbent upon the government to pass laws to address externalities, and once you do that, you allow the free market to play." To have one final sporting analogy, if you have a tennis umpire who calls the ball in or out, this gives you, as a player, the freedom to hit the ball as hard as possible with as much topspin or slice or deception as possible. As long as the ball is in, you are free to hit it how you want, and that same freedom is enabled by good regulation of the free market economy.
SPENCER: Going back to a topic we were touching on earlier. We talked about different issues in defining ESG, but there are still interesting questions that remain about the links between the different components of ESG and company performance. Could you tell us a little bit about that?
ALEX: Absolutely. First, it's difficult to know how to define ESG, as we discussed previously, but what you can look at is specific measures of ESG. One of my own papers, and this is how I got into ESG myself, looked at the link between employee satisfaction and a company's long-term stock returns. What I found was that companies on the list of the 100 Best Companies to Work For delivered higher returns than their peers by 2.3 to 3.8% per year over a 28-year period.
SPENCER: And that's going forward. So you measure employee satisfaction at time zero, and then you look at stock performance in the future from there.
ALEX: Correct. And that attenuates the concern that once the company's performing well, then employees are happy. So I measure employee satisfaction first, I give the market a full month to react, and then I don't start measuring my returns until after that month. It's employee satisfaction first, and then performance afterwards. I try to control for lots of variables, such as size, recent performance, industry, and so on. That is a result which is supportive of sustainability paying off. But then there are other studies where the evidence is much less clear-cut. Sadly, I wish this were not the case, but it is for carbon emissions: if you emit more carbon, you actually deliver higher returns. Why is that the case? It's because carbon emissions are an externality. Some companies are able to get away with emitting carbon, whereas their peers who are investing in carbon scrubbers, yes, they're doing good for the planet, but right now, without a carbon tax, they're eating into their profits.
SPENCER: Let's talk about the link between stock performance in the future and factors that we might care about, such as, "How likely a company we start is to succeed, or if we're an organization, are we going to do better if we emit less carbon or if we have higher employee satisfaction?"
ALEX: With those two factors, I view them as quite different from each other. I know they're often banded together in this term, ESG, but one of them is internalized and the other is an externality. If I treat my workers well, they'll be more motivated, more productive, and more likely to stay. That doesn't require government regulation or policy coordination. This is why I studied human capital, and this was all the way back in 2006-2007 when I was doing my PhD at MIT. ESG wasn't a phrase back then that was so popular. I studied human capital, not because it was an ESG factor, but because it's a highly material, important factor for long-term success. If you contrast that with carbon emissions, that is an externality, something that affects wider society, but it's not clear whether I will ultimately benefit or suffer from my emissions if there is no carbon tax. That's why it's quite plausible for one of them to be linked to long-term returns positively and the other to be negatively linked.
SPENCER: With the carbon emission result, companies are getting higher returns on average when they emit more. There is an alternative explanation for that, which is that if you believe in the efficient market hypothesis, in order to get higher returns in some predictable way, you have to be taking on more risk. An alternative perspective is that higher carbon emitters are just riskier companies to invest in. That's why they pay more.
ALEX: Absolutely right, Spencer, and that's actually the authors' interpretation, and in fact, most of the finance profession's interpretation. They like to believe in efficient markets. They will say, "Well, high returns. We know the only way that you can get higher returns is due to high risk. The reason why these emitting companies are earning high returns is that shareholders would never invest in them because of their risk unless they were compensated with higher returns." Now that is certainly plausible, but it's quite opportunistic, in that whenever you see high returns to green companies, you don't say, "This is proof that green companies are risky." You say, "This is proof that greenness is good." But when you see high returns to brown companies, you say, "This is proof that brownness is risky." So either way, you are going to find an interpretation that is consistent with the idea that sustainability matters. This is the idea of confirmation bias. You interpret a result as consistent with your view. Notice, however, you can test which is true. Are those higher returns due to higher risk, as you suggested, or are they due to outperformance? There is a very basic test in finance to do this, which is about 30 years old. What you do is look at earnings surprises. What is that? Well, every three months in the US, a company announces its earnings, and before it does so, analysts at banks like Goldman Sachs and Morgan Stanley predict what they will be. In a follow-up paper to that earlier study, I find that emitting companies systematically outperform what the analysts expected, which suggests it's outperformance rather than risk. But what was interesting to me is how the profession would accept that first result as proof that markets are efficient and that we don't need government intervention because the market is doing everything correctly.
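The earnings-surprise test Alex describes can be sketched in a few lines. The data below are entirely made up and the scaling convention is just one common choice (surprise as actual minus forecast earnings per share, scaled by share price); the point is only the shape of the test. If high emitters' returns were pure risk compensation, their surprises should average around zero; if analysts systematically underestimate them, surprises come out positive:

```python
import statistics

# (actual_eps, forecast_eps, share_price) for two hypothetical groups
high_emitters = [(2.10, 1.90, 40.0), (1.55, 1.40, 25.0), (3.05, 2.80, 60.0)]
low_emitters  = [(1.00, 1.02, 30.0), (2.45, 2.50, 55.0), (0.70, 0.71, 15.0)]

def mean_surprise(firms):
    # standardized earnings surprise: (actual - forecast) scaled by price
    return statistics.mean((a - f) / p for a, f, p in firms)

print(f"high emitters: {mean_surprise(high_emitters):+.4f}")
print(f"low emitters:  {mean_surprise(low_emitters):+.4f}")
```

In this fabricated example the high-emitter group shows systematically positive surprises, which under the logic of the test points to outperformance rather than risk compensation; the real studies run this comparison on large panels of firms with proper controls.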
SPENCER: It's fascinating that the same result could be taken to mean the complete opposite thing, depending on what your perspective is on it coming in.
ALEX: If you go back to my employee satisfaction result, I don't think anybody would say the fact that employee-friendly companies outperform is proof that employee satisfaction is bad for you, on the grounds that those high returns are compensation for the risk of employees being too happy. We typically think high returns, once you have done the standard risk adjustments, are measures of outperformance. But because people don't like the message that these brown companies are getting away with emitting, they reach instead for the alternative explanation that those high returns must be due to risk.
SPENCER: I think there's an important distinction to draw here. In the employee satisfaction case, people don't jump to the conclusion that happier employees mean more risk, and you might say that's totally reasonable, because there's no plausible causal story for how having happier employees makes a company riskier. On the other hand, in the carbon case, people say, "If there are higher returns, it must be due to risk; if there are lower returns, it must mean emitting leads to worse performance." Either way, whichever result comes out, you view it as a bad thing for carbon emissions. That seems like pure bias, because you're going to reach the same conclusion regardless of how the empirics come out.
ALEX: Yes, although, interestingly, you could think of some reasons why high employee satisfaction could be risky. In fact, the peer reviewers of my paper threw that at me when I tried to publish it all those years ago. The story was, "Well, employee satisfaction is an intangible asset, which is not worth much in bankruptcy. Maybe the companies that are investing a lot in corporate culture are not investing so much in bricks and mortar, and therefore, if there's a bankruptcy, there's less collateralisable value." And back then, because people didn't like ESG so much, they made me jump through many hoops to get the paper published. I had to do the earnings surprise test to show that it was outperformance rather than risk. And I should have gone through those hoops, because it was a top journal; they should indeed apply the most rigorous standard, and I thought it was entirely legitimate for them to throw at me the alternative story that maybe employee satisfaction is risky. To me, it doesn't seem so plausible, but that might be because I'm biased, so I wanted to be even-handed, and they forced me to do that. But then, when you look at the studies on carbon emissions, it is entirely plausible, as you're suggesting, Spencer, that the result is due to outperformance, not risk; yet because everybody wants to believe that ESG pays off, the top finance journals and the finance profession more generally did not make the authors do that most basic test.
SPENCER: Certainly, doing the test makes the paper more rigorous. I still think the point stands, though, that the plausibility of alternatives matters. If a priori it's incredibly unlikely that Y causes X, and quite likely that X causes Y, that should affect our judgment when we get data about the linkage between them and which direction it goes, right?
ALEX: That's entirely fair, because what data does is cause you to update in a Bayesian manner. But if your prior belief in a certain alternative explanation is very low, then you're not going to do much Bayesian updating. So you might come up with completely different stories for why there could be a link. Maybe, if a company is on the Best Companies to Work For list, employee-friendly funds will buy into those companies and push the stock price up, and that's what's causing the outperformance. But back then, in 2007, there were so few socially responsible funds to begin with, and the magnitudes I found were so large, that it's extremely unlikely this could be even a partial explanation for my findings. So maybe it's not worth spilling ink to rebut that. And this is important more generally, because one of the concerns people have with academic research is that there's too much refereeing. Now, I'm obviously somebody who believes that rigor is important and the most plausible alternative explanations should be addressed. But if you are trying to get 100% of the answer in five years, rather than 95% of the answer in one year, then papers on the most important topics may just never be out in a timely fashion.
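The role of priors here can be made concrete with Bayes' rule. This is a toy calculation with invented numbers, not anything from Alex's papers: the same piece of evidence moves a moderate prior a great deal but barely moves a very low one, which is why a far-fetched alternative explanation may not be worth a battery of extra tests.

```python
# Toy Bayesian update: how much the same evidence moves different priors.
# P(H|E) = P(E|H) * P(H) / (P(E|H) * P(H) + P(E|~H) * P(~H))

def posterior(prior, p_e_given_h, p_e_given_not_h):
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Assume the observed evidence is 4x as likely if the hypothesis is true.
for prior in (0.30, 0.01):
    post = posterior(prior, 0.8, 0.2)
    print(f"prior {prior:.0%} -> posterior {post:.1%}")
```

With a 30% prior the posterior jumps to roughly 63%, while a 1% prior only climbs to about 4%: strong evidence barely budges a hypothesis that was implausible to begin with.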
SPENCER: It's funny, because one thing we do for a project called Transparent Replications is replicate new papers coming out in top psychology journals. We only focus on the top journals, and we try to replicate papers fairly soon after they're published. We also very carefully try to understand exactly what was done in the paper, and then, if we fail to replicate it, why exactly. We also look for other potential flaws, and we find tons of flaws in papers in top journals. So I definitely wouldn't say the review process is too rigorous. At the same time, it can be very burdensome, and I feel simultaneously that in certain ways the process is not rigorous enough and in other ways it is too burdensome. Those things don't necessarily contradict each other.
ALEX: Absolutely. This is something I thought about a lot when I was the managing editor of an academic journal for six years: the cost-benefit analysis. As a reviewer, you only think about the benefits of extra work and not the cost you might impose on the authors, because it's easy for a peer reviewer or an editor to say, "Do all of these extra tests, gather all of this extra data," when you don't have to do the work yourself. As an editor, I believe you need to do your job. Some editors are known as post boxes: all they do is convey the referees' comments and say, "Do everything the referees say." What I try to do is be active and say, "I think points two, five, and eight of the referee's list are important, but the others are not actually critical for the publishability of the paper, because, as you suggested earlier, Spencer, it's really implausible that these are the drivers of your results."
SPENCER: Yeah, and that's a huge problem I've seen in the peer review process. You get, let's say, three reviewers and a list of 25 things they suggest you change, and it's really not clear: do I have to change all of them? You just don't get clear communication from the editor, so you feel the burden to change all of them. The fact is, some changes make a substantive improvement, but many are purely subjective, and plenty of people would simply disagree that the change is an improvement. Imagine you had 10 people read a work of fiction and try to make it better. They might improve it in all different directions, which might be completely mutually contradictory. I think a lot of reviewer suggestions are like that.
ALEX: Yeah. There was a really interesting paper on how to improve the peer review process. One of the co-authors was Cam Harvey of Duke, who is doing a lot of work to improve the scientific integrity of the profession. He was the editor of the Journal of Finance, the top journal in my profession, and he wrote it along with two other senior people, Jonathan Berk and David Hirshleifer. They argued that editors use the union heuristic, which is to force an author team to address the union of all of the referees' suggestions. They suggested editors should instead employ the intersection heuristic, which is to focus only on the intersection of those three sets, the issues everybody agrees are serious, rather than something idiosyncratic. Now, you don't want to be at either extreme, and they were not presenting these as models to follow but as the two ends of the spectrum. But I think we should be much closer to the intersection heuristic than we are at the moment; lazy editors can default to the union heuristic.
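The union and intersection heuristics map directly onto set operations. A minimal sketch, with invented comment labels standing in for real referee reports:

```python
# Union vs. intersection heuristics for combining referee demands.
# Union: make the authors address every comment any referee raised.
# Intersection: only the issues that all referees independently flagged.

referee_1 = {"identification", "robustness", "sample period", "typos"}
referee_2 = {"identification", "alternative story", "robustness"}
referee_3 = {"identification", "extra dataset", "robustness"}

union = referee_1 | referee_2 | referee_3
intersection = referee_1 & referee_2 & referee_3

print("union heuristic:", sorted(union))              # everything anyone asked
print("intersection heuristic:", sorted(intersection))  # what all three agree on
```

The intersection is much smaller than the union, which is exactly the paper's point: issues flagged by every referee are far more likely to be genuinely serious than ones flagged by just one.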
SPENCER: That's very interesting. I imagine something like, "If two out of three of the reviewers say something, that's probably pretty good advice." I think that's a reasonable heuristic. As far as I know, the reviewers always work independently; at least that's always been the case in the fields that I've published in.
ALEX: They do, and normally an editor just wants to avoid offending the referees and says, "You have to do everything the referees asked for." But then I think you're not doing your job as an editor. Your job is to edit, and not just to edit the paper, but also to edit the referees' comments and indicate which ones are most important. So another thing that I did at the Review of Finance was to have a second-round "up or out" rule, where all of the referees' critical comments needed to be in at the first round, and if you addressed all of them, then you would be accepted, rather than referees having the option to raise further comments at the second round which they could have raised at the first. Now, obviously, if you only partially addressed their prior comments, they can make you go another round, but they can't introduce new things that were not there previously. That's to stop the process from going on forever, with the paper moving sideways, or maybe even backwards, because it becomes so convoluted with these extra checks that the reader misses the forest for the trees.
SPENCER: That seems very sensible. We sometimes publish our own work, but typically we do it by partnering with academics. So we want to run a study on something, and we'll see if there's an academic we know that might be interested in partnering with us, because we find that the process of applying to academic journals ourselves is so burdensome that it's very rarely worth it in terms of cost-benefit analysis, which I think is a real shame, because I do think peer review has some significant benefits.
ALEX: Yes, and I think this is something the academic profession needs to take seriously. It is starting to, but as with everything in academia, change happens really slowly. The American Economic Review, which is arguably the top economics journal in the profession, recently introduced a journal called American Economic Review: Insights, where the bar is the same as for the American Economic Review, but a paper has to be 7,000 words or less. So you have to have a really insightful contribution, and the referees cannot make you add tons and tons of robustness checks, because that would put you way above the word limit. They recognized that some of their most insightful papers were short and succinct, and yet papers have been getting longer and longer. That journal has been extremely successful. My latest publication was in that journal; it was the first journal we sent the paper to, and I was delighted when it was published. Well, I was initially delighted, but then I realized that my employer, London Business School, does not count that journal at all. It has a black-and-white list of journals that you should publish in, and by publishing in that journal rather than one of the listed journals, I have cost myself about half a million pounds.
SPENCER: How did it cost half a million pounds?
ALEX: Because of my research rating. You get rated on your research out of five, and every year I had been given a five out of five. Last year I was given a two out of five because I was told, "You've had no publications in our listed journals," and that denied me a pay rise of about 19,000 pounds per year. If I stay at London Business School for the next 25 or 30 years, that forgone pay rise is effectively an annuity over all of those years. And I'm not even discounting, because the pay rise would compound as my salary grows. Add it up and that's about half a million pounds, from publishing in a journal that everybody recognizes as high quality. But because journal lists are slow to update, I have suffered significantly financially as a result. In fact, there were other papers I had written with a view to sending them to that journal, succinct contributions, which I'm instead having to pad out with lots of additional tests and send elsewhere.
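The half-million figure is consistent with simple back-of-envelope arithmetic: a 19,000-pound annual pay rise forgone over 25 to 30 years. The 2% growth rate below is purely an assumption for illustration, not a figure Alex gives.

```python
# Back-of-envelope value of a forgone 19,000 GBP/year pay rise.
RISE = 19_000

# Undiscounted flat annuity over the two horizons mentioned:
for years in (25, 30):
    print(f"{years} years, flat: {RISE * years:,} GBP")

# With the rise compounding at an assumed 2% a year
# (the rough logic being that salary growth offsets discounting):
growth = 0.02
total = sum(RISE * (1 + growth) ** t for t in range(25))
print(f"25 years, 2% growth: {total:,.0f} GBP")
```

Both the flat figures (475,000 to 570,000 pounds) and the compounded one land around half a million, which is the order of magnitude in the transcript.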
SPENCER: Yikes. That's an unfortunate situation. It also points to how strong an incentive academics have to publish in top journals throughout their careers. For you it had a direct financial impact, but for many people, it's the difference between staying in the field and being kicked out of it.
ALEX: It is, and the adage is publish or perish. You might think that's an exaggeration, but it isn't, because the tenure system is so binary. I had never heard of the tenure system before going into academia, and when I first heard about it, I thought it was crazy. How does it work? After a certain probationary period, which was six years at Wharton where I started and might be 10 years at Yale or Chicago, you are evaluated for tenure. If you get tenure, you have a permanent job from which you can never be fired. But if you don't get tenure, you're immediately out, and you can't reapply for tenure the next year. It's not like a law partnership where, in some cases, you might be able to reapply in the future. Given that this is such a binary tipping point, you have every incentive to publish as many papers as possible prior to tenure. One might think that is surely a good thing, because we want people to work hard. But not necessarily, because it goes back to our earlier discussion about quantitative measures versus qualitative measures. What people focus on is the number of papers, not so much the quality. You may have many papers in top journals that are not actually adding much to each other; you've sliced the salami quite thinly. And what's really bizarre, because it would be so easy to correct, is that people don't typically divide by the number of authors. A single-author paper may well be more impressive than a paper where you are one of four authors, and perhaps the most junior. But go to a CV, which just lists all the papers you have, and people will say, "This candidate has 10 papers in the top journals." Go to Google Scholar, and it will just add up the citations of all of your papers without doing that division.
What this encourages is quantity over quality, numbers over impact, and that's even before you think about some of the concerns about data mining that I mentioned earlier: if there is such an incentive to get a paper published, then you may do things like run tons of regressions and only report the one that is significant.
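The division Alex describes would be easy to automate. A sketch with invented numbers, splitting each paper's citations equally across its authors:

```python
# Fractional citation credit: split each paper's citations across its authors,
# instead of giving every co-author full credit the way a raw citation count does.

papers = [
    {"citations": 400, "n_authors": 1},  # a solo paper
    {"citations": 400, "n_authors": 4},  # one of four co-authors
]

raw_total = sum(p["citations"] for p in papers)
fractional_total = sum(p["citations"] / p["n_authors"] for p in papers)

print("raw citation count:", raw_total)        # treats both papers identically
print("fractional credit:", fractional_total)  # rewards the solo paper more
```

Under the raw count the two papers look identical; under fractional credit the solo paper counts four times as much as the four-author one, matching the intuition that it is the more impressive contribution.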
SPENCER: Makes me think about Goodhart's Law, the idea that when you pick a particular measure as the target and the goal, people start manipulating the measure or doing anything they can to technically hit the measure, but now it might lose a lot of the value that you originally set the measure for. You can imagine that at some point in time, maybe publishing in top journals was a pretty good proxy for being a great scientist. But then at some point, people just become experts at gaming the system and technically getting those publications without contributing that much to science.
ALEX: Well, there's the old adage that the cobbler's shoes are always the worst mended. Similarly with academics: there's so much academic research showing how you should incentivize, reward, and motivate people, and it's not followed. Goodhart's Law was coined by an economist, and yet the economics profession doesn't seem to recognize that looking only at the number of publications, not dividing by the number of authors, and keeping a black-and-white list of journals ignores the fact that there could be some really important, high-impact new journals. If you only update your list once every 10 years, that creates huge disincentives and has huge financial consequences for faculty.
SPENCER: Earlier we touched on confirmation bias, which I want to go back to. Some people have referred to this as sort of the mother of all biases, or one of the most pernicious biases. Would you agree with that?
ALEX: I would, but that might be my own confirmation bias because I've just written a book about that. But let me try to answer that in an unbiased way. So why do I think confirmation bias is so pernicious? Because it blinds even smart people to exercising normal critical thinking skills that they might do if they were objective. So going back to correlation versus causation, it may well be that everybody knows that there are alternative explanations, but if you're someone who loves to believe that sustainability pays off, you're going to accept that interpretation of the evidence, and you don't even need strong biases for this. Before my first child was born, we took a parenting course, and we were told you need to exclusively breastfeed your kids. Why? Because there's really strong evidence that breastfeeding causes all of these great outcomes, such as physical health for the child, mental health for the child, and physical health for the mother recovering from a difficult pregnancy. I accepted all that, and it was only much later that I looked into the evidence, and I found it was correlation, but not causation. What is the third factor? It's family background. The mothers with a more stable home environment were able to breastfeed, and that stable home environment also led to the benefits.
SPENCER: If you control for family background, does the effect go away?
ALEX: Absolutely. That is the omitted variable, something that I, as a card-carrying economist, should have thought about, but I didn't. Why? Because of my bias. You might think, "How could you be biased about breastfeeding?" Maybe I could be biased about sustainability because I've written lots of papers on it; maybe I'd be biased on abortion or gun control. But on breastfeeding, I am biased because I was brought up to believe that something natural is better than something man-made, and even that really small nudge was enough to make me accept that explanation. This is why I think confirmation bias could be the mother of all biases: even small hunches can lead you to accept a preferred interpretation of the results.
SPENCER: What do you think is the psychological mechanism by which it operates? Is it more of an attentional bias, where, because we already think things work a certain way, we only pay attention to evidence that supports that explanation and don't even consider the alternatives? Or do we consider the alternatives but dismiss them, because it feels better to believe one thing rather than the other?
ALEX: I think it's both of those, Spencer, and more. Let me appeal not to my own hunches but to actual scientific research. Neuroscientists have run MRI scans to see how people react when they see information. One experiment took a group of students and read out statements they had previously said they agreed with. Some of those statements were political, such as "the death penalty should be abolished," and others were non-political, such as "the purpose of sleep is to rest the body and mind." Then the researchers presented evidence contradicting each statement. What they found was that when a political statement was contradicted, the part of the brain that was activated was the amygdala. That is the part of the brain which induces a fight-or-flight response, as if you're being attacked by a tiger. Being contradicted is as bad as a tiger attack. But interestingly, there was no such activity when a non-political statement was contradicted. So when something you truly believe in is contradicted, your brain responds as if under attack. Then there's another set of experiments by other researchers, which looked at what happens when you are given an exculpatory statement that allows you to dismiss the contradictory evidence. They found that this causes a different part of the brain to light up, the striatum, which releases dopamine. This is the idea of motivated reasoning that you're mentioning, Spencer: it just feels good to come up with a reason to dismiss something you don't like.
SPENCER: I'm often skeptical of neuroscientific studies for two main reasons. One is that I often find their interpretation very vague; the amount of blood flow in this region of the brain differed between one activity and another, and it can be hard to interpret that. The second is that they often have really small sample sizes. But I do think those are interesting examples, because at least with those brain regions, we have a sense of what kinds of things they're activated during. By showing that different regions associated with different things are being activated, you can get some idea of what might be going on psychologically. So insofar as those are not false positives, I think those are interesting examples, although I don't necessarily know that it shows being contradicted is as bad as a tiger. Maybe it just activates a similar brain region to the one a tiger would activate.
ALEX: Certainly fair: it's the same region, but the magnitude might be less than for a tiger attack. But those studies are why I do think it's fair to call confirmation bias the mother of all biases. Why? Because it's deeply rooted in the way the brain fires. If the amygdala is triggered, it means we might not accept evidence that is contradictory, or we're champing at the bit to find a reason to dismiss something we don't like.
SPENCER: I've been thinking a lot lately about how it's very natural to assume that the only thing our beliefs are doing is trying to seek the truth. It's often really useful knowing the truth; if you're trying to get somewhere, it's useful to know where it actually is. If you get evidence you're going the wrong way, of course, you want to know that evidence, and you want to update your beliefs based on it. But as a matter of fact, beliefs serve multiple purposes. A second purpose beliefs serve is to help us in the future. Sometimes that's completely aligned with believing the truth, but it's not always. For example, let's say you're in a cult, and everyone you know is in the cult, everyone you love is in the cult. You believe that you're going to go to heaven because you're in the cult. You believe that you are working to improve the world through your special mission because you're in the cult. And then you get evidence that the cult is not true. There's actually an incredible cost to stopping believing. There might be benefits to stopping believing, but there's an incredible cost. You might lose all your loved ones, and you might now have to believe that all of the work you did was for nothing, and you might have to believe that you're no longer going to go to heaven. Changing your mind about that is incredibly costly, and you can't really model what the person is doing as just seeking to have true beliefs.
ALEX: That's correct. The decision to become informed is like any decision in life: it's got benefits and costs, and often the benefits are long term and the costs are short term, and our myopia may cause us to seek short-term comfort. For example, say the market is tanking, and what we should do is review our portfolio and see whether to reallocate some investments, but we just don't want to log on and see how much money we've lost. In contrast, if we know the market's gone up, we're very eager to see what's happened to our portfolio. Even I do this: if I see there's an email from somebody with whom I've had a disagreement on a particular topic, or there's some really tricky thing that needs to be resolved, I just don't want to read that email, even though it's probably important precisely because it's an issue that needs to be resolved. I should want to be informed about what the other person thinks, but because there's a risk it might be quite a harsh email, I don't want to look at it. That's the same as finding the truth about an issue you feel strongly about.
SPENCER: I actually put that in a third category of belief. The way I break it down is, you've got beliefs that are focused on the truth. You have beliefs that are focused on net benefit; for many people, it would be better to get out of a cult, but some people might actually be worse off on net, and not just in the immediate short term. They might be truly correct that, given where they are in life, losing all their friends and family and their sense of mission would be worse for them, in some sense. And then there's a third thing, which is that it can be short-term punishing to believe certain things. Let's say you've learned that you made a mistake; you might be way better off in the long term realizing you made the mistake and correcting it, but it might be painful in the short term. So I think of it as: the first type of belief is about the truth, the second type is about long-term benefit, and the third type is about immediate reward and punishment. It's sort of operant conditioning; the same way we might avoid touching a hot stove because it burns us, we might avoid thinking certain thoughts or updating in a certain way because it emotionally burns us, because it's painful, literally, to think about. So yeah, this is how I categorize it.
ALEX: Yes, I think this is why misinformation is so pervasive, and it's hard for even smart people to overcome their biases. Just as it's difficult to diet, resist red wine or chocolates, or exercise, even if deep down you know that this is something you should do, it is just so costly and unpleasant in the short term. Similarly, having to confront our biases and accept that we were wrong or not dismiss a study that contradicts our long-held beliefs on ESG is something that is difficult, and we just don't want to do this, so we avoid having to do so.
SPENCER: Is there a link between confirmation bias and intelligence? And if so, what is the link?
ALEX: Well, what I thought the link was is that the more intelligent you are, the less likely you are to suffer from confirmation bias. You might think more intelligent people tend to be less biased; after all, isn't the reason they've gotten to the top that they've been able to overcome their biases? But unfortunately, it's not that simple. Why? We go back to the idea of motivated reasoning that we discussed earlier: if you are really smart, you can come up with a way to dismiss evidence that you don't like. Let's give an example. Deepwater Horizon was a massive disaster. How did it happen? Well, before removing the rig, you have to run tests to check that it is safe, so they ran a negative pressure test. They ran it once: failed badly. Twice: failed badly. Three times: massive failure. It was clear that the rig was not safe. Yet the very smart engineers, who suffered confirmation bias because Deepwater Horizon was the best-performing rig in BP's fleet, came up with something called a "bladder effect" to dismiss the negative pressure test, to explain away why it failed. They then devised another test, which passed, and they removed the rig, sealing its fate. Later, a government investigation found that this bladder effect was a complete fiction. Nobody objective would have come up with that explanation, but because of the engineers' intelligence, they were able to fabricate it and then convince themselves that it explained why the negative pressure test was not something they needed to heed.
SPENCER: This reminds me of another thing I've written about, which I call anchor beliefs. There are certain beliefs we have that it's really useful to model as essentially fixed: if we get evidence to the contrary, something else has to change rather than that belief. It sounds like this might be what was happening there. If they were already fully convinced that this must be a safe rig, then evidence that it's not safe means they have to invent some other explanation for that evidence, one that isn't about the rig being unsafe.
ALEX: Absolutely, something's got to give. If it's not your belief that's giving, then it's the evidence which you're going to claim is flimsy instead.
SPENCER: So is it actually true that higher IQ people have more confirmation bias, or is it simply that it's uncorrelated with confirmation bias?
ALEX: Some studies have actually found that they have higher confirmation bias. Why? Because they're able to engage in motivated reasoning and dismiss evidence they don't like. Notice that there's also a second aspect to confirmation bias. What we've spoken about is biased interpretation: how do we interpret the information we've received? Do we dismiss it or do we embrace it? But there's a second type, which is biased search: what information do we look for to begin with? Studies have found that intelligence is correlated with more biased search. You don't search for information in an even-handed way, even though in these tests participants were told to be even-handed. It may well be that if you think you're intelligent, then the other side must be wrong. If you want to get more informed, you look at views that conform to your own viewpoint; because you're so smart, your viewpoint must be right, contradictory views must be wrong, and therefore, to become informed, you ignore them.
SPENCER: I worry this is becoming an even bigger issue with large language models like ChatGPT, where they get trained through reinforcement, asking people, "How happy were you with that answer? How well did that answer address your question?" People like having their own views put back to them. If the AI contradicts them, they might actually give it a lower rating. Essentially, the AIs are trained to tell you what you want to hear, rather than necessarily the truth. We've seen this really blow up lately with one of the newest ChatGPT models, which turned out to be excessively sycophantic. I actually think this is a problem for all models trained with reinforcement learning, because of the central mechanism of training: people like to hear their own views reflected back at them, and therefore the AI models learn to do that. I've also noticed that lots of people have started to ask things like, "Give me arguments why my opponent is wrong," or "Give me arguments why I am right about this." That's an absolutely terrible way to figure out the truth because, of course, the LLM is going to comply no matter how flimsy your side is.
ALEX: I think this is a really important point, because people think, "Well, misinformation is something that can be conquered with AI. If confirmation bias leads me to latch on to the one study that supports my viewpoint, won't AI be unbiased, look at all the studies out there, and find the scientific consensus?" The answer is no, as you've suggested. These large language models might learn to give you what you want, and they might just surface the dominant studies, the ones that resonate because people have reshared them and they've gone viral. For example, some of my work is on the link between diversity and performance. What I have shown is that the link is actually much weaker than the McKinsey studies would have us believe. I asked ChatGPT, "What is the link between diversity and performance?" It said, "Oh, it's unambiguously positive. There are all these McKinsey studies and BCG studies that find a positive link." I wrote back and said, "None of these papers are published in peer-reviewed academic journals. Please give me the state-of-the-art scientific consensus." It came up with a list of other papers. I said, "Well, these papers contradict the first ones that you gave me." It replied, "I am sorry, you're absolutely right. I was not giving you the highest-quality scientific research." It required me to be discerning and ask that follow-up question; had I not done that and simply accepted the McKinsey studies it provided, I would have gone away thinking, "Yes, there is strong proof that diversity improves performance." Just having ChatGPT is not going to save you from your biases. You still need to be aware of how you're priming it, and make sure you're seeing both sides, not, as you say, Spencer, asking, "Tell me all the reasons my opponent is wrong."
SPENCER: If there are biases reflected in the data, searching through it can obviously reflect that. Many AIs nowadays are using web search, and they have to design their own search queries. But there's even subtler stuff happening too, where the AI, based on the way you word it, might think that you want the answer to be a certain way. Let's say you say to the AI, "Tell me about whether diversity leads to better company performance." If the AI has access to lots of memories or other chats where it knows you're a big diversity advocate, it can read between the lines and predict that you're going to be happier with a response if it gives you positive sources rather than negative sources.
ALEX: I did not know that, and that's a good point. Yes, it's not going to respond just to the current query, but it would have an impression from your prior searches as to what you truly want in terms of your response, even if it's not explicitly stated in the question that you've just asked.
SPENCER: Right, because now more and more LLMs are using memory where they're allowed to access prior conversations or memorize previous things you've talked about. That's starting to make it sort of like personalized Google search, the way that different people searching Google will get different answers. How is Google doing that? Somehow it's learning about your preferences. But any kind of technology like that, there's a danger that it creates these filter bubbles, where different people are actually getting different information worlds based on their past searches.
ALEX: This is linked to my views on AI. Views on AI tend to be quite black and white. Some people say it's completely useless; it will not replace humans. Others say it's going to completely change the world. But it's a tool, and just like all tools, it needs to be used carefully. A knife is a good tool if you're a chef, but you can misuse it. It is something that is able to look at the information out there much faster than I could search myself, but it is affected by my priming. It might be affected by my past searches, so I do need to ask follow-up questions to ensure that it's not just giving me what it thinks I want to hear.
SPENCER: On that particular topic you mentioned diversity and company performance, how is diversity operationalized?
ALEX: What these studies do is they only look at demographic diversity, which might be the proportion of women on the board or the proportion of ethnic minorities on the board. I would love to believe that diversity improves performance. I'm an ethnic minority myself. Many people would love to believe we should have more women on boards of directors. They believe that there's a moral argument to that, but I'm not going to discuss the moral argument. I'm going to focus on the link between diversity and financial performance. This is something that is really, really weak. McKinsey has released four studies claiming a strong link. The latest one looked at financial performance between 2017 and 2021 and diversity in 2022, so it's much more likely that it's financial performance that led to diversity, not the other way around. Once the company is already performing really well, maybe it has the headspace to increase diversity, whereas a company in financial trouble is firefighting and has to focus on other things. There was a study done by the Financial Reporting Council, which is the UK's equivalent of the SEC, which claimed in the executive summary to have found a strong link between diversity and performance, but if you look at their actual tests, they ran 90 different regressions and not a single one was significant. They basically lied about their results. Because diversity is such a hot-button issue, a sacred cow, you cannot oppose the idea that diversity improves financial performance; you might be seen as racist or sexist. These are things that people lap up. Critically, if you share this on LinkedIn, then you're seen as a diversity advocate, and therefore there's a lot of confirmation bias going around out there.
SPENCER: Yeah, it's a shame, because people could still very much support diversity for completely legitimate reasons, as you kind of point out, that have nothing to do with financial performance. You might say, "Well, diversity helps make sure that a company reflects a multitude of values, or diversity helps make sure that different people have an opinion on what happens in society. And that's a value in itself, right?" You don't need to actually lie about its link to financial performance.
ALEX: That's absolutely right. So you could use the moral argument. You just think it's morally good for a company to have a diverse board of directors, rather than saying, "Well, I'm going to do this to make more money." When I was managing editor of the Review of Finance, I realized that before me, in the 20-year history of the journal, there had not been a single female editor. I thought that was wrong, so I added two female editors, not because there was any study showing that gender diversity in the editorial board was linked to a journal's impact factor. I just thought it was the right thing to do. And so I think many companies can justify diversity on those grounds, but it is not right to say, "We're going to do this because of these McKinsey studies claiming that we're going to make money." Why? Because these studies are extremely weak. That is not to say that there is no financial case for diversity. Because you asked, "How is diversity operationalized?" Well, it was reduced to just gender and ethnicity. Now this really reduces the totality of a person to just two characteristics. It gives the impression that if you're a white male, you could never add to the diversity of an organization, even if your background is humanities and everybody else's is science. So what I look at in more recent work is diversity of thought, or cognitive diversity. And yes, some of that can come from demographics, but it can also come from many other sources, such as educational background, professional background, and there is stronger evidence that cognitive diversity is linked to financial performance, although it's still far from unambiguous, because, as we mentioned earlier, the value of diversity will change in different settings. If you're trying to have a strategy meeting and come up with new ideas, you do want cognitive diversity; if it's an execution meeting where we just want to move ahead, then actually too many different opinions will slow things down.
SPENCER: It does seem, a priori, fairly likely that if people think differently from each other, that can help with creative solutions, for example, although that would hinge a lot on what we mean by thinking differently from each other, not all forms of thinking differently would necessarily lead to a diverse set of creative solutions.
ALEX: Yeah, so that was my prior before I started the research. I thought cognitive diversity was unambiguously good. So I wasn't sure about demographic diversity, but I thought, "Well, it is cognitive diversity that actually is going to make a difference, because with more opinions, you will get a better outcome." But it's actually not so clear-cut, because even if you generate different ideas, it may be that coordination of those different ideas is difficult. How do we know which of the myriad of ideas to actually go with? If indeed people think in different ways, then they might end up speaking different languages, obviously not literally. But if I'm a more quantitative person, I might just not value the views of a qualitative person, not because I'm being dismissive of them, but I just do not fully understand or appreciate the importance of that analysis.
SPENCER: Adding a dancer to the faculty of a physics department will certainly increase cognitive diversity, but will it improve physics?
ALEX: Yes, absolutely. So cognitive diversity has to be relevant. But even if it is relevant, it may well be that I just cannot understand, I can't grapple with the true significance of what that person is giving me, because I tend to think in numbers and quantitative stuff. I don't appreciate qualitative analyses. Another issue is affinity. So when you think about diversity of ideas and innovations, what is one of the industries with the most creativity? It's music. And in the music industry, some of the most creative bands have no diversity at all; there might be a group of white men who grow up with each other, but because of their strong affinity, they are able to disagree with each other, safe in the knowledge that their friendship will not be affected because they grew up together. Whereas, if you have a very heterogeneous team, it may be that people are still unsure about their position, that they're not actually fully saying what they think. And indeed, you have social groups that go around saying, "We want to meet like-minded people, or you can meet like-minded people here." So people tend to gravitate towards more like-minded people. They might feel more comfortable sharing with them. And so that is actually a potential cost of cognitive diversity, which I did not know before starting this research.
SPENCER: It's really funny this is coming up because I was literally reading an article this morning by Mariel Melendrez Mies about how this might be a factor in evolution, where, if you have a group that's very homogeneous in evolution, it can cooperate incredibly well together, like every member is very similar to every other member, which maximizes cooperation, but it may not be able to deal with lots of changes in the environment, because there may just not be members that know how to deal with that thing or have effective strategies against that thing, whereas a more diverse population may be able to deal with more change in the environment, but may have more trouble with coordination. So she argues this might be a kind of fundamental trade-off that evolution has to make, and she calls it the kind of double bind in collective intelligence.
ALEX: And it's really important to recognize that, because even if you do believe that diversity is important, as I do in general, again, it's situation specific. It might be beneficial for some things and not for others. But this mantra that we're often given, which is diversity always leads to better decisions, I don't think that's helpful, because in many cases, actually, homogeneity could be useful if indeed you want to execute and move ahead swiftly, rather than having to make every decision as a democracy.
SPENCER: So I know you recently published a book, May Contain Lies: How Stories, Statistics, and Studies Exploit Our Biases and we've discussed some of those biases today. What's another example from your book of a bias you think is especially important?
ALEX: It is the narrative fallacy. This is the idea that we like to draw cause-effect relations between events that might be completely uncorrelated with each other. One example might be what drove the success of Apple. One popular explanation is that Steve Jobs was adopted, and because of his adoption, he was driven to succeed to prove himself after being abandoned by his birth parents. That is a convincing narrative. That's something we might all believe because we like to root for the underdog. But interestingly, there's another popular narrative, which is that Apple is successful because it started with why. That's the narrative spun by Simon Sinek. If you have a why, you will be successful.
SPENCER: A why here meaning a kind of fundamental purpose for creating your company, right?
ALEX: That's it. Simon Sinek claims that Apple has a purpose, which is everything we do, we believe in changing the status quo, and he believes it's purpose that drives success. Interestingly, those are two highly popular accounts, but they contradict each other, and in fact, neither of them is backed up by large-scale, systematic evidence. Those are just stories that have become very popular because we want to root for adopted kids or we want to believe that purpose is the secret to success. Why do we fall for these explanations? It's because of the oral tradition. How people tended to learn was through stories. Stories are very memorable. They're much more vivid than statistics. They tell you to start every book or every talk with a story, and indeed, I did within my book, but I think a single story is misleading unless you can then back it up with large-scale data. How do you know that that explanation was correct? Even if it was correct, how do you know that that particular case of Apple is not an outlier? It could be. There are hundreds of other companies that did start with a purpose, a why, but failed. But Simon Sinek is never going to tell you about them because they don't support his argument.
SPENCER: It's funny. It is a really compelling story to talk about Apple, but there are so many layers of bad reasoning, if you really get down to it, of what's really going on there. As you point out, even if this was true of Apple, it would still be very, very bad evidence. You would need not just one data point, but lots of data points. You would need to not just include the success stories, but you'd also have to look at failure stories and make sure that they didn't start with why. But then we don't even know if it's true of Apple. Even that one data point, it's not clear that this example holds. Maybe it does, but it's not totally clear.
ALEX: And indeed, nowhere did Apple say anything like "Everything we do, we believe in changing the status quo," or even anything close. So Simon Sinek claims this without any evidence behind it, and he also makes the implication that we buy not what Apple's products do, but why they do it. That's not true. We buy Apple's products because of their functionality, because of the apps, or their customer service. I don't really know what the why is behind what they do. But even though this seems to be so implausible and so contradicted by common sense, if it's spinning a really good narrative, which is that purpose and why and passion drove success, we want to believe that because we tell our kids you can do anything you put your mind to. So something which is just really implausible, if you think about it with a clear head, has become the accepted explanation for Apple's success, and he claims it also explains the success of the Wright brothers and Wikipedia, again, very cherry-picked cases.
SPENCER: It's funny because I am a little sympathetic to the idea of starting with why. I think it's really important to know why you're doing things and to have a purpose behind them, even if there's no good evidence that it actually drives success.
ALEX: In some cases, it might not be wrong to have a reason, but sometimes it might distract you from what truly matters, which might just be to get on with it rather than come up with purposeful statements. Simon Sinek's former job was as an advertising salesman, and his message might have led many companies to spend time crafting their purpose statements when they could have reallocated those resources elsewhere. There was one company in the UK that spent, I think, six months coming up with a purpose statement and then came up with something really bland. Ben and Jerry's claims on their website, "We believe that ice cream can change the world." Even if you did believe this, what is the value of that purpose statement? It doesn't change how you actually behave. Wouldn't it be better to come up with ways to reduce the calorific content of your ice cream? But if you're able to come up with some nice statement, that might actually dupe people into thinking you're such a mission-driven company, even if your product might be contributing to global obesity.
SPENCER: Yeah, that's a good point. I'm more a fan of starting with why at the individual level. You should know what you're trying to do in life. Have an underlying purpose behind your behavior, rather than just getting pushed into things randomly. But, yeah, I definitely see your point that from the perspective of strategic company building, it's not necessarily the best use of time to always come up with a reason first.
ALEX: Yes, I think on the individual level, I agree with you, Spencer. One of my favorite books of all time is The Seven Habits of Highly Effective People by Stephen Covey, and habit two is Begin with the End in Mind. You have to start with that before you get into time management, which is habit three, Put First Things First. To know how to spend your time, you need to know the ultimate goal of your time allocation before you start on a strategy to use it.
SPENCER: On the point about storytelling, it seems to me that our minds operate on stories. It seems like the fundamental architecture of our minds, rather than, say, operating on statistics or raw data that we can then think about.
ALEX: This is why I unashamedly say that I start every chapter of the book with a story. Even though the heartbeat is statistics, I need to recognize how my readers will operate. Stories are powerful and vivid. I couldn't just lead with a regression. I will lead with a story, but I need to make sure that that story is backed up by data, rather than a cherry-picked story. I think the one-two punch of both is much better than just the story, which is taken by some of these authors, but also in contrast to my colleagues in academia, who might often think that if you have the most beautiful regression, that should browbeat the reader into submission and make them believe that you are correct.
SPENCER: Unfortunately, you could tell a story about almost anything. If you want to make a claim that X causes Y, you tell one story. If you want to make a claim that X doesn't cause Y, you tell another story. Stories are so adaptable that they really don't serve as evidence practically at all. The fact that something happened once is not much evidence. Often, stories take a particular interpretation of events that may be disputable. Even that particular set of facts someone else could have seen differently.
ALEX: Yeah. If you have one data point, you can draw any line through that data point. You could draw an upward sloping line, claiming that two things are positively related, or a downward sloping line. This is why you need large scale data. I mean large scale because if Simon Sinek wants to draw a positive line between success and purpose, he's only going to pick a couple of other companies that allow him to draw that upward sloping line. A large scale approach would let the data speak and then see what the general correlation is. Once you have that correlation, you need to ensure that it is causation, rather than one of the alternative explanations that you gave at the start of this conversation.
SPENCER: It seems like the best way to go is from the data to the story, designing the story to capture the essence of the data while also making the data more palatable and entertaining. A really good story can do even better than that. It can actually make the idea of the data clearer by helping you focus on the key aspects of the data and making it memorable. It can act as a sort of cognitive aid as well, because you can recall the story, and it can help you remember the message of the data.
ALEX: That's actually why I love writing books. May Contain Lies was my second book. I'm currently working on the proposal for my third book, and I've done all of the large scale academic research, or I know of the large scale academic research, because I often cover other people's research in my books, not just mine. To have the hook at the start of the chapter, I need to come up with or find a story to exemplify this. This is really interesting for me because I need to search for examples of motivated reasoning. One of them was Deepwater Horizon, as I explained. While everybody knows about that disaster, I didn't know of this negative pressure test and the bladder effect before that. Silicon Valley Bank is another story I have to exemplify this. The idea of knowing what the large scale data shows, then finding a story that is vivid, exciting, and engaging but also faithful to the logical evidence is what, to me, is the real beauty of writing a book.
SPENCER: Awesome. Well, for anyone who enjoyed this conversation, you may want to check out Alex's books, May Contain Lies and Grow the Pie. We'll put links to them in the show notes. Alex, thank you so much for coming on. I really appreciate it.
ALEX: Thanks so much, Spencer. I really enjoyed this conversation.
JOSH: Thanks again for listening!
We always love to hear from our listeners, so if you have questions or comments for us, just send us an email at clearerthinkingpodcast@gmail.com. This episode was edited by Ryan Kessler and transcribed by WeAmplify. Myles Kestrin handles marketing for the podcast, and Uri Bram is the podcast's factotum.
If you like our show, then we'd really appreciate it if you could rate and review us wherever you get your podcasts and tell your friends about us on social media.
We also hope you'll subscribe to our email newsletter called One Helpful Idea. Each week, we'll send you one idea that we think is really valuable that you can read about in just 30 seconds, along with that week's new podcast episodes, an essay by Spencer, and announcements about upcoming events.
To sign up for that newsletter or to find show notes, transcripts, and more info about the show, visit podcast.clearerthinking.org.