CLEARER THINKING

with Spencer Greenberg
the podcast about ideas that matter

Episode 137: How can we un-break politics? (with Magnus Vinding)


December 22, 2022

How can we as individuals and as societies un-break politics? What is the two-step ideal of reasoned politics? How might this ideal apply to specific political issues, like free speech? Is it possible to reach agreement or even compromise on political issues that are rooted in intrinsic values? How can we reduce our own political biases? Are there some political issues which must always or by definition be zero-sum, or can all issues conceivably become positive-sum?

Magnus Vinding is the author of Speciesism: Why It Is Wrong and the Implications of Rejecting It, Reflections on Intelligence, You Are Them, Suffering-Focused Ethics: Defense and Implications, and Reasoned Politics. He has a degree in mathematics from the University of Copenhagen, and in 2020, he co-founded the Center for Reducing Suffering, whose mission is to reduce severe suffering in a way that takes all sentient beings into account.

JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Magnus Vinding about free speech, disinformation, and group identities.

SPENCER: I'm really happy to tell you that today's episode is sponsored by GiveDirectly. GiveDirectly is a global nonprofit that lets you send money directly to people living in extreme poverty with no strings attached. It's really amazing to think about how just $10 can be so useful to someone living in extreme poverty somewhere in the world. For you, $10 may be just a meal, but for them, it could actually make a really big difference. And right now, you can go and help a person a tremendous amount. I'm a big fan of the work GiveDirectly does because they really work hard to identify who needs the money most, and also to figure out how to get money to those people in a highly cost-effective way, so a very high percentage of the money given to them ends up making it to the end recipient. If you're interested, if you want to learn more, or if you want to send money directly to someone living in extreme poverty to spend on what they need most, go now to givedirectly.org/thinking. That's givedirectly.org/thinking.

SPENCER: Magnus, welcome.

MAGNUS: Thanks a lot, Spencer.

SPENCER: I think a lot of people have the sense today that there's something deeply broken about politics: that we now have massive tribalism (where different groups basically just attack each other and try to one-up each other instead of cooperating), that we have widespread disinformation in politics (where people don't know what sources to trust), and that there's a breakdown of trust in institutions, at least in America. But I think there's a sense that this is happening elsewhere in the world as well. So I'd love to dig into these topics with you and hear more about your perspective on how we can be better as individuals in grappling with the political landscape, but then also as a society in making politics more sane.

MAGNUS: Sure, that sounds great.

SPENCER: Do you want to start by telling us about the two-step ideal of reasoned politics? What is that?

MAGNUS: I've written this book called Reasoned Politics, which tries to do a lot of different things. But the overarching framework that I present is actually quite a basic one, this two-step ideal, where the core idea is simply to try to divide our political thinking into a normative step and an empirical step, respectively. The normative step is, simply put, to state, argue for, and refine our values — this relates to the moral values or underlying moral aims that ultimately animate our politics — and then, at the empirical step, the aim is to explore empirical data, construed broadly, in order to clarify how we can best realize the values that we have identified or clarified at the first step.

SPENCER: I want to make sure I understand this. So let's take a topic like education in the US. So the first step might be to ask ourselves, "What are we really trying to accomplish with education? What are our actual goals that we want to achieve as a society? Is the purpose just to basically provide something for kids to do during the day while their parents work? Or, are we trying to make them be good citizens? Or are we trying to prepare them for the workforce? Or are we trying to make them self-actualized human beings?" And then step two, once we agree on what we're trying to achieve, then we can look at the empirical question of how we actually achieve that goal. How do we help educate students effectively to make them into the people or create the outcomes that we desire?

MAGNUS: Yeah, exactly. I think there are a couple of points that are important to make on that. One is that I think we often are not all that clear about the distinction between those two different steps; they really are two different questions. And another thing worth clarifying is that the ideal isn't that you state certain values, other people state very different values, and the real conversation only starts at the empirical step — for example, in terms of how we can best reach some kind of compromise given pre-specified values. The idea at the normative step is already to have a conversation, an open-ended discussion about what the end of education, so to speak, should actually be, which I think is already an interesting question and perhaps also somewhat of a neglected one.

SPENCER: My frame on this is in terms of intrinsic values, where I think of an intrinsic value as something that you value for its own sake, not merely as a means to other ends. And I think that we all have certain intrinsic values; it's kind of a psychological fact. If you could really study our minds in detail, you'd realize that there are certain things that our minds place value on just for their own sake, things they just fundamentally value. The clearest example is that we almost all value our own pleasure for its own sake, but people value a lot of other things. They value reducing suffering in the world, or they value their loved ones getting what they want, and so on. And so at that level of conversation, I'm not sure I would agree that we can make much progress. If one person has an intrinsic value of A and the other person doesn't have an intrinsic value of A, I'm not sure that there's really a way that that conversation can converge, other than to say, "Okay, well, we just have different values." However, I think a lot of the things that we think of as values are not intrinsic values. So if we think about something like trying to help students become good citizens as a purpose of education, that may not be an intrinsic value. That may be just a means to an end. And so then maybe we can have a conversation about whether that's something we desire, not fundamentally, but as a good intermediate step to produce a good society or something like that.

MAGNUS: Yeah, it's a great point. I think I disagree with you a little bit on the first part there. Rather, I'd say it's somewhat of an open question to what degree we could reach agreement on values, or at least attain greater degrees of it. This relates to a remark that Derek Parfit made, that the field of secular ethics is still very young; at this point, ethics has been studied very little in a secular context. So it's perhaps worth being somewhat open-minded about how much potential there is for progress or convergence. But of course, it's true that when we're talking about education and what the goals of education are, it's a bit of a long step to go from some very abstract, general values to how education best fulfills those. There will most likely be some intermediate steps, where, for example, even if you are a consequentialist who has certain impartial values that may ultimately be reducible to some fairly simple principle, you would still want to cash those values out in terms of certain other — you might call them — proxy values, or values that are very helpful to that end. So, for example, when you're trying to build a better education system, you'll probably also focus on creating good citizens and helping make people better informed and better able to make good decisions generally. Those could be goals that many people can probably converge on ultimately, whether because we consider them intrinsically good or valuable, or just because we think they are instrumentally valuable.

SPENCER: Yeah, I guess I just see more promise in coming to agreement on what's instrumentally valuable than on what's intrinsically valuable. Let's do a thought experiment. Imagine that there are two robots, Robot A and Robot B. Robot A is really, really smart. But it's been programmed to only care about making macaroni. And Robot B is very, very smart and has been programmed to only care about making pizza. It feels to me that even if these robots are incredibly smart, Robot A is never going to be able to convince Robot B to care about making macaroni, because it's just not the way Robot B is programmed. Now, maybe Robot A could coerce Robot B into caring about macaroni. Maybe Robot A could secretly sneak into where Robot B is powering up and reprogram it or something. But that's effectively changing Robot B's value system, right? It's not like they're naturally converging. I guess that's how I look at it with humans. It's like, it might be that someone could coerce us into caring about something else. But there's a certain sense in which your intrinsic values should resist change. Because if you value a certain thing, and you don't value another thing, by what criteria would you switch to value the other thing? You're gonna lose some of your first value or be less likely to achieve it.

MAGNUS: Yeah, that's an interesting point. I think, first of all, it's true that it's very unlikely that humans are ever going to converge to a very large extent. There will always be, I'm sure, very significant disagreements among humans. But that being said, I think it's also quite instructive to look at history and how values have indeed changed over time — how, for example, views of slavery and views of women's rights have actually changed in quite significant ways, and in ways that I think most people would clearly endorse on reflection at this point. And furthermore, people would say it's not actually wholly arbitrary that we have changed our values in those ways. I think it's still the case that we have similar value changes ahead of us, an obvious example being the way in which we treat and think about nonhuman animals. And there are likely other examples. For instance, it could be that artificial sentience, or non-animal forms of sentience more generally, is also something that we will need to change our minds about and take more into account, both at the level of our values and at the level of our political decisions.

SPENCER: I think it's an excellent point you make that people's values have shifted over time. I guess what I would ask there is, how do our intrinsic values form? My suspicion is that there's a genetic predilection to have certain things as intrinsic values, but then additionally, our upbringing and culture and even our reflection modify them over time, or cause them to develop over time. So by the time they stabilize, they have been influenced by lots of things, such as what the people around us care about, what we've learned about the world, and so on. But once they start to stabilize, it's not clear that you have any reason to change them, unless you have some meta-level intrinsic value that causes you to want to change them for some reason. Barring that, why would you want to change your own values? You care about X, so why would you want yourself to care about Y?

MAGNUS: Yeah, that's a fair point. But then, I guess it's also true that many people do, to a significant extent, value, say, consistency. And from that starting point, you could argue that you could potentially derive a lot of fairly unusual, or at least not currently commonly accepted, ethical principles. For example, the notion that you shouldn't discriminate against beings based on their outward appearance. That's, in one sense, a widely shared view or principle, but in terms of its implications, it's actually quite radical. So in that sense, I think there's still, you could say, a lot of potential for progress, and there's likely going to be a lot of such progress in the future.

SPENCER: Yeah, I agree with that. I think a lot of people don't necessarily live by their own intrinsic values. And part of that is that they haven't thought through the consequences of the values they already have and what they imply about their actions. I view that less as a conflict between intrinsic values and more as a conflict between one's current behavior — one's automatic way of doing things — and the intrinsic values one holds. For example, someone who has an intrinsic value of being fair to everyone, but who then harshly judges people who were born less good-looking, even though it's no fault of their own, which violates their intrinsic value of fairness.

MAGNUS: Yeah, that makes sense.

SPENCER: But stepping back to your overarching point, I really like this idea of starting with the normative discussion and then moving on to the empirical discussion. I'm curious to have you point to some real-world examples where you feel like that would have helped clarify things or would have made political discussions better.

MAGNUS: Actually, I'm tempted to say just about any policy issue. But in my book, I do try to apply this framework to five different broad policy issues. One of them is liberty. What I do in my book is that, as a normative foundation, I rest on the view that, in technical terms, we have a prima facie moral duty to reduce suffering. That, in itself, can be construed in many different ways. But I phrased it in that way quite deliberately, because it's a value that many people, and many different moral views, do endorse. In that way, it's a value where I feel it's possible to make some progress that a relatively broad fraction of people can get behind. So that's my normative starting point in the policy analysis in my book, which is the fourth part of the book. And then I go through these different issues, where liberty is one of them. So, if we have a strong consequentialist concern for the reduction of suffering — which isn't to say it's our only concern, but if it's a strong concern — how then should we think about liberty? What are the ideal forms, the ultimate forms, of liberty that we should be granted in that case? And even more specifically, we can talk about something like free speech, which is, of course, a very hot topic. It's extremely complicated to try to analyze that at the empirical level, because there are just so many relevant factors, also in terms of timescales and potential risks, especially if you look far ahead in time. It's really difficult to summarize in a very short space. But one thing I can say is that there is a lot of empirical data suggesting that it makes a lot of sense to strongly protect free speech.

SPENCER: Let's break that down because I want to see this approach in action. Let's apply your process to the question of free speech. Can you walk us through that?

MAGNUS: Yeah. So one important consideration is what the effects of limiting free speech actually are. In the first place, it's worth noting that it's widely accepted that some limits to free speech make sense, and these are already, you could say, implemented in legal systems around the world — for example, it's illegal to incite violence. And I think that, ultimately, whether that should be legal or illegal — by the way, some libertarians argue that even that should be legal; someone like Murray Rothbard, an anarcho-capitalist, argued as much — is far less interesting than the more consequential case, and the one that's more commonly discussed: whether the expression of any viewpoint should be legal or not. In that case, one can focus on various things that could be relevant, but I think one thing that is particularly relevant is the phenomenon of psychological reactance: when you make something forbidden, it can backfire and become more interesting. Some authors even argue that this has happened in real life, where people have attempted to suppress ideas and thereby made them more interesting. The legal scholar Nadine Strossen argues, for example, that pre-Nazi Germany (I think back in the 1920s) had a sort of anti-hate speech law that forbade anti-Semitic speech, and that it might actually have aggravated anti-Semitic sentiments and made them more interesting. And it's not only historical experience; there's also a lot of more recent psychological evidence about how psychological reactance can increase support for ideas that become forbidden, and thereby make people more willing to support things that they otherwise wouldn't. An interesting example is a study in the US published, I think, in 2017, involving a political correctness prime: people who had been exposed to the prime became significantly more likely to vote for Donald Trump, even though they wouldn't otherwise have voted for him. The likely theoretical explanation is that it triggered people's psychological reactance. Whether one supports Donald Trump or not is not the main point. The main point is rather that you can, in a sense, induce false preferences by making people feel like they're not allowed to hold certain views and do certain things. So that's one part of the story, but I think it's a fairly important one, and perhaps even the main reason to strongly protect free expression of any viewpoint.

SPENCER: So regarding reactance, I could see a couple of different things going on here. One is that there's this personality trait of reactance. And I know some people like this, where as soon as someone tries to tell them not to do something, that actually makes them feel motivated to do it. I personally don't have that trait, but some people definitely do. And so there's like an individual differences thing going on.

MAGNUS: Empirical studies actually show that libertarians are especially high on that trait. It's almost their defining trait.

SPENCER: Oh, that's really interesting. And then there's this other thing, which I think can happen, where when information is forbidden (let's say it's forbidden because it's false information), it can make people feel like maybe there's something more to it, or there's something more sinister going on. Like, if you say, "Oh, you can't have this information." I'm like, "Well, why can't I have it? Is it because you're trying to hide something?" And so it has this allure, or maybe it can feed into these conspiracy narratives.

MAGNUS: Yeah, that's a very interesting breakdown. I think you're probably right, that there are different reasons that explain it.

SPENCER: Going beyond reactance, when we're thinking about free speech — and I don't think it should be a 100% free-for-all; I don't think you should be able to publicly call for killing people and things like that — one reason I tend to support strong free speech norms is that I don't trust anyone to be in charge of the process of deciding what we're not allowed to say. Even if the people in power today are ones I would trust to make those decisions, I don't know who's going to be in power in four years, and I don't necessarily know that I would trust them to make those decisions. So I'd rather just have a norm that people can say pretty much anything, and have that be a strong norm regardless of who's in power. So I'm curious to hear your reaction to that way of analyzing it.

MAGNUS: That's actually also a key point; it's somewhat more of a theoretical argument, you could say, but I strongly agree with it. And you can also back it up empirically. The argument is pretty much that the risk of abuse from restricting free speech seems very high, and in that sense, you could say the alternative just seems worse. There are also empirical examples of that: which countries shut down or limit free speech? A fairly good proxy measure is the ranking from Reporters Without Borders, which lists and ranks countries in terms of press freedom. And you can pretty much see that the countries towards the top are generally just countries that work a lot better, while the countries towards the bottom are totalitarian countries that have the worst human rights abuses and score very poorly on just about any metric you could care about.

SPENCER: I guess there's a question of which way the causality runs there. [chuckles] But that's interesting. So some people have made the argument that in the world we live in today, if we're just realistic about it, false information spreads really quickly — through Twitter, Facebook — and politicians will actually leverage false information and don't necessarily get punished that badly for saying false things. And so, if we allow people to say anything, including really harmful false things, then, just being realistic, that stuff is going to spread all over the place, and we're going to end up with a sizable percentage of the population believing harmful false information. So we have to pragmatically crack down on certain types of information. I think it's interesting and potentially a legitimate critique. So I'm curious, how would you deal with that critique? Do you think there's something to it? Or do you think it's misguided?

MAGNUS: In a sense, you could say that critique is more theoretically motivated, and it seems, almost intuitively, to make sense. But I actually think that, ultimately, there are a lot of reasons to think that the best way to combat false information is to have free expression, such that false information can always be criticized. You never really know in advance what kind of speech you will need, so to speak, in order to combat false information. So you could place yourself in a rather suboptimal position by restricting something and only later figuring out, "Oh no, we can't actually say that now, because of these rules we made before we realized there was this pitfall." Additionally, there is some empirical research — this was something I saw recently; the political scientist Michael Bang Petersen had a Twitter thread where he reviewed some of the reasons — suggesting that the best way to combat false information is actually to allow the free flow of information. He made this point as a critique of the censorship of Russian media in particular, arguing that that was actually a bad decision.

SPENCER: What do you make of a case like QAnon, where it seems like millions of people have become convinced that there's a conspiracy of liberals who are drinking children's blood and running pedophile rings and things like this?

MAGNUS: A question you might be getting at is whether there are policies we could implement that would help alleviate this problem, and whether those policies might involve restrictions on free speech. I have my doubts about that, because, again, going back to the point about psychological reactance, I can well imagine that you would actually exacerbate the problem if you, for example, made certain ideas forbidden or illegal to express.

SPENCER: It would certainly play into their narrative. It'd be almost perfect for their narrative: "Oh, they're banning the information now." So stepping back and looking at the bigger picture, how do we improve the norms around politics, in terms of how we can be better individuals when it comes to politics, and how we can build a better system overall?

MAGNUS: That's a great question. I have a list of recommendations that I make in the fourth chapter of my book. Those recommendations are based on the two previous chapters, in which I review some basic research on political psychology and political biases in particular. And the first and perhaps most important recommendation I make is: don't trust immediate intuitions. This is quite an interesting one, also because I know it relates to some of your work, Spencer. I've seen you give a talk about the role of intuitions in our decision-making, and I make a point that's very similar to one you made in your talk, namely that when it comes to large-scale decisions, such as political decisions about which policies we should endorse, we have good reasons to be highly skeptical of our most immediate intuitions. One reason is that our intuitions evolved in an environment that's very different from today's; our intuitions just aren't really built to take into account all the complexity that's involved in today's globalized world. And the reason it's important to highlight the recommendation of not trusting immediate intuitions is that trusting them is very much what we instinctively do by default. For example, there's the model that Jonathan Haidt defends, which he calls 'the social intuitionist model of moral judgment,' where we tend to have an immediate intuition that something is wrong. An example he gives in one of his papers is a case of incest, where many people, presumably for biological reasons, have a very strong aversion. The point, then, is that in the studies Haidt has done, they asked people for the reasons it's wrong, and it turns out that, often, the reasons people give don't actually have that much to do with why they say it's wrong, partly because many of the reasons people provide are supposed to be controlled for in the case they're presented with in the study. The importance of not trusting immediate intuitions is that it opens the way, you could say, to not getting stuck at our most immediate inclinations, and instead to having a conversation and an exploration of data at the empirical level, because that immediate intuition is often the bump in the road that ultimately stops us; we tend not to get further than that. So that's a key point. But that's not to say — as I'm sure you might also be eager to say — that we can never trust our intuitions at all; rather, a better approach, I think, is to look at them as a data point rather than as the final word.

SPENCER: Can you ground this in a political example where people are trusting their immediate intuitions too much, in your opinion?

MAGNUS: An obvious example might be something like whether drugs should be legal. People often have very strong intuitions about that, depending on where they sit on the political spectrum. But I think that's a case where actually looking at the empirical data will end up changing many people's views, and I know it has in many cases. This is actually something I cover in another chapter, which concerns justice, including criminal justice. For example, Portugal changed its drug laws in ways that seem to have had very good consequences — surprisingly good consequences in terms of, for example, reducing the number of drug addicts and also reducing the number of deaths due to opioids. Portugal had a very big problem where heroin addicts had a very high rate of AIDS. And the change in drug laws essentially meant that they went from punishing people with drug addiction problems to trying to help them. The effects have been quite a remarkable success.

SPENCER: Got it. So if I'm understanding you, a lot of people have an intuition about whether drugs are harmful or okay to do, and they're sort of using that intuition to shape their view on policy. Whereas, in fact, you're saying that if you look into the evidence, you can learn about what actually works; and on the question of whether we should change drug laws in a particular way, once we learn the empirical facts, we might actually have a lot more agreement about whether we should do it, as long as we converge on that evidence and agree on what it says. That might end up violating our intuitions or might be in line with them, but the point is that our intuitions aren't really doing the work there.

MAGNUS: Yeah. And then another complicating factor is that what we say about policies can often be seen as a signal of loyalty and as an endorsement. So, for example, if you are a conservative and you are strongly opposed to drugs, then wanting to change drug legislation can easily be confused with a signal that you actually endorse drugs or something like that, which is not necessarily the case. One could endorse a change in legislation while still being opposed to drug use, just as one could, for example, be in favor of legalizing alcohol but nonetheless think that people should drink a lot less, and perhaps ideally not drink at all. That's another thing: those maybe hidden signaling considerations can often get in the way of us thinking more clearly about politics.

SPENCER: Right. And there's both an identity issue there. Like, we might think of ourselves as someone who doesn't do drugs, and therefore, we don't want to say a statement that seems to contradict our identity. But then there's also, of course, loyalty to a group. Like, if we think our group is against drugs, or we think our group is in favor of drug legalization, we may not want to kind of go against the group that we feel part of.

MAGNUS: Yeah, that's true. And of course, those things are often very related. Our individual identities are often very strongly tied to the groups that we feel we belong to most strongly. And so, often you could say our individual identities become almost like a symbol of our group identities.

SPENCER: Right. And I suspect there are individual differences there, where some people's individual identities are almost entirely the same as their group identity. Whereas others have a pretty strong individual identity that's sort of pretty separate from their group's, or maybe they don't even have groups.

MAGNUS: Yeah. Conservatives and libertarians are great examples of the variation along that dimension. For example, conservatives tend to score uniquely highly on the loyalty dimension in Jonathan Haidt's moral foundations framework, whereas libertarians are the ones who score the lowest. And the libertarian David Friedman likes to tell this joke: "There might be two libertarians somewhere who agree on everything with each other, but I'm not one of them." That's a great example of what you're saying; famously, libertarians don't tend to be so conformist, even among each other.

SPENCER: Maybe there's just less sense of shared identity with other libertarians. Is that what you think is going on?

MAGNUS: I think that's part of it. But then there's also the fact that it is one of their deep traits, that they are high in psychological reactance. So they also are just uniquely individualistic as a matter of personality.

SPENCER: A little earlier, you mentioned political biases. And now we're kind of talking about individual identity versus group identity. Do you think of political biases as being more about the individual experiencing cognitive biases or more like some kind of group-level bias?

MAGNUS: I think both things are certainly going on. But the fact that we have group-level biases, in a sense, gives us precious information if we are trying to find out where our biases might be. For example, we could talk about many different, seemingly disconnected biases — motivated reasoning, overconfidence bias, confirmation bias — but I think looking at these through the lens of human tribalism is really helpful, because that lens can give us a good sense of where we should expect those biases to be most significant. So, for example, when it comes to overconfidence, you'd perhaps be especially overconfident about something that concerns a core idea that your in-group holds, and conversely, especially overconfident in rejecting some idea of the out-group. It's usually related, of course. There's a lot of evidence on this connected to another key term, 'hot cognition,' which is that we tend to process political individuals, groups, and issues in a strongly emotionally tinged way — and, by the way, a lot of evidence suggests this is unconscious and comes on before we can even frame our first conscious thought. We process our own group in a very positive light, and conversely, we process the out-group, their ideas, and their individuals in a very negative light. And all this seems to happen very reflexively, outside of our own awareness. Being aware of this pattern, I think, is really, really helpful. It's almost like a super tool for debiasing: being really aware of this pattern and how it materializes in different concrete biases.

[promo]

SPENCER: So say someone wants to apply that idea to become less biased themselves. What would you recommend for them?

MAGNUS: A good example, again — not because I want to pick on conservatives — but say you're a conservative and you're considering the issue of drug laws. You might look around you and notice, "Okay, pretty much everyone on my own team strongly opposes a change in drug laws, steps towards drug liberalization. I know, additionally, that conservatives tend to be very high in the loyalty foundation. So maybe that's also something I should be at least somewhat aware of and try to control for." And then you would try to keep that in mind as you look at the relevant evidence, and be aware of any strong resistance you might have to certain findings, and of the way you might give even the tiniest bit of counter-evidence vastly oversized weight compared to what one ideally should by normative reasoning standards. So that could be one example. One could also take an example that goes somewhat in the other direction. Say we have someone who's a liberal and who values equality, in some sense, maybe at the level of their moral foundations. They might strongly value equality, but on reflection, they might endorse impartial consequentialist values that don't necessarily assign intrinsic value to equality. In that case, you might think there's likely to be a strong bias in favor of thinking that equality necessarily always has the best consequences. And again, that's something that might well be strengthened by the fact that your own peers likely also favor equality, whereas the other side is at least more permissive of inequality. So you might be strongly inclined to view all evidence on that matter in a distorted way — or at least that's a possible systematic distortion worth being aware of.

SPENCER: This reminds me of what happened with Brexit, where I noticed that virtually everyone in my immediate social world thought Brexit was terrible and that the UK should stay part of the EU. And so I just kind of reflexively agreed with them. Then one day, I started thinking about how confident I was in that. And as I got into a more probabilistic mode — like, if I had to bet — I realized that almost all of my views on Brexit were either, "Oh, the people I know seem to think it's bad, so it's probably bad," or a bunch of heuristics like, "In general, cooperation is good, and breaking cooperation is bad; things being more united is good, and things being less united is bad." But when I really thought about it, like if I had to place a bet, I realized I was only maybe 75% or 80% confident that Brexit was actually bad. I wasn't really that confident, because there are just so many details in the actual consequences that something like that will entail, and I was really relying on these loose heuristics about what the people I know think and the idea that cooperation generally tends to be good. So I realized that, yeah, I probably was way overconfident initially that it was bad, just because of this reflexive deference to the people I know.

MAGNUS: I think that's a great point. And it also relates to another concept that I go over in my book, which is political overconfidence. We already know that people have a tendency to be overconfident. Some studies from back in the 70s showed that when people are 100% certain about a given claim, it's true less than 85% of the time — at least that was the finding in one study. A political context probably makes that even worse. There are actually studies confirming that overconfidence is uniquely prevalent in politics, probably partly for signaling reasons. And of course, the questions that we are concerned about in a political context are indeed uniquely complicated. Brexit is a great example, where there are so many uncertain factors — one of them being, what would the alternative deal be, given Brexit? I'm not sure many people had a good understanding of that, and of course it's difficult to know, because it also depends on what other parties decide. So for that reason, I think it's reasonable to be quite uncertain, and to even be quite skeptical of our own certainty and confidence in very complicated matters.

SPENCER: Yeah, it just seems like there are many major political questions where, if one were to start with no bias and just investigate totally neutrally, trying to look at the evidence, then after only an hour or two of research, the correct thing would be to be extremely uncertain, and to realize that it would take a huge amount of research to actually gain a substantial amount of certainty, because in practice, a lot of these things are just so complex. One that I think about is Obamacare, the overhauling of health care in the US. And again, based on a lot of heuristics, a lot of people I know support Obamacare, and probably it was a good thing in many ways. And yet, it's actually so complicated that I think I know almost nobody who really understands it — really understands what the implications are. I think it would just take hundreds of hours of research to really get what it does, how it changes society, what the trade-offs are, what the costs really are, what else that money would have been spent on had it not been spent on that, and so on.

MAGNUS: Yeah, that's a great example. And an additional complicating factor is that, often in these kinds of discussions, you hear about what works in another country. But that's not always so easy to generalize. It is useful evidence, but this is a point that the anthropologist Joseph Henrich has made in some of his books, namely that whether policies can work depends on the local culture and — as he would even say — the local cultural psychology that people have. So, for example, if you compare the United States with Denmark, people have very different attitudes about taxation and about interpersonal trust. Denmark is, in many studies, the country that scores the very highest in terms of interpersonal trust, whereas the US scores markedly lower on that kind of thing. And of course, that's just one metric. Such differences can end up making an important difference in terms of what policies can ultimately work.

SPENCER: Yeah, absolutely. It's really hard to generalize cross-culturally. There's an interesting phenomenon that occurs, where if I said, "Oh, I did an experiment on one person, where I gave them a pill, and they got better," most people would be really skeptical, like, "Oh, that's just one person." But we tend to be enamored by case studies of individual countries, like, "Oh, this country changed their policy, and look at all these benefits," and we find that pretty convincing. And there's something weird about that. We know that with one person, it wouldn't be very convincing, so why is it very convincing with one country? If you think about this more, I think there are times when it actually can be validly convincing and other times when it isn't, and I think it has to do with the causality and how well we understand it. So let's say a policy was changed in a particular country, and we really carefully examined what actually changed, and we realized that it makes a lot of sense that it led to a really different outcome, because X caused Y, which caused Z. Once we understand the causality well, we can go apply it to a new country and ask, "Well, in this country, does it make sense that doing X will cause Y and then Z?" Whereas if you treat it as kind of a black box — they made this change, and then this outcome occurred — I don't think we can be very confident that it would apply to a new scenario, because we don't really understand why it applied to the original scenario.

MAGNUS: Yeah, perhaps a relevant difference between a single country and a single person is that a single country obviously has a large population, so the population within the country is not N equals one. In that sense, I guess one could make an argument that we can have somewhat more confidence in the single-country case, but I still think it's a very good point that, in a relevant sense, it's still just one data point.

SPENCER: Yeah, in a certain sense it is only one data point, because it's only one culture, only one set of laws, and only one context. Yes, it's applying to lots of people, but that doesn't necessarily mean it's going to work if applied to a different culture, for example, or in a different city, or whatever.

MAGNUS: Just a quick point there: interestingly, in some of the more recent work by Joseph Henrich and his colleagues, he tries to map out a space of cultural variation, and he then looks at the distance between different cultures, which is often very closely related to how geographically close they are, although not always. I think it's interesting how such empirical work can likely be quite useful in terms of informing which policies might generalize and which don't. In this high-dimensional space of cultural variation, you should likely expect to be much better able to generalize based on cultures that are fairly close to you, versus those that are completely on the other side of the spectrum. One example — and it's just one dimension — is something like individualism versus collectivism, where a country like the United States is far towards the individualist end, whereas countries in Southeast Asia are far towards the collectivist end. And that can make huge differences in terms of how people function together socially, but also in terms of individual thinking. It has many, many consequences for how people process the world, even perceptually.

SPENCER: Oh, good point. And I imagine for different interventions, it's actually different axes of that high-dimensional vector that are gonna matter. So, for some kinds of interventions, the collectivism variable might matter. But then for others, maybe that's less relevant, and something else matters around (I don't know) corruption, or around income, or something like that.

MAGNUS: Right. And again, if you have a given policy, then one dimension might favor it in a given country, but another might render it strongly counterproductive. So it can, again, be very difficult to extrapolate what the all-things-considered evaluation is, based on those different dimensions and what their implications are.

SPENCER: A final topic before we wrap up. It seems like politics is too often zero-sum, where, at least in the US, it's the left trying to grab power and the right trying to grab power. It's like a tug of war, when in reality, there are so many opportunities for making the world better that you'd think the two sides could agree on. There are a lot of problems in society. Why can't we find problems that we all agree are problems and find compromise solutions where both the left and the right think the solution is better than the status quo? Even if it's not their ideal solution, everyone agrees it's better than the status quo: "Okay, let's go with it. Let's make society better." So, I'm curious to hear your thoughts on the zero-sum nature of politics.

MAGNUS: Yeah, exactly. And I think there are a couple of reasons explaining that. One is simply that zero-sum thinking is extremely intuitive for human brains, and it's something that has pretty much been with us for much of human history. The world has been largely zero-sum, or at least much more zero-sum than it is today, in economic terms — you didn't have growth rates resembling what we have today; you had pretty much zero growth for most of human history. And the fact that this way of thinking is intuitive, or at least very prevalent, is also shown in studies where — I think this was done in the United States — people could either choose economic policies that benefited their own country and also benefited another country, or choose economic policies that benefited themselves and hurt another country. And sadly, people often preferred the latter, even though the benefit to themselves was, I believe, the same in these studies. So this thing about caring a lot about relative differences between groups is somehow very, very deeply entrenched in us. And I think there's a lot of value in becoming more aware of this tendency and trying to control for it, because I think most people, on reflection, would say that's actually not something we freely endorse.

SPENCER: Do you think that mainly applies in competitive environments where countries view themselves at odds with each other or political groups view themselves as at odds with each other? Or do you think that would apply even in cases where sort of a random other country or random other group is not a competitor?

MAGNUS: That's actually a very interesting question, because I know there are studies suggesting that within countries, when there is less economic inequality, people are much less concerned about status anxiety and competition like that — something I cover in the twelfth chapter of my book. But whether that applies across countries, I haven't seen any evidence on. I wouldn't be surprised, though, if people in more equal countries were less worried about other countries gaining wealth or other good things.

SPENCER: I tend to think that there's this scarcity mindset that can occur, where you are viewing things as scarce and therefore precious, and therefore if someone else has them, then you don't have them. And then there's this sort of abundance mindset, where you don't view things as inherently a tug of war. And you don't view someone else's having something as taking away from you. But this tends to have to do with a feeling of a competitive environment.

MAGNUS: Yeah, a competitive environment can certainly be a factor. And then, additionally, another reason we might be inclined towards zero-sum politics is what some political scientists have called the 'identity-expressive' nature of our political convictions, which is that we often use our beliefs and opinions — again, relating back to the tribalism point — as a way to show which team we belong to politically. Some political scientists have argued that this likely exacerbates disagreements, or at least the appearance of disagreements, because often there doesn't actually seem to be that much underlying disagreement on actual policy substance, partly because people are often not particularly informed about it. But nonetheless, people tend to express very opposing views and use political discourse as an arena to signal their opposition to the other side. And that kind of setup is, by its nature, quite zero-sum, in the sense that people need to show opposition to the other side.

SPENCER: Magnus, thanks so much for coming on. It's been a fun conversation.

MAGNUS: Thanks for inviting me, Spencer.

[outro]

JOSH: A listener asks: How can I find people like your guests that I can talk to on a regular basis?

SPENCER: I love talking to people who love talking about ideas. That's one of my favorite things to do. So the way I would frame that is: where do you find people who love talking about ideas and are really intellectual and tend to be analytical? I think they tend to cluster; they're very highly clustered in certain places. So step one is, if you can, live in or visit those places. There are a lot of them in New York, San Francisco, London, and Oxford — a bunch of places like that. The second thing is that they tend to know each other. Obviously, they don't all know each other, but they tend to. So, once you get to know one person like that, they can take you to meet other people they're friends with, who are more likely to be like that, and so on. You can kind of meet them through each other. And then there are super-connector types — the people who tend to throw the big parties, or who tend to know everybody — and those people are especially good magnets and ways to meet lots of people like that.
