June 29, 2023
What is persuasion, and what is it not? How does persuasion differ from coercion? What is the Elaboration Likelihood Model (ELM) of persuasion? How are the concepts of assimilation and accommodation related to persuasion? Motivated reasoning is usually seen as a cognitive bias or error; but what if all reasoning is motivated? Are we motivated more by physical death or social death? How much evidence would Flat-Earthers need in order to be convinced that Earth is round? What are "deep" canvassing and "street" epistemology? In what contexts are they most effective? Under what conditions is persuasion morally acceptable?
David McRaney is a science journalist fascinated with brains, minds, and culture. He created the podcast You Are Not So Smart based on his 2009 internationally bestselling book of the same name and its follow-up, You Are Now Less Dumb. Before that, he cut his teeth as a newspaper reporter covering Hurricane Katrina on the Gulf Coast and in the Pine Belt region of the Deep South. Later, he covered things like who tests rockets for NASA, what it is like to run a halfway home for homeless people who are HIV-positive, and how a family sent their kids to college by making and selling knives. Since then, he has been an editor, photographer, voiceover artist, television host, journalism teacher, lecturer, and tornado survivor. Most recently, after finishing his latest book, How Minds Change, he wrote, produced, and recorded a six-hour audio documentary exploring the history of the idea and the word: genius. Learn more about him at davidmcraney.com, or follow him on Twitter at @davidmcraney.
SPENCER: David, welcome.
DAVID: Oh, thank you so much. It's an intense pleasure to be here. I'm very honored to be asked to come on your show. I love all your work and all the stuff you do. And I was extremely blown away when I happened upon you one day and was like, "Oh, wow, I'm meeting you in person now," so this is really cool. Thanks.
SPENCER: Oh, thanks so much, David. Yeah, and I really enjoyed coming on your podcast as well. Today, we have a really fun topic — which is really relevant to society today — which is how minds change and how persuasion works. I feel like this is so relevant because there's such polarization, where it feels like the Left is trying to change the minds of the Right (or at least convince people not to listen to them) and the Right is trying to change the minds of the Left (or convince people not to listen to them). And then we also have lots of different opinions where people are trying to persuade us of things, whether it's companies trying to persuade us, or politicians trying to persuade us, so very, very relevant. Let's jump into this. First, can you tell us what is persuasion? And what is persuasion not?
DAVID: That is a great question. When I started this project, I wouldn't have been able to answer that question with any kind of authority. I think I was like a lot of people; I thought persuasion was just getting people to see things my way or getting people who I thought were factually incorrect to not be factually incorrect. One of the great things about this topic is, if you ask 1000 experts, you'll get 1000 different answers for what persuasion is. It's easier to answer what it's not. Persuasion is not coercion; it is not doing anything that would take away the other person's agency or trying to get them to fall in line with a certain way of thinking or seeing the world. It's also not an attempt to defeat your opponent with some sort of superior intellectual argument or some sort of superior moral argument. And it's not a debate either — a debate in the sense that somebody's going to win and somebody's going to lose. What persuasion is, is leading another person along in stages, and helping them better understand their own thinking, and see how that could align with the message that's being communicated. You can't persuade another person to change their mind if they don't want to do that. And so a lot of the techniques that I've talked about in the book focus on the person's motivations to resist the message or their motivations to see the world in whichever way they're seeing it. And in many ways, persuasion is just encouraging people to realize that change is possible on something for which they may have not considered change being even on the table. And at the end of the day, to sum it up, persuasion is always going to be self-persuasion. People change or refuse based on their own desires and their own internal counter-arguing, and that resistance is proportional in strength to their certainty or their confidence on the issue. And most of the best techniques are focused on that, like: let's examine your certainty and what is feeding into that; let's examine your confidence and what's feeding into that, and get as metacognitive as we can get. And just in doing that one step, oftentimes, people find that they are much more open to changing their mind on something than they were before the conversation began. That's an overview. It takes a lot more than that — a whole book [laughs] — to get into what it means; just defining the phrase 'change your mind' took a lot of time and effort. It's a very nuanced and complex concept. But at the end of the day, that's what persuasion is.
SPENCER: See, I'm surprised about that framing of it because I feel like, when people think about persuasion, they think about people trying to convince them of things, whether it's an advertisement trying to get them afraid of something so that they'll buy insurance on their next flight, or it's a politician trying to make you feel like the other politician is a bad person because they did some bad thing, and therefore get you to vote for them. The way you describe it, it sounds much more like something that you're involved in, that you're not persuaded against your will. But I feel like a lot of common usages of the word 'persuasion' involve this more sinister-seeming thing.
DAVID: For sure, yeah. And I would put all that in the category of coercion, which is not what I'm discussing in this book or this project. And there are things that fall out into different silos when you're thinking about whether you're persuading someone to act or behave in a certain way. That would be trying to get someone to buy something or switch brands or something like that. You're affecting their behavior, so there's persuasion in that category. There's also persuasion in the category of: you believe something that is not true, and I would hope to persuade you to see this more accurately, and that's a different kind of persuasion than trying to affect somebody's behavior. Then there's moral or attitudinal, value-based persuasion, which is much more about a person's emotional attachment to the issue and the feelings of identity, or being driven or motivated by much more instinctual things, things that come out of the emotional side of cognition, the amygdala and so on. So it breaks out into lots of different categories once we establish what we're talking about, and then we try to get nuanced about it. But yeah, sure — whether it's PR or advertising (they're trying to persuade you to do something or to make different decisions), and politicians are gonna do the same thing — where it starts to get complicated, where people usually (I think) get their Venn diagrams all messed up, is seeing a belief as the same thing as an opinion, or seeing beliefs and opinions as the same things as attitudes, or seeing all these terms as talking about roughly the same thing: values, attitudes, beliefs, and so on. But they're very different when it comes to — not just the psychology and the neuroscience — but when it comes to what we're trying to talk about here: what are we persuading each other of? For instance, if you say, "I think the President is a bad president," then that feels like a belief. But really, what you're expressing is an attitude that comes online whenever you contemplate that attitude object. Now that might be informed by all sorts of beliefs — and some of those beliefs may be true and some of them may not be true — but if I'm attempting to change your mind about that statement, that presentation of your position, what I should do is employ things that affect attitudes. Because if I try to affect your belief on the matter, there are all sorts of ways for you to get out of that, and it's not going to be nearly as effective as me trying to understand — and help you understand — what is generating this attitude, what's fueling it, and where we can go inside there and think about what certainties are inside that, what confidences are inside there. So it's important that, when you decide you want to employ a persuasion technique, you know exactly what the target is: an attitude, a belief, or a value. And if you're trying to affect someone's behavior, the things that you want to focus on when it comes to getting someone to behave in a certain way may not be the same sort of things you'd want to focus on if you want them to believe in a different way.
SPENCER: Got it. So you're saying that if you want to change someone's behavior, that attitude is usually a better target than belief?
DAVID: It depends. It depends on the issue. If I want you to buy my brand of razor blade, it's more about picking which persuasion handbook you're going to pick up, of all the ones that are available. When it comes to affecting people's behavior, the Elaboration Likelihood Model is the one that seems to be most effective. Because most of the things people choose in their lives — the products they pick, the route they take to work, the things that are very behavior-based, things that involve deciding, making plans and goals, and then committing to decisions that require you to move your body around in some way or another and behave — can be affected by beliefs, attitudes, or values. But the model that puts it all together is the ELM, and we can go into that if you'd like. What I like about the Elaboration Likelihood Model is that the scientists behind it were asking the same questions you're asking right now, because the state of research into persuasion going into the 1980s was a real mess. That's exactly how it was described to me. If you wanted to make an A in your classes and earn a degree in the field of persuasion research, you would have to memorize the outcomes of every single study because there didn't really seem to be anything that unified it. In this situation, this mattered; in that situation, it didn't matter. This communicator in this format would increase the likelihood of persuasion; in a different format, it would decrease it. This message in this format would increase; in that format, it would decrease. So the original model that led into the ELM was: who says what, to whom, in which channel, and to what effect? Who is the communicator? What's the message? Which channel is the medium or the context? And who is the audience? And the effect would be: what is the impact? In all the research that led up to this point around about the 1980s, you'd have, like, 'this communicator works great in this channel, but not in this channel,' or 'this message works great with this audience, but terribly with this audience,' and so on. And these two incredible researchers, Petty and Cacioppo, just wanted to pass their classes. They just wanted to earn their degrees. So they took one of the rooms in the house they were renting on campus and painted every wall with blackboard paint, and just wrote out all the research they were trying to understand so they could memorize the outcomes of the studies, and they started to see the model that would eventually be called the Elaboration Likelihood Model by grouping things together. And if you'd like to, we can go into that.
SPENCER: Yeah, let's do it. Why don't you walk us through the Elaboration Likelihood Model?
DAVID: It's right there in the name: the likelihood that someone will elaborate, and 'elaboration' is a strange term in psychology. The way it was described to me by Petty was: imagine you're watching a commercial for soap, and in the commercial, they say, "You should buy our soap. It'll make you smell like flowers." The message in some people's minds will then enter into elaboration, where you're adding your own thoughts on top of the message, and you might think, "I would love to smell like flowers; I love flowers." And so for that person, the message will be quite effective. For another person, it's, "Ugh, I don't want to smell like flowers because everybody will make fun of me at work." It's the same message, but it's going to have the opposite impact on those two people because they had two different forms of elaboration after getting the message inside their minds. The Elaboration Likelihood Model states that when elaboration likelihood is high, people tend to take what's called the central route, and when elaboration likelihood is low, they'll take what they call the peripheral route. On the central route, people will pay attention to the merits of the argument: is it well reasoned? Is it logical? Based on my expertise on the issue, does it seem like it's accurate? Those will be the central cues that they'll look for, that they'll try to pay attention to. On the peripheral route, you will pay attention to much more emotional cues: is the speaker attractive? Does this message seem like it contains a lot of big words? Does this person seem eloquent? Do they have a prestigious degree? And even other cues, like, "If I pay attention to this, they're gonna give us pizza at the end of this lecture," or something. You're paying attention to peripheral cues. In their research, one of the examples that really (I think) illustrates all this well is, they told students that there was going to be a comprehensive test at the end of their entire time at the university. At the end of all their classes and all their years of work, they were going to have to take this final exam on everything they'd ever studied, and that was going to determine whether or not they got their degree. Some students were told that it was going to go into effect at their university, and some were told it was going to go into effect at another university. So right away, you have two variables here: some people feel like this is going to happen to me, and some people are just like, this is going to happen somewhere. And then they listened to arguments, and they were further divided into two groups: one group listened to arguments that had been established (in a previous study) as really solid, fantastic arguments for implementing this policy — strong arguments — whereas others listened to weak arguments. The strong arguments were things like: prestigious universities require exams like this. And the weak arguments would be things like: this harkens back to the traditions of the Greeks. And what they found was that the students who thought it was going to happen at their own university processed the arguments on the central route: they were moved by the strong arguments but not by the weak ones. On the central route, the weak arguments were ineffective. The students would see the flaws in them. They'd see the opinion-based aspects of them, the emotional appeals, and so on.
In fact, if a person was in a state of high motivation, if they heard nine poor arguments, they would be less persuaded than if they had heard three poor arguments. But for the unmotivated students, the ones who thought it was gonna happen at somebody else's university, the more arguments they heard of any kind, the more persuaded they were, because they were using a heuristic, which was just: more arguments equals better. It didn't matter if they were good or bad arguments; the number was what they were paying attention to. They were on the peripheral route, and they were being persuaded by peripheral cues. And there are lots of other studies like this. They did things with disposable razor blades and all sorts of other stuff where they would adjust all these variables, turning all the knobs. And they found this model works out in almost every situation, and it became a really robust model for how to make the sort of arguments you would want to make to persuade a person to either advocate for a behavior or engage in a behavior. It's a really good model for persuading people to fall in line with some sort of future goal, plan, or decision, and so the ELM still stands really tall in that world.
SPENCER: It sounds like whether elaboration is likely or not is a really big factor — it determines which route you'd want to use, central or peripheral. But what actually affects whether elaboration is likely?
DAVID: I love that question. Some of the things that affect it are, first of all, if you're told you're going to have to teach this to someone, it really, really spikes motivation. It spikes the likelihood that you're going to elaborate and go on a central route. If you are already familiar with the topic, if you have some expertise, that also will increase the likelihood. If your social identity or social concerns are going to come into play — like you could be shamed or ostracized for having a poor position on an issue, or if you engage in a certain behavior, you might take some sort of social sanction hit — that's going to increase your likelihood. Things that will decrease it are cognitive load. If you don't understand the issue very well, if it's very complicated in some way, that's going to decrease your elaboration. If the room is noisy, or if you're hungry, or you're angry, those are things that will decrease the likelihood of your elaboration. It boils down to motivation and ability. If this is going to affect you, or you understand the issue, or some sort of social cost could come into play, that's going to increase the motivation; without those things, it decreases. And then ability is straight-up cognitive ability, which can be affected by how much cognitive load is in front of you, but also: did you eat recently? Is the room uncomfortable? Is the message in a weird font? Is it yellow text on a white background? All those things can affect it.
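To make that route-selection logic concrete, here is a minimal sketch in Python. Everything in it — the function names, the 0-to-1 scales, the 0.5 cutoff — is an illustrative assumption, not anything taken from Petty and Cacioppo's research; it only encodes the motivation-times-ability idea David describes above.

```python
# Illustrative sketch of ELM route selection. All names and thresholds
# here are assumptions for the sake of the example, not from the research.

def elaboration_likelihood(motivation: float, ability: float) -> float:
    """Both inputs range from 0 to 1. Elaboration needs both motivation
    (relevance, social stakes) and ability (expertise, low cognitive load),
    so multiply: if either is near zero, elaboration is unlikely."""
    return motivation * ability

def predicted_route(motivation: float, ability: float) -> str:
    """Which kind of cues will dominate for this listener? (0.5 is arbitrary.)"""
    if elaboration_likelihood(motivation, ability) >= 0.5:
        return "central"    # weighs argument merits: logic, accuracy, reasoning
    return "peripheral"     # weighs surface cues: attractiveness, big words, pizza

# The exam study: "it will happen at MY university" students are motivated...
print(predicted_route(motivation=0.9, ability=0.8))  # -> central
# ...while "it will happen at some other university" students are not.
print(predicted_route(motivation=0.1, ability=0.8))  # -> peripheral
```

On the central route, argument quality is what matters (nine weak arguments persuade less than three); on the peripheral route, sheer argument count can persuade on its own.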
SPENCER: Another idea in this space is that of assimilation and accommodation. How does that fit in? And what do those words mean, assimilation and accommodation?
DAVID: Yeah, this is how minds change, and I love this so much. It makes so much of the world make sense, which is ironic, because that would be an example of assimilation and accommodation. These are the twin engines of how we update our models of reality. And this is also foundational to pedagogical concepts. This is how learning works. At the end of the day, learning is changing your mind, and we change our minds either through assimilating or accommodating. I'll start with a real-world example. A little kid sees a dog for the first time, and you point at it, and you say, "Look at the dog." And they add this word to their vocabulary; they add this concept and this category to their model of reality. And something categorical happens: non-human, doesn't wear clothes, walks on four legs, has a tail, covered in hair — something like that is happening. They may then, later on, see a horse and point at it and say, "Dog," or they might try to do a little adjustment and say, "Big dog." And if you say, "No, no, that's not a dog, that's a horse," you've witnessed assimilation and accommodation. Assimilation is coming into a situation where you're confronted with novel information — something ambiguous, something new — and trying to make sense of it by fitting it into your existing model of reality or existing categorical understanding. In this case, if you say, "That's a big dog," you're trying to assimilate it: "Look, it walks on four legs, it's not a person, it's covered in hair. This seems to fit my understanding of this." When your parents or someone says, "No, that's not a dog, that's a horse," you have to acknowledge, "Oh, okay, well, it turns out that I'm going to need to accommodate this. I'm going to need an even higher-level category" — which would be something like a creature or animal, in which both horses and dogs fit — "and there must be something nuanced and specific about this that separates it from the dog." That's accommodation. We're always doing this, at all times. When we are in a novel situation, where our previous understanding of the world — which provides us with our inferences, our expectations, our predictions — doesn't work out the way that we thought it would, in that state of novelty, you get this dopamine response which grabs your attention and focuses it on the situation, and motivates you to assuage the negative affect you may be experiencing by updating your understanding. And one way you can do that is to assimilate. That's disambiguating novel information by fitting it into your existing model in some way so that you can interpret that novelty as a confirmation of the model's existing accuracy. Accommodation, on the other hand, is acknowledging that your model must be incomplete or incorrect in some way, and that's why this novelty isn't making sense to you. You update the model so that the novelty is no longer an anomaly but a new layer of understanding, and you expand your mind to accommodate it. But as you become an adult, that model becomes so complex that it's much easier to assimilate than it is to accommodate, and the risk versus reward of going on a difficult and expensive accommodation effort is just less appealing than trying to make things fit into what you already understand. So we walk, as they call it, a tightrope between accommodating and assimilating, and we tend to favor assimilation.
One researcher said something like, "In small amounts of incongruency, we become alert, but we will err on the side of our priors." It's just a lot more effortful, and sometimes a lot more risky, because all brains are going to resist change at a base level. Updating when you shouldn't is dangerous because you might become wrong. Not updating when you should is also dangerous because you might stay wrong. So you walk that tightrope, and something has to tip the balance — make it seem a lot more risky either to update or not to update — for you to decide which side you're gonna go. And that can happen through another entity, another agent, communicating to you that it would be very risky for you to not at least entertain the possibility you might be wrong in this situation. Or it can just come from the fact that you've engaged with the same kind of novelty and ambiguity so many times, and you keep coming back without the outcome that you predicted you would get. That can also deeply motivate you to go on an accommodation effort. So both of them (I guess) count as changing your mind, but accommodation (I think) is the one that we think of when we think, "Okay, that person really has updated the way they see the world."
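One way to picture assimilation and accommodation is as schema updating, as in this small Python sketch of the dog/horse example from above. The feature sets, the overlap threshold, and the function names are invented for illustration, not drawn from the book.

```python
# Illustrative sketch: assimilation tries to fit new input into existing
# categories; accommodation restructures the model. Details are invented.

schema: dict[str, set[str]] = {
    "dog": {"non-human", "four legs", "hairy", "tail"},
}

def categorize(observed: set[str]) -> str:
    """Assimilation: pick whichever existing category overlaps most with
    the observed features. Cheap, and the default path."""
    best, best_overlap = "unknown", 2  # demand at least 3 shared features
    for category, features in schema.items():
        overlap = len(observed & features)
        if overlap > best_overlap:
            best, best_overlap = category, overlap
    return best

def accommodate(label: str, observed: set[str]) -> None:
    """Accommodation: the model was incomplete, so restructure it by
    adding a new category instead of forcing the fit."""
    schema[label] = observed

horse = {"non-human", "four legs", "hairy", "tail", "large", "hooves"}
print(categorize(horse))     # -> "dog" (the child's "big dog!")
accommodate("horse", horse)  # the parent's correction forces an update
print(categorize(horse))     # -> "horse"
```

The asymmetry David describes falls out of the sketch: calling categorize is cheap, while accommodate rewrites the model, which is one way to see why adults with complex schemas default to assimilation.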
SPENCER: When people are attempting to persuade us, are they going to tend to be more successful if it's a persuasion of assimilation, where we're fitting into what we already believe, even if the change is not as profound? What would you say about that?
DAVID: You can really think about it like this: if you're teaching a class — maybe you're a YouTuber or a podcaster or something — and you're trying to introduce new concepts to people, we have an intuition for how powerful assimilation is, in that we try to give examples that people already understand, or we try to fit it into something that's happening in pop culture, where we try to say, "If you think of it like this, it's kind of like that, but add this element." So it's much easier to teach someone something new when you build on the foundation of things that people already understand. It's when trying to understand something new or trying to disambiguate something — when the risk-reward calculation comes in of, "Okay, if I update my priors on this, it could get me into trouble with my trusted peers," or "If I update my priors on this, it's going to cause a cascading collapse of this entire understanding I have of X" — that's when it becomes much more likely that a person is going to lean into the assimilation, lean into their confirmation bias, lean into that side of things to try to avoid the threat that is intuited from the accommodation. That's the resistance I think we come up against oftentimes when people are like, "Ah, I don't know. This person feels unreachable. Nothing I say gets through to them." And that usually comes from a risk-reward calculation happening on their side that they may not be able to articulate — it may not be salient to them in any way — but the threat of updating has entered the fray in a way that causes them to feel a very visceral resistance to the message.
[promo]
SPENCER: There's a challenge I've faced. I call it the magnetic concept problem, although I don't know if it has another name. It's when you're trying to explain a concept to someone and they already have a category in their mind that's pretty similar, but it's not correct — it's not what you're trying to say. And you find that whenever you talk about this thing — I call it a magnetic concept because the new idea just sticks to the concept they already have — they'll keep trying to apply the nearby concept, which is close but not correct. It's very, very hard to establish a new concept that's close to one they already have but not the same.
DAVID: Yeah, that's great. That's totally true. There are plenty of old aphorisms for that: "It's not what you don't know that gets you in trouble. It's what you know for sure that just ain't so," which is often attributed to Mark Twain. He didn't say it, sorry. Another quote, often attributed to Jung, that, ironically, he also did not say: "The mind oscillates between sense and nonsense, not right and wrong." Yeah, this magnetic thing you're talking about is very in line with all that. Our priors really, really affect our disambiguating of the unfamiliar and the novel, and we tend to interpret things through our current understanding. We don't come to things blank. We don't come to things neutral. Everything is filtered through a zillion zillion lenses of previous experience. Some of those experiences are going to cause you to have emotional reactions to things, and some of them are just purely 'I understood it this way.' The flowchart, the diagram, the blueprint in my mind is getting in the way of this understanding you're trying to offer me because it does already make sense to me in this way. Yeah, that's totally a thing. We interpret things through our priors, and our priors can sometimes be very helpful and sometimes they can get in the way.
SPENCER: Another important element of minds changing is motivated reasoning. Do you wanna tell us a little bit about how that plays into it?
DAVID: I love motivated reasoning. I've been writing about this for ages. And it's really strange to me, and beautiful, that there's a cultural value in the West, especially in different cultures within the United States, where we like to think of ourselves as purely rational actors. When we are trying to understand something, we go down into the bowels of our castles and consult our scrolls and, by candlelight, raise a finger and say, "Aha, this is what I think about gun control," or whatever it is we're trying to make sense of. And it's nice to think of ourselves in this way. We would all (I think) like to think we were deeply contemplative in such ways, that just the facts are all that matter to us, and that we base all of our decisions and understandings on a preponderance of evidence. It's just that all the research into human cognition and reasoning does not support that. We are very motivated reasoners. And even before getting into the research on motivated reasoning, we already intuitively know what it is. My favorite example these days is: if you have a friend or you know somebody who's falling in love with someone new, and you ask them, "What is it you like about that person? Why are you falling in love with them? Give me your reasons," they will say, "The way they talk, the way they walk, the music they're introducing me to, even the way they cut their food — these are all the reasons why I'm falling in love with this person." But if you visit that person at a later date, when they are breaking up with that exact same person, and you ask them, "What reasons do you have to want to break up with them?" they'll often bring up the same things: the way they talk, the way they walk, the music they make me listen to, the way they cut their food. All of those 'reasons for' become 'reasons against' when the motivation to search for reasons changes, and the motivation changes because all sorts of things have changed in their dynamic. At the end of the day, they have a different attitude about that person, and they're searching for reasons why they might have that attitude. And they go cherry-picking the evidence available for something that seems to support the feeling they have at this moment. And they're motivated to do that. Something that is important to note here is that Reason with a big 'R' is something that does exist. There's logic and propositions and things you learn in philosophy classes. That's not what we're discussing here. We're talking about reasoning. In psychology, reasoning, very simply, is coming up with reasons for what you think, feel, and believe. And as we've further studied this, we have found that reasoning usually is employed to create some sort of defensive argument for the sake of reputation management among our most trusted peers. So we often are motivated to find reasons, not only to explain ourselves to ourselves, but to explain ourselves to others, just in case we might want to defend our position within the social hierarchy. We're incredibly motivated reasoners. And it could be as simple as, 'I'm motivated to do that which delivers the best outcome for me.' But it could also be, 'I'm motivated to maintain a certain identity in different situations.' And what it comes down to is: we justify and rationalize a whole lot. And we live a life based on internal narratives that are, in themselves, just justifications and rationalizations for our conclusions and our decisions and our goals and so on.
And knowing that is important when it comes to persuasion because, often, if you're trying to just dump a bunch of facts on people, if you're trying to fight them with Wikipedia or something like that, if you're trying to debate a person in such a way where you think, "I'm going to win because I'm the expert," or "I have the evidence," you're going to come up against motivated reasoning. In the same way that a person gets on Google to look for evidence that vaccines cause autism because that's what they feel is true, and then finds all sorts of places that confirm that feeling, and cherry-picks the evidence in that way, the same thing is going to happen in a dynamic where two people are discussing an issue. Everybody's going to be cherry-picking, through reasoning, the reasons that seem to justify and rationalize their position. Every persuasion technique that I found that works well floats up above that, gets metacognitive, and helps the person recognize that's what they're doing, and then calls into question whether or not these are actual justifications, whether these reasons are supported by the evidence, and so on. You have to get metacognitive to avoid these traps. But if you attempt to have some sort of plain 'just the facts' conversation with someone, or you try to persuade someone by dumping a bunch of facts on them, well, you'll get a lot of the things that we see oftentimes on social media [laughs], or the things that our health agencies attempted during COVID — even with something as seemingly simple as trying to convince someone who believes the Earth is flat, or that 9/11 was an inside job, or that Pizzagate is real. They have plenty of facts that they can pull up and say, "These support my argument," and your facts are not going to work on them. You can just think about it: if you've ever been in an argument with someone and they started sending you a bunch of YouTube links, or asking you to go read this website or read this book, it usually doesn't work on us. And the same thing is going to go in the other direction. It's just not a very effective way to go.
SPENCER: It seems to me that the relationship between our beliefs and our reasons varies a lot. For instance, there are times when we're really truth-focused. For example, let's say you're trying to get to a wedding and you're late, and you're really trying to figure out, "Well, how do I get to the wedding as fast as possible?" You're probably really responsive to evidence about what the best route to take is, and if someone makes an argument about why it's better to take this route, you're probably listening really carefully, because you really want to get to the wedding before it starts. This ties into Julia Galef's idea of scout mindset versus soldier mindset. You'd be in a scout mindset in that case because you really are trying to figure out the truth. Whereas, let's say you're at the wedding, and someone brings up a politician that you hate, and they're talking about how that person is really great and all the reasons for that — there, you're probably not really being responsive to the reasons that they're giving. What you're really doing is, you've got this feeling that this politician is bad, and the reasons you generate are just a response to the feeling that you have. And there, you're in more of a soldier mindset, where you're just trying to beat them. And I think there are other attitudes beyond those as well. For example, in Jonathan Haidt's famous studies, where he would give people these moral scenarios — for example, they'd have someone who masturbated using a dead chicken — and he'd ask people whether it was immoral, and they would say yes, but then they had trouble explaining why, because it really was just that it felt immoral to them. They had trouble turning this into concrete reasons, and so they would just generate one reason, or generate another reason, but the reasons were kind of haphazard — just attempts to turn a feeling into something you could explain to another person. Anyway, my point is, I just think that it's quite context-dependent.
DAVID: No, you're dead-on there. What I love about what you just said is, in all these instances, there's a goal in mind. The reasoning is motivated. When I'm trying to get to the wedding, I have an accuracy goal. The goal is accuracy. I want to get there, I want to make it there alive and safe and as quickly as possible, so accuracy matters to me; that's the goal. When it comes to debating a political issue, we often believe (of ourselves) that accuracy is still the goal, that we're trying to be correct. But that's usually not the goal. You may not be aware of what the actual goal is. The goal is often, "Am I demonstrating that I'm a good member of my group?" That's the actual goal. But you may not believe that about yourself. You may not admit that about yourself. It may not be something that is salient or articulable. Brooke Harrington told me that, if there was an 'E = mc²' of social science, it would be that the fear of social death is greater than the fear of physical death. And in any scenario where the threat to your social self comes into play, even in a small way, this will trump all your other motivations. And this will become the new thing that is guiding your behavior. And we see that very often. The Flat Earthers are a nice, neutral example of this, where people have all sorts of motivations that lead them into being interested in that topic — going online, looking on Reddit for places to join, meeting people, going places in person — and eventually, it becomes a social identity. And whatever motivation you had to get into that topic becomes secondary, tertiary, and so on down the list. The number one motivation that's driving your behavior at a certain point is, "I would like to be looked upon by other members of this group as a good member of this group." And each group has different ways of determining that and different signals that people become aware of. But that's the actual goal that you're in pursuit of. So getting to the wedding, you're motivated by accuracy. Conversations at the wedding are going to be motivated by all sorts of things that you may not even be aware of. And this is true across the board. The antecedents of our thoughts, feelings, and behaviors are often unavailable to us, and the drives and motivations that are getting us to the chocolate cake or to the polling place are often unavailable to us as well. With the persuasion techniques I talk about in the book, that's where a lot of the more successful ones go. They're like, "Can I help this person uncover and articulate and evoke these hidden motivations, drives, and so on that are actually behind the position that they're stating or the anger they're experiencing right now?" That's the power of accepting the fact that we are all motivated reasoners.
SPENCER: Yeah, I think a really fascinating thing about human nature is that we can do a thing for reasons that are different than we believe we're doing it, and we can genuinely be wrong. It's not obvious that that should be possible. But it seems to be a really common occurrence.
DAVID: The Jonathan Haidt studies you talk about — that 'moral dumbfounding,' as you describe it — always remind me of the split-brain patient stuff. They'll ask people, "If your dog died, would you eat it?" or something like that. Or "Would you clean a toilet with an American flag?" — stuff like that. And people very quickly, very instantly, say yes or no to those kinds of questions. And it's not all that dissimilar from saying, "What's the last movie that you watched?" And you say, "Top Gun: Maverick." And I say, "Did you like it?" It's like, "Yeah, I loved it." It's very quick. You can sample your emotional state, your attitude, and respond, and then deliver the information to the other party, saying, "Yes, I feel good about this," or "No, I felt bad about this." But then if I ask you to go in there and get metacognitive and explain to me, "Well, how come you don't want to clean a toilet with an American flag? What's wrong with that?" Or "Why wouldn't you want to eat your dog? Letting it go to waste seems kind of bad." "What did you like about Top Gun: Maverick? What would you give it on a scale of one to ten? How come?" That sort of introspection is difficult. And oftentimes, you are bad at it. And you can't actually get to the source of where these emotions are coming from. In Haidt's work, they would do all this work ahead of time where they would try to make sure they had caulked every seam of the ship. They knew every possible argument a person might make, and they were going to destroy all of them ahead of time. So as they asked people, "You say you don't want to clean a toilet with an American flag. How come?" and they'd say this, this, and this, they would knock all those down one at a time until the person eventually exhausted all possible rationalizations and justifications for their position. And they'd just say something like, "I don't know why. It's just wrong." And that's the moment of moral dumbfounding. In split-brain patient work, back in the day, when they would treat certain forms of epilepsy by doing a partial severing of the connection between the hemispheres — a corpus callosotomy — they would get into a very unique research situation involving the portion of the brain that is mostly responsible for generating narratives and communicating them, the spokesperson for the entire organism, which they call the left-brain interpreter. If you hid from that portion of the brain stuff that the person was looking at, experiencing, or doing, and then asked that portion of the brain to explain what's going on, you'd get these interesting situations where they would show people images of car wrecks and mangled bodies and stuff. And the person would be like, "Ugh!" and they'd say, "Oh, what's going on there? Why are you responding that way?" And since the portion of the brain that can speak for the body did not see the car wreck, and only noticed that the body was going, "Ugh," they would say, "Oh, I ate something for lunch that's not agreeing with me," or "I've been feeling sick lately." Or they'd say, "Hey, if you don't mind," and hand them a piece of paper that says, "Stand up and walk over here." And then they'd ask them, "Why did you do that?" And they would come up with some rationalization or justification for the behavior, like, "Oh, I wanted to go see what this pencil looked like over here in the corner."
And they call that confabulation, which is: you do not have access to the antecedents of your thoughts, feelings, or behaviors, but you will very easily, quickly, and almost effortlessly produce rationalizations and justifications for whatever you're currently thinking, feeling, or doing, which may or may not be true. It tends to be whatever seems most justifiable or most reasonable and rational to the person listening. And the same is true with the moral dumbfounding. People keep attempting to justify and rationalize their positions until they get to the point where they don't actually have the information. They don't actually know why they feel so strongly about the flag in the toilet. At that point, they've been exhausted to the moral dumbfounding place. Both of those, I think, reveal what Hugo Mercier and Dan Sperber talk about in their interactionist model: that you have two systems for making sense of the world when it comes to disambiguating — one for receiving information, one for delivering information. And the part that argues for what you're planning to do, or argues for your current state of mind, or argues for your current attitude, may not be privy to that which is motivating you to feel so strongly about it. And I think a lot of poor persuasion techniques or poor messaging — especially broadcast messages that go to large audiences, done without the knowledge that your audience members may themselves not be aware of what would be most impactful — lean on a facts-only approach that doesn't always work out. It's more powerful to put people in that position of, "Oh, I actually don't know why I feel this way. Now I'm motivated to try to understand myself better." And that seems to be a better route.
SPENCER: It seems to me that, if we think about introspection, I think introspection often works when you're genuinely looking for the root of why you're doing a thing. Whereas, if you're doing introspection in a social setting, where you're having to explain yourself to another person, it feels to me that it's much more likely to produce these confabulations. I'm wondering if you agree with that.
DAVID: Yeah, I think that it's all going to depend on what you intuit the most positive outcome is going to be. And in a social situation, the most positive outcome could be, 'This maintains my current reputation.' It could be, 'This enhances my reputation.' It could be, 'This gives me a position of power over another person.' It could be, 'This communicates my empathy.' Once you are motivated by social outcomes, the spectrum of possible goals is just enormous, and accuracy may not be one of those goals. We've all been in positions where telling the complete truth is not exactly the right thing right now. And I would agree with what you say there, that it's very context-dependent and social contexts make it much more complex.
SPENCER: What is the affective tipping point? And how does that relate?
DAVID: I love the affective tipping point. I was very happy to come across this. I was looking into motivated reasoning early on, when I was still convinced that facts were the only way. And there was this great research by David Redlawsk, who was asking: do motivated reasoners ever just get it? Is there an amount of evidence I can give you where you'll stop this? Let's say you believe the Earth is flat. [Let's always go there because it's nice and neutral.] If you believe the Earth is flat, how much evidence that the Earth is not flat would I have to give you before you go, "Oh, okay, I was wrong about that"? That's what he was looking for. The affective tipping point is the moment at which people can no longer justify ignoring a preponderance or an onslaught of disconfirmatory evidence, whether it's a fact-based issue or something more in the world of morals and politics and attitudes. The supposition here is that no organism could survive without some fail-safe for when counter-evidence becomes overwhelming. And so there should be some sort of tipping point at which we would switch from conservation mode to active learning, where we would go from assimilating to accommodating — something that says, "Okay, I definitely am wrong about this," or "I'm not getting the outcome so reliably that I've got to update my priors." So what Redlawsk did is, in the study, they had people take part in a pseudo-presidential primary, and they chose a primary so they wouldn't introduce the variable of polarization; people were still inside the party they planned to vote for. They knew they were in a study and it was fake. They had all these possible presidential candidates, and they let people look at as much information as they wanted to about those candidates before going in. And some people looked at hundreds of pieces of information before they chose, "This candidate is the one that I would vote for." And then every week of the fake presidential campaign, they would check in with their subjects to see how much they still supported their candidate. And they divided people into different groups, and in each group, people got a little more counter-evidence than the previous group. So the way they did it was, if you looked at ten pieces of information in one group, one piece out of the ten would suggest that maybe this person doesn't align with your values. And then in the highest group, it was eight out of the ten pieces of information. They had them watch a fake channel — not fake news, as in (quote, unquote) "fake news," but scientifically fake news. They knew they were in a study, but the information they were receiving was coming from a website or a television station that existed only for this research. Some people got a very small amount of counter-evidence, and some people got a whole lot of counter-evidence. What they found was that, up to about three out of ten pieces of information suggesting that you'd chosen a candidate who doesn't align with your values, people ended up liking that candidate more by the end of the study than people who received no counter-evidence at all. There were these backfire effects where people were pushing against the information and finding reasons to defend their candidate, deepening the network of associations they had related to them, and becoming much more motivated to find something good in them and defend them.
Whereas, past that point, people just got exhausted by it, and they no longer resisted. So above about 30% of the incoming information — if 30% or more of the information coming in suggested that your current understanding was incorrect, or went against your goals and your desires — that's when people started to engage in active learning, and they started to update their understanding of the situation. Redlawsk calls that the affective tipping point. It seems to be at around 15% of incoming information that you start becoming doubtful, and at about 30% you get that 'Mm, I might be wrong' feeling and you start to update. But the big caveat here, he said, is to be careful, because this is a study; it's controlled. In the really real world, people vary. On some topics, the number is going to be a lot higher because it's much more important to the person to maintain their prior position. And also, people have a lot of control over what goes into their brains. If you feel threatened by the idea of changing your mind about a particular issue, you might adjust the flow of incoming data by either avoiding information that's going to disconfirm your assumptions, or altering your media diet so that the sources you get your information from are very unlikely to say things that make you feel bad about how you already understand the world. People have a lot of control over whether or not they reach their affective tipping points. But it is good news to know that there is one.
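Here is a toy Python rendering of the thresholds David just quoted. The 15% and 30% figures come from his summary of Redlawsk's findings above; the function name, zone boundaries as hard cutoffs, and return strings are purely illustrative.

```python
# Toy model of the affective tipping point. The thresholds are the ~15%
# and ~30% figures quoted above; everything else is invented for the sketch.

def react(counter_evidence_fraction: float) -> str:
    """A motivated reasoner's response to a stream of information in which
    some fraction disconfirms their chosen candidate."""
    if counter_evidence_fraction < 0.15:
        # Backfire zone: counter-arguing deepens support for the prior.
        return "backfire — likes the candidate even more"
    if counter_evidence_fraction < 0.30:
        return "growing doubt, but still assimilating and defending"
    # Past the tipping point: ignoring the evidence is no longer tenable.
    return "tipping point crossed — active learning, priors updated"

for fraction in (0.10, 0.20, 0.35):
    print(f"{fraction:.0%} counter-evidence: {react(fraction)}")
```

As David cautions, the real thresholds move around: the more identity-laden the topic, and the more curated the person's media diet, the more counter-evidence it takes.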
SPENCER: Yeah, it makes a lot of sense that, at some point, you just can't ignore the evidence, even in something that's so deeply held. I'm not a religious person. But if Jesus reemerged, and is turning the oceans to fire, and all the Christians go flying up into heaven one by one, at some point, you're like, "Okay, I'm pretty sure that this thing is real," right?
DAVID: One of those researchers told me (this is exactly how they put it to me): you walk into your kitchen in the morning and there's a bunch of frogs playing in a marching band going across the kitchen floor. Your first feeling is going to be, "Okay, I know that that's not possible, that can't actually happen. Somebody is playing a trick on me. I have been drugged. This is a hologram," something like that. You're going to try to assimilate it first. You're going to try to think of things you already understand about the world that would make this make sense. But yeah, like you said, there is a tipping point. If I saw the oceans turn to wine and everybody getting raptured, I'd resist at first and try to figure out what's going on — how am I being tricked? But there is a point at which you can no longer justify ignoring the disconfirmatory information.
SPENCER: Even something like one plus one equals two, which some people will say, "Oh, no, that's just a logical truth." Imagine one day you woke up and, whenever you put one plus one in any calculator, you try it and it says three, and you try using that in life and it seems to solve problems for you the way that 'one plus one equals two' used to solve them, and then you ask a friend, and they're like, "Of course, one plus one has always equalled three," at some point, you're gonna be like, "Okay, maybe I just somehow misremembered that one plus one equals two?" I don't know.
DAVID: Which illustrates how dangerous gaslighting and propaganda and coercion can be because, since there is a tipping point, that means that, if you control the media diet of your audience, and you control the outcomes of their attempts to test reality and judge their own priors against what they're currently experiencing, it can be hijacked in that way. And it's important to know that we are all susceptible to that sort of thing. You don't necessarily have to fall into a cult to experience that kind of controlled affective tipping point used for heinous purposes.
[promo]
SPENCER: David, before we finish up, why don't we talk about some practical techniques that, through all of your research, you think are actually useful for people to learn about?
DAVID: Yeah, I never ever thought this is where I would land, mainly because I didn't know these institutions and these groups existed. But along the way, as I was trying to understand the science of all this... I never wanted to write a book that was like How to Win Friends and Influence People part two, or get into the Cialdini stuff. I just didn't think of it that way. I really just wanted to understand: how do people change their minds? How did we go from being a nation that was so anti-same-sex marriage to being pro-same-sex marriage, and how did that happen so quickly? Or how do norms, like smoking norms, change? If I could put people in a time machine and send them back 50 years, and they would argue with themselves, what was it that happened in between those two points? That was the idea. But along the way, I did find — thankfully — there were people who were researching: yes, that's great, but how could you actually apply any of these things? So the Elaboration Likelihood Model is one of the great things I found, but the other ones were street epistemology, and Smart Politics, and deep canvassing. And all of these are also very similar to motivational interviewing. What I discovered was, there were all these groups across the world who were all concerned with how we get better at changing people's minds. And none of these groups had ever met each other. None of these groups were aware of each other, but they were all using the exact same technique. And if you put it in a numbered order, the steps were in the same order also. And that blew my mind, because they had all developed their techniques through A/B testing across thousands and thousands of conversations — throw away what doesn't work, keep what does. And in so doing, they landed in the same place that therapeutic models like motivational interviewing had landed, because that's also what they did; they did A/B testing. So what all these groups independently discovered was the same thing: that brains work a certain way when it comes to trying to make sense of ambiguous things, or things that are counter-attitudinal, or things that seem to call into question things you thought were facts. The flow from reactance to the affective tipping point to acceptance — there's a way this tends to operate when the source of the information is another living human being and you're engaging in a conversational context. So motivational interviewing came out of the therapeutic domain — patients and clients who were dealing with addiction, alcoholism, things like that. The therapists were noticing that — when they would say, "Well, here's your problem," or "Here's what you ought to do," or "Have you ever thought about how this is something you shouldn't be doing?" — when they would challenge clients in that way, the client would respond with something they call reactance. Reactance is like when you're a kid, and you know your room is a mess, but your mom says you need to clean up your room, and you don't, because your mom told you to. It's that feeling that your agency is under threat. And when your agency is under threat, you will double down on decisions and plans and goals and behaviors that will secure your agency against the person who's threatening it. But they found that the better route to getting clients to adjust their behavior was to get them to generate, themselves, the arguments that the therapists had been trying to (sort of) copy-paste into their brains.
I can use that movie example — you can try this with any of your friends today, or on yourself. You just ask: what is the last movie you watched? Did you like it? If they say yes, ask, "Okay, on a scale from one to ten, what would you give it?" And usually, at that moment, people will go, "Well, hmm, let me think about that." And that's when they've engaged in that effortful metacognition, the introspection. And oftentimes, it's very hard to explain, "Well, why do I give this movie an eight instead of a nine?" And then in motivational interviewing, they would ask, "Well, nine is not a ten. What are some reasons why you didn't give it a ten?" And when people give you arguments for why they wouldn't give it a ten, they're really giving you counter-arguments against their own position, and oftentimes, they'll move down to an eight or a seven or so. That's the essence of that. And what I found was, there's a group in California engaged in something called deep canvassing, and I went out there and visited them three or four times, trained in it, and went door-to-door with them. At the time, they were dealing with same-sex marriage laws and transgender bathroom laws and things like that. And they would go door-to-door, and they would go to places where it was very likely they'd find people who had voted against those issues. And they would ask them, "How do you feel about the issue? Where are you on a scale of one to ten?" When I met them, they had done it 17,000 times. They'd had 17,000 conversations, and they'd A/B tested their way to a conversational technique that was more likely to end in persuasion. When I was talking about deep canvassing with people, they would tell me, "Have you ever heard of street epistemology?" And so I went out to Texas and met the people who are using that technique. Street epistemology is very similar, except what makes it unique, and what makes it great to have in your toolkit, is that street epistemology is great for topics that are fact-based, like 'Is the Earth flat?', while deep canvassing is great for issues that are much more political or attitudinal or morality-based. And I'll give the listeners some tips and tricks from these. Each of them has ten steps or more, but you don't really need to know all ten steps. The most important step is step one, and it's the same step in every one of these, which is: you need to establish rapport. We are social primates. Right off the bat, we're going to try to determine whether or not you're a potential enemy or an ally. We're going to really be looking out for: is this person attempting to steal some agency from me? Are they going to put me in a position where I'm going to be shamed or ostracized by people who are important to me? That's upfront, and, by establishing rapport, you assure the other person that you're not out to shame them. You ask for consent to explore their reasoning. You make it very evident that you're not really there to even change their mind. You're just interested in what they think about the issue, and you want to explore it. The idea is that you're trying to get out of a debate dynamic, where I'm going to win and you're going to lose. Instead of going face to face, we're gonna go shoulder to shoulder, and we're going to marvel at the very fact that we disagree. I find you a rational and reasonable person. You seem educated. You seem to care about things that I care about, too, and the fact that we would disagree is interesting. I would like to understand why we disagree.
Would you like to explore why we disagree together? And so now you're going shoulder to shoulder, looking at the disagreement instead of trying to defeat the other person. Once you've established rapport, if it's a fact-based issue, you ask for the claim. You confirm the claim by repeating it back and asking if you've done a good job summarizing it. And once they say that you definitely have their position and are communicating it well, you clarify their definitions, and you make sure you use their definitions and not yours. For some people, politics is all about civic duty. For other people, it's a bunch of lizard men in suits, divvying up the country over cigars. Whatever they're using as definitions, use those. At this point, you bring in that numerical measure of confidence. If you're talking about the Earth being flat, you ask, on a scale of one to ten, or from zero to 100%, "How certain are you of that?" And that's where the rest of the conversation takes place, because you will ask, "Well, what reasons do you have to hold that high level of confidence? And what methods have you used to judge those as being good reasons?" and so on. That's pretty much the whole conversation from there. Believe it or not, that's usually all it takes to get somebody to reconsider their position. With a political or attitudinal issue, it's almost exactly the same thing. You ask, from one to ten, how strongly they feel about the issue. Then in deep canvassing, they will usually tell a story or show a political ad that could be for or against the person's position; it doesn't really matter which. After that, they ask, "Is your number still the same?" And once the person has shared that, comes a really powerful question: "Why does that number feel right to you?" It's the same idea here. They're gonna go, "Um, well," and then they start to offer up their justifications and their rationalizations, their reasons for feeling the way they feel. From there, they ask things like, "Was there a time in your life before you felt that way?" Or, in the same way as street epistemology, they're asking the person to try to understand, maybe for the first time they've ever even attempted it: "What do you think is informing your justifications and rationalizations? And underneath that, what do you think is the source of this attitude?" All the techniques I found fall into the same bucket. They're called technique rebuttals, as opposed to topic rebuttals. If you're in a good-faith environment, where you're a scientist or a doctor or a lecturer and you're all playing by the same rules, topic rebuttal is a great way to go. My facts versus your facts, the preponderance of the evidence for this hypothesis versus another: that's topic rebuttal. But in a conversational dynamic — especially with a stranger, a loved one, or a family member, where reactance is very likely and you're trying to avoid it, and where you're trying to establish or maintain a rapport you can carry forward for years to come — technique rebuttal is the way to go, which is genuinely trying to help the other person introspect, to understand why they hold this level of confidence or certainty and what's motivating it.
SPENCER: So you have the person reflect on why they hold that level of certainty and not more and not less. And you mentioned a little while ago that you could have them reflect on, "Well, why aren't you at a stronger number? Why aren't you at a weaker number?" which gets them to generate arguments on one side. Let's say I'm at a seven on opposing gay marriage, and you say, "Well, why aren't you an eight?" That gets me to reflect on why I don't oppose it more strongly. Is that the idea?
DAVID: Yeah. And then, oftentimes — I've watched so many of these and been part of some of these — people will, on their own, generate cognitive dissonance. They'll say, "Oh, wow, I've just said I have this position. I just said this. I just produced arguments on both sides of this." And that feeling of, "Oh, I need some resolution to this dissonance that I'm generating on my own," creates a completely different kind of conversational dynamic. You don't feel like you're being berated or challenged. You don't feel like you are being asked to take on somebody else's views. It doesn't feel that way at all. It feels like this person is helping me understand why I have this position, and in so doing, I'm wondering, have I ever really thought this through? And that's the place where the most powerful change usually occurs.
SPENCER: Would you be up for doing a quick roleplay, where I'll play the role of someone with a belief and you [inaudible]
DAVID: We could do it. If you want to do something neutral.
SPENCER: I can be a flat Earther.
DAVID: Yeah, we could do flat Earth. I often demonstrate this using movies, because it's so easy and quick and neutral, but we can do flat Earth.
SPENCER: Just to be clear, I don't believe in a flat Earth, but I will play someone who believes in a flat Earth, and you can take me through it. Does that sound good?
DAVID: Yeah, easy. And it's also important to remember that 180s are not the goal here. Any amount of change counts as change, and it usually takes many conversations. So if you're looking for that complete flip, you are going to be disappointed, but it does happen sometimes; I have seen it happen. In this case, okay, sure. Well, Spencer, I'm so happy that you want to talk about this. I find this such a fascinating issue as somebody who is such a nerd when it comes to space stuff and sci-fi stuff. I just want to hear your thoughts on it. What I would want from this is for you to show me what this is all about. If you're into it, I would just like to explore the topic with you. Are you cool with that?
SPENCER: Yeah, that sounds great.
DAVID: Cool. And if you're really okay with it, I'd like your consent to explore where your beliefs and all this come from, if you're cool with that, too.
SPENCER: Yeah, I'm totally happy with that.
DAVID: Okay, well, I guess to get this going, it's really way easier if we have something like a claim we can start with. When it comes to flat Earth, what would you say is your fundamental belief when it comes to all that?
SPENCER: Yeah. Just like everybody else, I grew up thinking that the world was round, and I never really questioned it. But as I started watching more and more YouTube videos about this, I started to realize that there were all these anomalies that I never paid attention to with the round Earth theory. And as I dug deeper into it, the round Earth just didn't hold together. I'm not exactly sure what there is, but I'm pretty confident the Earth is not round. I'm pretty confident it's flat. I think most likely, there's an ice wall that surrounds much of what we call the world, but I don't know all the details.
DAVID: If I hear you correctly, you used to believe in a round Earth because that's what we all get taught in school, but you were able to find all sorts of places, voices and people online and YouTube and things like that, who started to present you with some doubts. And then, when you go forward from those doubts, there are also some things that seem to make a little bit more sense than a round Earth, like the ice wall and the flat stuff. Is that true?
SPENCER: Yeah, exactly.
DAVID: Okay. I'm wondering, and I know it's very complicated, there are all sorts of ways to try to make sense of how the Earth could be flat. But on just this one idea that the Earth is flat, where would you put yourself if you had to rate your confidence level, zero to ten? Or let's use percentages. From 0% to 100%, how confident are you? How certain are you that the Earth is flat?
SPENCER: I'd say I'm at 90%.
DAVID: 90%. That's way up there. Wow. How many things in your life do you have around 90% like that?
SPENCER: Yeah, I guess basic fundamentals: things I care about, people I care about, things like that.
DAVID: Yeah, me, too, me, too, me, too. And I like that you said 90% and not 100 because, I mean, it's tough to be 100% about anything, right? I'm wondering, with flat Earth, what would it take to get you up to 100%?
SPENCER: I think I would probably have to see some of the things with my own eyes. I'm still counting on what other people are saying. But I haven't gone and checked out the ice wall. I haven't viscerally seen this information.
DAVID: Well, if you did take a voyage to the edge where it's supposed to be and there was no ice wall there, if you just kept on going, how do you think you would respond to that? How would that affect your confidence?
SPENCER: Well, that wouldn't necessarily mean that the Earth is round. Maybe it's just flat and goes on further. But I guess it would call into question certain information sources that I've trusted. It would make me less confident that they were right.
DAVID: Okay, so it sounds to me like you do value evidence. You really do want to be right about this. At the end of the day, you want to believe true things, right?
SPENCER: Of course. I mean, one of the reasons I think the Earth is flat is just, you look around, it seems flat. You'd need a pretty strong, convincing argument to believe otherwise, because that's the observation of our senses. It's flat.
DAVID: Yeah. So I'll stop us here. But you can see, because I can feel it, that this conversation has about 20 or 30 minutes ahead of it. What I've got here is a lot of green lights, because one of the things you want to establish early on is: is this person open? And if not, then you'd go down a path of, how do I better open them up? If you had said 100% or 0%, that would have been a huge red flag that you're in what they call pre-contemplation. And so we'd have to engage in the kind of conversation that would get you into contemplation, which is just a fancy way of saying you've entered a state of active learning on the topic; if you're not there yet, there are probably reasons why. There are about four or five bullet points in the literature about how to get a person into contemplation if they're in pre-contemplation. What you did give, which is really important, was you used the word 'trust.' And it's very likely — and this is not true for every flat Earther, but it's true for most — that this has nothing to do with the facts at the end of the day. It has nothing to do with the Earth being flat or round. What it has to do with is a lack of trust: first of all, probably in military-industrial-complex kinds of things, and then a lumping of NASA and the scientific community into that same sphere of distrust, seeing it all as some sort of interplay between those worlds. And there's going to be all this desire for autonomy, this desire for agency, this desire for 'I need to trust that this source of information has my best interests in mind,' and so on. So what we do is explore that, explore the trust aspect of it. And then we come back to, "Well, what can we do, do you think, to really understand this and find out: is it flat or is it not?" And what you're helping the other person do is get out of that 'I'm going to tell you about all the YouTube videos I've watched. I'm going to tell you about all the books about this topic. I'm going to tell you about all the websites you can go to.' Instead, we're getting into, "Let's explore the concept of critical thinking together." It may be that you've never heard about some of this stuff, or never built a foundation for the best way to come to a position on an issue we can't possibly be experts on. I have a position on climate change. I have a position on gun control and so on, but I couldn't possibly be the expert on those topics. I have to depend on experts to help me make sense of those things. What methods am I using to vet those experts and to vet expertise itself? That's the conversation we'd have. And at the end of that conversation, the hope would be that you've opened the person up to the possibility that, "Okay, I could change my mind about this. My opinion is not immovable." That's really the goal you're going for in this first conversation with the other person.
SPENCER: I felt that in the roleplay, you did a really good job of getting consent throughout, where you weren't pushing me to do anything. You were asking, is it okay if we do this? Is it okay if we do that? And that brings up for me the question of ethics, because whenever you're talking about persuasion, there's a question of whether it's ethical. If your goal is to move someone's opinion or someone's attitude, you're going into the situation trying to change them in some way. How do you think about when it's ethical to persuade?
DAVID: Yeah, I think that if you're going to get into the weeds of techniques that have steps, there should be a step zero. And the step zero should always be: why do you want to change this person's mind? What are your goals and motivations here? And why is this important to you? Be honest with yourself about that first. Otherwise, you could have what I would consider unethical purposes: you just want to be right and want them to be wrong; you want your side to win and their side to lose. I think that persuasion is ethical when the other person has complete agency at all times. This is a conversation where both of you admit to the fact that everything is a mystery we're trying to solve. And if we have disagreements over anything, that's great, because I can gain from your perspective and you can gain from my perspective. I can gain from your experience, you can gain from mine. And at the end of the day, what we're trying to do is reach the truth. And if the truth is our goal, then I'm willing to change my mind in this conversation. That's what we're doing. We're trying to solve a mystery together. I'm not trying to overwhelm you with wherever I'm at currently. That sidesteps most of the ethical concerns that I have when it comes to things like this. And if, on a fact-based issue, your goal is "I would like to have the most accurate understanding of this," and the other person agrees to join you in that goal, then your combined efforts will move you toward being as factually correct as you can possibly be, because you're working together. When it comes to a political or moral or attitude-based issue, we need to agree that we want to reduce harm in this world, or suck some poison out of the world. We're looking for goals that will create a positive outcome, and we need to establish whether we agree on what positive means in that regard. That's where the ethical and moral concerns come into play (I think) most of all. If you haven't been honest with yourself about that, it's gonna be difficult for you to join forces with another person who has different political views but shared values. That's what you're really hoping for: that we may have different political persuasions, but our values are the same; we want the same sorts of things from this world, and we want to reduce the same harms. We just have different strategies for doing so, and what do you think about teaming up and trying to figure out who's got the better strategy? Or how can we modify each other's approaches to get closer to the goal that we both share? If you approach it from that perspective, you're much less likely to get into these quandaries. But I agree with you: persuasion can be, and often is, used for unethical reasons. As a concept, as a psychological entity, it just exists. How it is employed is the determining factor in whether or not it is ethical.
SPENCER: I think a tricky issue here with the ethics is that most people feel like they're on the good side. And if your justification is, "Well, I'm trying to persuade them to my beliefs, because I'm right, and I'm on the good side, so of course, that's ethical automatically," then the problem with that argument is it's completely symmetrical. Everyone thinks they're on the good side, everyone thinks they're right. So everyone can use that exact same argument and it doesn't mean that it gives you the ethical high ground. And so I think it's not enough to just say, "Well, I'm on the right side, and they're on the wrong side, so any persuasion is fine."
DAVID: Yeah, I totally agree. Hopefully, your objective is to engage in a conversation in which you might actually change your mind. Hopefully, you'd be open to the idea that maybe you're wrong. And wrong can mean many things: factually, morally, politically. There are all sorts of ways to be wrong. But thankfully, via evolution and via lots of different inputs, we are set up to get into a sort of 12 Angry Men scenario with other human beings and realize that we have a limited perspective, a limited set of experiences, and a limited amount of expertise. That plays into all sorts of interpretations that may be incorrect, that may be wrong, that may be incomplete. And on top of that, we are social primates who are going to really want our side to come out ahead in everything, and we will tend to believe that we are on the right side of everything, whatever side we happen to be on. All that needs to be rolled into step zero for me, which is, "Do you know why you're trying to change this person's mind?" If you haven't been honest with yourself about that, that's where you get into these very-easy-to-slip-into unethical uses of powerful persuasion techniques.
SPENCER: David, thanks so much for coming on. This was a great conversation.
DAVID: Hey, thanks so much, man. This was great. This is easily the most actually-talk-about-it conversation I've had on this topic so far. So I really appreciate that.
SPENCER: I'm glad to hear that.
[outro]
JOSH: A listener asks, what project or product are you most excited to release next?
SPENCER: Well, one project I'm really excited about is a website we'll be launching that will help you get estimates of psychological constructs. And I'm not going to say too much else about it. Hopefully in the next six months, 12 months, we'll have a first version out, and you can play around with it.