CLEARER THINKING

with Spencer Greenberg
the podcast about ideas that matter

Episode 045: Explanatory Depth and Growth Mindset (with Daniel Greene)


May 27, 2021

What is the illusion of explanatory depth? Are there forms of debate or dialogue that actually help people to change their minds (instead of stacking the incentives such that people feel forced to harden and defend their views)? What is epistemic "debt"? Should people avoid having opinions on things where they haven't thought deeply and carefully about all of the relevant considerations? How does one choose which experts to trust? What is "growth mindset"? How can social science be used to do good in the world?

Daniel Greene is a postdoctoral researcher and fellow at the Center for International Security and Cooperation at Stanford University, where he works with Dr. Megan Palmer to research methods of engaging life scientists with the potential safety and security risks of their work. He has a Ph.D. in Education from Stanford and worked as a social psychologist and data scientist at the Project for Education Research that Scales. You can find more information about Daniel at danielgreene.net.

JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Dan Greene about the illusion of explanatory depth, mental models of belief and opinion, the relationship between expertise and trust, growth mindset, and trends in the social sciences.

SPENCER: Dan, welcome. It's really good to have you here.

DAN: Thanks, Spencer. It's great to be here.

SPENCER: The first thing I want to talk to you about is this idea of the illusion of explanatory depth. Can you tell us what that means and how we can apply that idea?

DAN: Yeah. The illusion of explanatory depth is a concept from social psychology, which is the idea that you sometimes think that you understand something better than you actually do, and when you are prompted to try to explain it, you realize how little that you know. For example, if you ask people how the refrigerator works, or how the toilet works, or how a hospital works, people might express a certain level of confidence in how well they think they understand that. And then if you ask them, "Okay, well, just tell me how does it work? Walk me through the causal steps of what happens," often people will be surprised at how much they're stumbling to explain these things. I've certainly had this experience many times, and it's kind of funny and baffling and humbling. And if you ask them again to rate how well they understand something, often, their confidence in understanding will go down. So this suggests that people sometimes think that they know something more than they do, and that the process of trying to articulate it reveals their own ignorance. And this isn't a new idea; this is a new label for a phenomenon that I think people have observed for a long time. In some ways, I think the whole project of Western philosophy in many cases — or philosophy in general — involves probing into things that we think we know about, and then asking, "What exactly do you mean by that?" Just picture Socrates going around asking people, "What do you mean by justice? What do you mean by love?" And people think that they know and then, the more he probes, the less they realize they understand. And so I just think that's a really fascinating concept (and a useful one to run with), to try to improve your own thinking, or at least realize the state of your own thinking.

SPENCER: I like the idea of applying this illusion of explanatory depth to words like love or justice, and then thinking of that as what philosophers are doing. That's a cool idea. But as I understand it, there's another aspect to this, too, which is not just that people realize they understand the thing less than they thought; it also can change their attitudes or their certainty around a topic. Isn't that right?

DAN: That's right. There's a study done in 2013 by a researcher named Philip Fernbach and his colleagues. They applied this idea to policy positions — things like single payer healthcare, or a national flat tax — and people come in with certain positions and certain levels of confidence in those positions. And he would ask people to rate how well they understood the issues, and then to explain how that issue worked in causal terms, like "Okay, first, this would happen, then this would happen, and that would lead to these consequences." And then he would ask people to re-evaluate their understanding of these things, and then they would rate the extremity of their attitudes (or the strength of their opinions) on these issues. What he found was that, not only did people become less confident in their understanding, the extremity of their positions also went down. So they became less partisan; in other words, they became more moderate. And this showed up in some behavioral intentions, too: they reported being less willing to donate money to a group advocating for one cause, which might make you more confident that this isn't just lip service that people are giving the experimenter.

SPENCER: That's really cool.

DAN: Yeah, it's interesting. It's like they're realizing their uncertainty, and that is actually having an effect on their position. This is a potentially powerful intervention to moderate extremism among partisans.

SPENCER: It seems like, if you had groups that really disagreed on a topic, you could get them all to explain the topic in detail, and that might actually bring them closer together, because they'd realize there's a level of uncertainty that they all share.

DAN: Right, maybe, though it's funny that you said "groups" and it's interesting to think about how this might change in a group setting. I think this research involved individuals struggling to explain something. But I know that group dynamics can add some interesting elements to people's thought processes and justifications for what they believe. So you might imagine that, in a group setting, if I'm trying to back up my opinion about some policy issue, I might be uncertain but I might not want to show that uncertainty or acknowledge it because my fellow partisans are watching me. And so who knows, maybe this effect wouldn't be as strong in a small group setting. Maybe it has to happen one by one to seed doubt in people's minds about the certainty of their opinions.

SPENCER: Good point. It also suggests why debates can be such a farce, really, if a debate is just a point-scoring match where you're just trying to make your opponent's view look dumb and your view look good, as opposed to actually trying to get to the truth of the issue. And the fundamental challenge there is, there's an audience, right? And when there's an audience, you're not even really talking to your opponent, you're talking to the audience.

DAN: Yeah, I absolutely agree. I think debates are fun. They're fun to watch. It's fun to watch people score rhetorical points on each other. And I think there's probably value in sharpening one's rhetorical ability perhaps, though in some ways, it seems dangerous to me as a way to improve your ability to persuade yourself of different positions. I'm curious if you've heard of any sort of alternate discussion mechanisms (instead of a debate), something where you can score points for acknowledging your own mistakes, or have some kind of scoring or reward system that encourages people to move towards agreement and truth and consensus rather than towards persuasion of a third party.

SPENCER: Good question. I've actually run workshops a few times where, basically, the goal was to find things that people disagreed on. So people would be paired up with a stranger, they'd be given a survey of a whole bunch of controversial topics, and they would each fill it out. And then they'd overlay their sheets on top of each other to find examples where one of them really strongly agreed, and the other strongly disagreed. And it could be anything; it could be about capitalism versus socialism, or it could be about abortion rights, or it could be any topic. And then, once they found a topic that they strongly disagreed on that they were willing to discuss, the idea is that they would go through a structured format, and the key to the format was that it changes the goal of the discussion from convincing the other person to becoming a more collaborative enterprise. So the way I structured it is, first, person A has to explain to person B why they think what they do until person B can state it back to their satisfaction. It's a variant on Rapoport's rules, if you've heard of those. And then they would flip roles, so then person B would explain what they think until person A could say it back to their satisfaction. And now they both understand each other — okay, that's a great starting point, which is actually often missing in debate — and then the last stage was that they would actually collaboratively work together to try to figure out why they disagree. So they're not trying to convince each other to agree with them, but they're trying to figure out together what the source of their disagreement is. I suppose it is similar to the Center for Applied Rationality's Double Crux technique, although I think it's a different variant on that.

DAN: Yeah. What was the outcome of that workshop? How did it turn out?

SPENCER: Well, I've done it three times and I feel like it actually went really well. I was worried a little bit that there would end up being some really heated arguments but, actually, there really weren't. People seemed to get along really well during it, even talking about quite controversial things. One of the people — actually, this was kind of surprising — came up to me after and said, "Oh, I had this discussion about circumcision with this other person here and they actually convinced me that I shouldn't have my child circumcised. And my sister just had a baby, so I'm going to go and try to convince my sister not to circumcise." And I was like, "Wow, that happened in this conversation." So that was really, really cool to see and there were some other ones like that, where it just seemed like people were talking about things that were difficult. Another one was, one person (who's a libertarian hyper-individualist) was talking to a Chinese woman (who's very communitarian). And the libertarian came up to me afterward, saying how her mind was kind of blown, experiencing the culture of someone who's so communitarian because she never thinks in that way of like, "Oh, well, the group comes first before the individual." So that was cool, too.

DAN: Did any of the conversations have audiences or third parties? Or was it just like one-on-one private conversations?

SPENCER: No, it's one-on-one, and that's really important. I did ask some of them if I could listen in with their permission. But other than that, there was no audience because, again, it just completely changes the social dynamics, as we were talking about before, because now you're thinking, "Well, what does that person watching think about me?" instead of, "Okay, I'm following the conversational rules."

DAN: Right. It's so funny and refreshing to be in a conversation with someone who has a mindset of being excited about mistakes or opportunities to learn. There's such a clear difference between someone who is excited when you point out a flaw in their reasoning and someone who gets defensive when you point out a flaw, and it's striking that it's actually possible to be excited. The classic example, I feel, is mathematicians who often don't have a dog in one mathematical fight, one way or another (though I'm sure sometimes they do). But I think there's a culture or stereotype in mathematics where you're constantly switching sides, like, "Maybe it works out this way or maybe it works out that way. Let me take this position for a minute to try to attack the problem from this angle and you take the other side, and then you'll point something out. And then we'll both learn." You have a math background. Was that your experience?

SPENCER: Well, I think with math, it's so verifiable compared to other things, that it is really rare that you'll have a long-standing disagreement, unless it's a topic where nobody can really prove the thing, [laughs] and then it's just one mathematician's intuition versus another. But more commonly, it's like, "Oh, you think this, I think that. Okay, let me show you this proof." Then, "Oh, okay. Yeah, you convinced me." Although funnily enough, I think people underestimate how often math papers actually have mistakes in them. I think mistakes are not that uncommon, actually, in math papers. But ironically, usually when there are mistakes, they're actually correctable. In other words, the theorem is still right, or very close to being right; with a few tweaks, it can be made right. And that's because we have this false sense that the way you do math is, you do a series of deductive steps to prove something. But really, the way you do math is (yeah, you do some deductive steps), but there's a bunch of intuitional stuff where you're trying to understand the thing on a deep level. And then later, once you've convinced yourself of the thing, you try to come up with a proof to show that it's true. And that's why the proof is the last bit. A lot of times, it doesn't actually have that much power in terms of determining [laughs] why you believe the thing.

DAN: Right, yeah. I think sometimes the illusion of explanatory depth can creep into that situation where you might not actually know why you believe something that you believe in. There might actually be a good reason, but it's something to be discovered and articulated through communication or through writing.

SPENCER: Yeah, great point. A lot of times, the intuitive knowledge that we've learned just from experience is hard to articulate because it's a deeper-in-our-bones knowledge or a predictive knowledge or something like that. I'm a very amateur martial artist; I do mixed martial arts just for fun. And I was trying to teach a jiu-jitsu move to my friend (a move I can do very well; it's actually my favorite move, and I use it a lot) and I could not explain how to do it. I just kept having to get on the mat and demonstrate it, because every time I tried to turn it into words, I was just failing.

DAN: Yeah. It's fun to think about other examples, when people just feel really strongly that something has value, and they just can't articulate it. Religion comes to mind as an obvious example for me. Talking to family members or friends who are religious, you can poke lots of holes in specific arguments and, yet, people keep coming back to religious traditions, religious communities, and it's an example of something where it seems obviously like there's something of value that's more difficult to access in your typical way of explaining things.

SPENCER: Yeah, absolutely. And I think it's also important to keep in mind that, if you ask someone, "Well, why do you think this," or "Tell me what's good about that idea," or something like that, the thing that they'll say to you is very often just the first thought that comes to them about why it might be a good idea. It is not necessarily their best argument for why it's a good idea, because their best argument isn't the first one that pops into their mind. And also, they might have a lot of this intuitive knowledge that, again, is hard to explain but is based on life experience. You might ask someone, "Oh, what should I do in this scenario with my friend where I'm having a fight," and they might tell you to do something. And it really might be just that they've had so much life experience of dealing with different people that their internal predictive models are making a prediction about what will happen. But how do you turn that into something you can explain?

DAN: I have a funny example of this. It gets at this idea of what a belief is. You ask someone why they believe something and to report their belief, and then they give you something, they sort of barf out some words. What is that? What does that mean? What is the underlying thing that produced it? And it ties back into the idea of the illusion of explanatory depth. When I was in grad school, you write your dissertation and you work on it for a long time, then you have to defend. You have this panel of advisors who grill you privately after you give your talk and ask all these questions, and you try to prepare and have all your answers ready. This was a couple years ago. I just presented, I thought it went well, I was getting grilled, and I was trying to respond to this question and that question. I thought I was handling everything. And then this one advisor I had, he's sort of quiet the whole time and I was worried because he often likes to throw curveball questions, and he's someone with a philosophical background. And the topic of my presentation was people's beliefs about job skills. It was a research paper on when and where people believe that certain kinds of jobs are available in the economy or whether they believe that they have the ability to get new job skills. I was talking a lot about beliefs about different kinds of jobs, skills, related stuff. And finally, this guy raised his head and said, "Dan, you've been talking a lot about these beliefs and different things. What is your mental model of what a belief is?" And I was just like, "Oh, man, this is the ultimate curveball. It's ten minutes left in the defense, and he's dropped this giant philosophical bomb on me." And I managed to ramble something out about some predictive models in people's heads that predict some experience out in the world. And one of my other advisors saved me and jumped in and changed the topic at some point. But it was funny because it's an interesting question. And it's the kind of thing where you have an intuitive answer of what a belief is, but when you get asked this Socratic question, it's like, "Ooh, hmm, kind of hard to say," and you discover that your picture of it is relatively shallow.

SPENCER: Perfect segue back into the illusion of explanatory depth. And I totally agree with you that many of these seemingly obvious concepts — like belief or truth or things like that — they actually get really complicated when you have to try to explain the gears of them. I've heard you write about this idea of epistemic debt. You want to talk about what that means?

DAN: Yeah. It's what happens when instances of the illusion of explanatory depth kind of pile up on one another. I like this mental metaphor of an IOU. You think you have some informed opinion about refrigerators or love or truth or beliefs or whatever. But when you look under the hood, I picture this little post-it note that just says, "I owe you some informed opinion about this in the future." And that's fine. We can have an interesting conversation about when it's necessary or important or valuable to cash those in. It's unreasonable to expect that we're going to have complete models of every word we ever use, and everything we ever think. We have to be strategic about what things we flesh out and which we don't. But it's nice to be aware of which ones we've left unspelled out. Anyway, I picture my mind as being full of these IOUs and then, if you forget about them, it's easy to build IOUs on top of other IOUs, and get into an epistemic debt, a debt of knowledge or a debt of awareness of your ignorance. So you think you know how some policy positions work, and then you use that as a foundation to jump off into other policy positions. Again, philosophy deals with this all the time. It deals with these foundational words and concepts that we use to make sense of the world and tries to trace them back and cash those in. I heard a recent podcast with a philosopher at University of Chicago, Agnes Callard, talking about the project of philosophy as cashing in these terms that we use all the time to make sure that we can trace it back to something we can take firm footing on.

SPENCER: Yeah, I really like that. This also relates to the idea of writing out beliefs as a way of cashing in those IOUs. Do you want to talk about that?

DAN: Yeah, again, not an original idea but one that I found just really valuable for myself. You might think to yourself, "How do you find these IOUs? How do you decide what to do with them?" For me, writing out my beliefs, my attitudes and my positions about things — just like those people in the research studies did — is just a really powerful way of discovering my own ignorance, and deciding how much I want to change that ignorance. Maybe I probed into some issue and I realized, "I know really next to nothing about the US healthcare system. I'm not in a position to have strong opinions about this one way or another. Do I want to learn more about it? How would I advise someone else who wanted to learn about it? Am I just going to take one partisan perspective? What would I read? What would I do?" I could think about that. And then I could decide, do I actually want to engage in that project or do I have other things I want to learn about? I've decided for myself that I'm not the person to come to with questions about the US healthcare system. I'm pretty ignorant about that and that's okay. There are other things that I'm going to learn about, other things I'm going to focus on, and that's okay with me.

SPENCER: Should people avoid having an opinion on topics like that if they haven't done this exercise of trying to make sure they understand, at a gears level, how the thing works? It seems like a lot of people have very strong opinions on healthcare who most likely would not be able to really explain how the system works right now.

DAN: Right. Well, in theory, you want to be calibrated. In theory, you want to have a level of confidence in your beliefs that scales in proportion to the depth of your thinking and the accuracy of your thinking. And if you have a simple model, it's okay to not be confident about it. There's an interesting set of questions here around how to get a lot of bang for your buck in terms of having an informed opinion about something without needing to have a detailed model. This is the challenge of identifying experts in something. Let's say I want to learn about the US healthcare system and I want to have an informed opinion. I want to vote or donate or take some actions that I think will actually achieve some goal — like improving people's health outcomes equitably, or something like that — but I don't have time to really dig into everything. Instead, I just want to listen to some pundits, and do what they say is good. Well, how do I choose a pundit? How do I choose an expert without being an expert myself? What are signs or signals that experts show that might lead you to be confident in their opinions, or confident that they've done the work, rather than them doing the same thing you're doing and just listening to other experts? How do you know that the chain bottoms out somewhere?

SPENCER: Yeah, I think it's such an important question, because the reality is that we might say, "Oh, yeah, go learn about the topic. Go figure out your own opinion." There's so many topics that we just can't do that for. It's just totally unrealistic.

DAN: Yeah, exactly. And so what makes an expert smell right? [laughs] What makes an expert seem like, "Oh, they thought about this well"? For me, a shortlist might have things like, they seem emotionally calm about the topic. Obviously, it's possible to be passionate about all sorts of important things. There's value in that but, in my opinion, passion can make you partisan. So if someone is able to dispassionately explain their position and is also able, as you said with Rapoport's rules, to dispassionately explain an opposing view in a way that seems like someone from that other side would agree with, that's a really good sign to me, and amazingly rare, amazingly rare. People often just don't even seem to be trying to fairly represent the other side, as if that would be beneath them or something.

SPENCER: It seems like incentives are also really important. Not that someone couldn't have a totally accurate view and also have a very strong incentive to believe that view. But on the margin, I think we should assume that people who have a strong incentive to believe something are more likely to delude themselves or misrepresent information — even unknowingly misrepresent it — just the idea that people can believe anything if their paycheck depends on it. And I think that if you have, for example, someone who originated a particular theory arguing for that theory, that person might not be in the best position to actually evaluate the strengths or weaknesses really objectively, because their status and prestige and self-identity might hinge partly on that theory being true.

DAN: Yeah, it's hard to set up incentive systems that don't have these creeping problems that you described.

SPENCER: Absolutely. What do you think about an expert actually changing their mind? Because to some people, that might feel like flip-flopping. And at the extreme, if someone changes their mind every week, clearly, that person is not a reliable source of information. But on the other hand, it feels like, if they change their mind at least every once in a while on really big topics, that's sort of evidence that they're doing a process of actually trying to figure out what's true, rather than just arguing for what they already believe.

DAN: Yeah, I have this image in my mind of a guided missile. You want to see if this missile is approximating some target of the truth. If it's constantly shifting wildly to the left and the right, you might not have confidence that it's going to hit the target. And if it's never shifting, if it's just going in a straight line, you might be worried that it's not adjusting and updating with new information or new facts.

SPENCER: I love that metaphor, that's great.

DAN: There's almost an aesthetic that you want to see in discussion, where someone's acknowledging points, pushing back on some things, making little updates. That starts to touch on the work of Phil Tetlock on superforecasters (which clearly looms over a lot of this conversation): the habits of mind and practices that forecasters adopt in a situation where they are incentivized to be accurate, where they actually have skin in the game and their decisions are being scored. And they pursue strategies of little updates, like a guided missile that's aiming towards the truth.

SPENCER: Yeah, the skin in the game one also seems really important and I guess one way to phrase that — I know Taleb talks about this idea, I'm not sure how he uses it exactly — but one way to think about that is, do they actually lose something if they're wrong? Or if they're wrong, does it just not matter at all? Because, if they actually lose something when they're wrong — maybe thinking about this a different way — they might be more likely to be exerting self-skepticism and trying to consider the other side. It's one thing to say, "I believe that the government should do X." It's another thing to say, "I'm willing to bet a third of my life savings that, if the government does X, things are gonna turn out better," right?

DAN: Right. It's hard. A lot of the things that I'm really interested in and passionate about are tail risk scenarios. I work in this area of biological risk management. What if some disease breaks out and is really dangerous? Or what if some accident happens in the lab, what do we do about that? These are unlikely events, and a lot of things we really care about are these tail risk bad events. And it's really hard to score your accuracy on events that just don't happen very often. Taleb again, he's the black swan guy, so he's familiar with this challenge. How do you demonstrate skin in the game for something that just doesn't happen very often?

SPENCER: I think he would say you just can't predict these events...well, [inaudible] but that's my interpretation of him, that he thinks that there are these huge black swan events that are so unpredictable that your best bet is to try to put yourself in a situation where you not only are resilient to them, but maybe even gain from black swans, which is the idea of antifragility. Going back to talking about when to trust experts, there are a couple of things I wanted to mention about that, and I'm curious to get your reaction on this. One thing that we did research on is, what are the traits that seem to get people to the right conclusions? And what we came up with are these two personality traits, one we call skepticism — but we mean it in a very specific sense, which I'll mention in a moment — and the other we call seekingness. So the idea of skepticism is that you vet new ideas really carefully before accepting them into your worldview. So if someone says, "Oh, this treatment works for this thing," you're like, "Well, does it, really? Let me go read about that before I adopt that as a belief." We all have trusted sources. For any person, there's some other person that could say something, and they'd be like, "Oh, that's probably true, because that person said it." It's not so much that they don't have any trusted sources; it's that they just have a higher bar of scrutiny before adopting ideas. So that's the skepticism trait. And then the seekingness trait is that they actively go out and try to find ideas that are different from the ones they currently have. Maybe they read a wide variety of news sources, or they have friends from a wide variety of groups, and they discuss ideas and hear about those people's perspectives, etc. And the idea is that these traits — I actually expected them to be negatively correlated, but when we studied them, we found that they essentially had no correlation to each other, which is fascinating — are not necessarily in tension with each other, even though you might expect them to be. And it seems to me that they both help each other in important ways. If you're highly seeking, but not skeptical, then you might just adopt a whole bunch of bad ideas from a lot of different places. And I think this is a critique that new age movements will sometimes get, that they're so open-minded that some of the ideas they bring in are really cool and helpful, but then they'll also pull in ideas that are just wrong, or don't really help, or are just nonsense. And then, on the flip side, if you're really, really, really skeptical, but you don't have the seekingness attribute, then you're just solidifying your worldview because you're not letting in new ideas; you're just kind of stuck. And I think sometimes people make critiques of the new atheist movement or the skeptic movement around this: they're critiquing religion, saying psychic powers don't exist, homeopathy is bullshit, and so on, but they're not necessarily asking, "Okay, well, what else? What are the harder questions? What are the bigger ideas? What are the more complicated things to figure out?" So at the micro level, at the level of the individual person, it seems to me that these two traits together combine to make someone better at figuring out the truth, and therefore might be things you'd want to look for in an expert if you can identify them.
But then at the community level, the ideal community might be one that combines both these traits, so it takes in a lot of ideas from other places, but scrutinizes them before adopting them wholesale. Any thoughts on that?

DAN: Yeah. I love that. I think that they go together like peanut butter and jelly. They seem intuitively really useful for tracking the truth. The thing that comes to mind (and I'd love to ask you about) is, to what extent do you think that skepticism and seeking are domain-general versus domain-specific? In other words, I might be very skeptical, or I might be very curious and open-minded and seeking new ideas in one domain of knowledge but, in others, I might be quite different.

SPENCER: Good question. And I think I'll answer it on behalf of all personality traits simultaneously — because my answer for the skepticism and seekingness traits is sort of the same as my answer for personality more broadly — which is that I think, if we actually had to break down someone's personality, the zeroth-order approximation is that all humans are the same. And that's totally false but, on the other hand, humans are more similar to each other than they are to bears or dogs.

DAN: Or rocks.

SPENCER: Or rocks, exactly. So that's like the zeroth-order approximation is just like the average...

DAN: We're done here.

SPENCER: Yeah, exactly. Then we can add on like a first-order approximation that is better than that, which is to say, people have stable personality traits. There are conscientious people that are always organized and perfectionistic, etc. And there are agreeable people that will always be compassionate and won't contradict you and so on. And then the second-order approximation is that, well, in fact, our personalities react to our environment. You could say there's an interaction between personality and environment. Yes, this person is usually agreeable but there are some things they can't stand, and if you mention that topic, they're gonna get really pissed at you, and they'll make you feel bad, and not act compassionately. So that's my answer, and then there are successive levels of approximation, and they're each a useful viewpoint, but it's like going from Newtonian mechanics to quantum mechanics or something like that.

DAN: Right, and then the question is, how much more predictive power does quantum mechanics buy you on top of Newtonian in what situations? Or how much extra power does the situationist second-order layer buy you on top of the first-order layer? And for me, my hunch would be that, I like those levels, I agree with that. I would suspect that one big part of the (quote) "environment" that layers on top of the first-order to get you to the second-order is your social environment and your community. In terms of your seeking and your skepticism, I would imagine that, at first approximation, maybe you're very skeptical or you're very seeking, but you look around you and there's some topics that it'd be really inappropriate to be skeptical about, or it'd be really inappropriate to seek new ideas around; those are going to be off-limits. And for other people in different social environments, that's not the case. It'd be really interesting to me to look for that distinction, or that level — from the first to the second — and see how much more purchase we could get on people's truth-seeking ability by looking at what's taboo for them to question or what's taboo for them to think about.

SPENCER: Absolutely, and to make this even more complex, there are probably personality traits around the extent to which people are socially influenced. So if you're someone who's heavily socially influenced, then in the domain of topics where there's strong social pressure to believe a thing, you'll probably just believe that thing and you won't be skeptical or seeking around it (well, at least not skeptical with regard to the ideas already believed by your peers). But maybe in another domain that's orthogonal to the social pressure, where the social pressure doesn't care whether you believe the thing or not — has no skin in the game there — maybe you'll actually be very skeptical, very seeking. So that's on the one hand, and then you can imagine a person who's just very not influenced by social forces — and I do think people like that exist, though I don't think they're that common. I think the default thing is, we are extremely socially influenced — but those people, actually, maybe they're better truth-seekers in general.

DAN: Right. I think of Robin Hanson here as maybe an example of people who are low in agreeableness. They're willing to just say the thing that other people are maybe overlooking or missing, and they're just not as affected by social pressures, and they serve a really important purpose in a group. I think I saw a tweet by you a while back that was talking about your appreciation for people who are on the lower end of the agreeableness spectrum and the service that they provide to groups in society, and I definitely agree with that.

SPENCER: Yeah, totally. Well, a long time ago, I moved from thinking there are some better personality traits and some worse ones, to really changing my view on that. Do you want to say something about that?

DAN: It just seems intuitive that, if people indeed differ on these different spectra of personality (and maybe we should name what we're talking about here). I think we're probably both picturing things like the big five system of personality, which is a well-researched personality model that's existed since, I think, the 70s or so.

SPENCER: It's five basic personality factors: agreeableness, openness, conscientiousness, extraversion and emotional stability (which is often measured in reverse as neuroticism).

DAN: Yeah, exactly. Especially given the names of some of them, like neuroticism, who would want to be neurotic or who would want to be disagreeable? It seems like these are the good ends of the spectrum and the bad ends of the spectrum. But if you take a second thought about it, at least my take was, it seems likely that there's value for people to differ on some of these things if they exist in some community that has some mix of different qualities. You could imagine situations where it might be disadvantageous for individuals and good for their group. Or depending on the mix of other people around them, it might have different results. And even just telling stories, low emotional stability seems to be linked to people ruminating and worrying about possible futures, and you could imagine it being really valuable to have someone in your group who's worrying about the future and anticipating ways that things could go wrong, or someone who's disagreeable and flagging ways that we're getting caught up in groupthink, or someone who's maybe low in conscientiousness (which is a self-control and grit-type measure). And you can imagine someone like that helping the group to deviate from some rigid plan that it's been following in order to find something that's valuable off to the side.

SPENCER: Yeah, and going back to disagreeableness in particular, my feeling is that in the intellectual realm, it's the disagreeable people that tend to be willing to publicly stand up and be like, "That's a bad idea. I disagree with that. That's wrong." And if you have a bunch of agreeable intellectuals (or even just average in agreeableness), they'll be like, "Oh, well, maybe that has these flaws." But you know they'll beat around the bush enough that the thing that needs to be scrutinized doesn't necessarily get scrutinized. And as someone with quite an agreeable personality (I think I'm in the 78th percentile), I'm actually really grateful to the people who take on that role (which I certainly don't want to do myself) of very brutally challenging bad ideas that really should be challenged. I'd rather do it in a softer [laughs], gentler way but that's not always what's needed. That being said, I will say that I think, at the extreme ends, personality traits often do become bad, and here, I don't mean morally bad; I mean that they just can be harmful for the person that has them. And so for example, I think, a really low level of emotional stability or a really high level of neuroticism can be extremely unpleasant to the person who has it and can wreck their life. But I think even the traits that we generally think of as positive (like conscientiousness), at an extreme level — you know, if you're at the 99.9th percentile — it actually might be very hard to live that way because it might mean you need everything to be in its perfect place, everything to be perfectly organized, and have very little tolerance for any kind of violation of the patterns you're used to, or the rules, or that kind of thing.

DAN: Yeah, agree.

[promo]

SPENCER: I don't know whether you consider this a personality trait or not — I'm curious to hear your thoughts on it — but do you want to talk a little bit about growth mindset?

DAN: Yeah, I'd love to. For context, my background is in social psychology and I worked with Carol Dweck, who's the originator of the growth mindset concept and the researcher who laid the foundations for that area. Growth mindset is not a personality trait. I would call it a belief or a set of beliefs. And it's the belief that one's intelligence or ability in a specific domain or in general, is malleable and improvable with effort or the right strategies. And the opposite idea is a fixed mindset, the idea that your intelligence or abilities are fixed. I actually pulled up some of the survey questions that people use to measure growth mindsets. Let me give you a couple of these.

SPENCER: I think that's really helpful, yeah.

DAN: So you ask people how much they agree or disagree with the statements on a spectrum. And here are the questions: one, "You have a certain amount of intelligence and you can't really do much to change it;" two, "Your intelligence is something about you that you can't change very much."

SPENCER: Those would be low growth mindset if you said yes to those, right?

DAN: Yeah, I think some of them are reverse coded. And then three, "You can learn new things but you can't really change your basic intelligence." So these are assessing a fixed mindset, and then if you reverse your score on them, you're assessing a growth mindset.

SPENCER: Can you do a quick comment on those particular questions? Because something that's bothered me for a long time about those questions is that they're talking about intelligence. And the reason that bothers me is because I think a lot of people think of IQ as being intelligence. And then they think about the research on IQ and the difficulties people have had building programs to help people change their IQ. And if they're thinking in that way, they might say, "Yeah, well, if intelligence is IQ, I don't know how to change my IQ." And that's a pretty reasonable thing. My understanding though, is that the more updated versions of her scale actually broadened it a bit, which made me happy because I think it took it out of just the mere intelligence realm. Any thoughts on that?

DAN: Oh, interesting. I haven't seen updated versions of those scale questions. But yeah, I have the same intuition, I think that intelligence means different things to different people. I think to a lot of people, it does mean something like IQ. To other people, it might just mean something like general problem solving ability or something like that.

SPENCER: Right. In that case, it makes a lot of sense to me, the way it's worded.

DAN: Yeah, and depending on the way that you scope that word, intuitions around growth mindset might be quite different. Like you said, there's a lot of research on IQ and on g — this hypothesized underlying statistical factor that explains people's differences in performance across suites of different mental tests — and that g factor seems to be difficult to affect, barring some kind of childhood nutrition stuff and extreme circumstances. And so if you go that route, you might think, "Well, yeah, intelligence is hard to change." On the other hand, if you focus on particular domains, or if you focus on broad intuitions about your ability to navigate through the world successfully — which I think a lot of people would use as a common sense day-to-day definition of intelligence ("Wow, this person is really smart. They realize that they should do this option instead of that option. They take different strategies.") — that seems a lot more malleable. And I think that leads into the research on growth mindset, which maybe I'll just touch on a bit. The first ideas were articulated in, I believe, the 80s, and then steadily a series of research studies fleshed out this idea that people have different implicit beliefs about the nature of intelligence and ability, that it might be more changeable or more fixed. And then that led to intervention studies where you would try to deliver different types of messages or activities to change people's intuitive beliefs or theories, and then try to see if that trickles down into changes in behavior or learning. And so in classroom settings, in particular, the theory is that people with a fixed mindset react a certain way in a new learning situation, say they're learning math...

And you start learning, and maybe it's easy at first, and you think, "Oh, good, I've got it." Then at some point, you hit a wall, or you find some challenge and that is really threatening because it might prove that you don't have the magic math ability. Then the natural strategy to take at that point is maybe to hide that you don't have the ability. There's not much you can do about it, maybe you want to avoid feedback or attention from the teacher, because there isn't really much of a point. It might be embarrassing if you don't have the ability, or at least just very discouraging. Whereas, someone with a growth mindset might hit a wall with math and think, "Oh, I need some help. I need to try harder. I need to use some different learning strategies. I need to take some different path in order to learn it." It's easier to interpret difficulty as a sign that you need to do something differently, rather than interpreting difficulty as a sign that you don't have the ability. So this all cashed out in a series of applied studies and educational settings, from elementary school on up where researchers would try teaching students about a growth mindset or conveying these ideas in different settings and in different ways and looking for results on learning outcomes.

SPENCER: The way that I think about it (and that was really well said), but just to add a slight re-summary of what you said, that you can view feedback — let's say, you get some test scores or something like this — as either an indication of how good you are at the thing, or as an indication of how much harder you need to work or how much more you have to learn. And so if you view it as an indication of how good you are, then it might be really demoralizing if you do badly, because you're like, "Oh, that's just telling me I'm bad at the thing." Whereas, if you view it as an indication that you have more to learn, or need to work harder, then you're like, "Oh, I did badly. Okay, I need to go try harder next time," or "I need to go learn these things I don't know." Do you think that's a fair summary?

DAN: I think that captures a lot of the big idea of growth mindset. I think that it can apply in different domains. So you could think more generally about yourself as a smart kid or a dumb kid; you can apply these terms to yourself. I think that there are sometimes linguistic cues that peers and teachers can give, that might lead you to believe that your abilities are more fixed or more malleable. So for example, a teacher might say (they might be trying to be nice and supportive and say), "Oh, you didn't do so well on this math quiz. But you're good at English." And it's like, "Oh, so I can't get better at math, so English is my thing but math is not my thing." It can be easy to infer some kind of implicit theory about the nature of ability from that, and then you go on to interpret things through that lens.

SPENCER: Do you think it's better to have a growth mindset? Can we state that?

DAN: That's a great question. There is some debate and controversy about...I'll phrase it as whether (quote) "a growth mindset works." Is a growth mindset true? Does it work? And is it useful, as you ask? I think that those questions are maybe trickier than they initially seem to people. Sometimes I'll see people share a study and say, "Growth mindset has been disproven," or "Growth mindset, it works." And you have to ask yourself, what does it mean if this belief works? If you think of a growth mindset as a belief, what are you actually saying? I think there are maybe a few ways you could break that down. You could ask yourself whether the underlying intuition of the idea is true, whether indeed people can improve their abilities, or improve g, or whatever it is that the belief is claiming. And then you could also ask, "Do these survey measures predict anything?" That's sometimes something that people mean when they ask, "Does growth mindset work?" Does it predict learning outcomes or does it predict things that we care about? And then maybe the last thing is, "Does growth mindset work?" meaning, did my pet intervention to teach people a growth mindset and help them get better grades, did that work? And those are all really different things.

SPENCER: That's such a good breakdown. And that illustrates to me so well why I think we need more philosophy in psychology, because those are distinct areas of expertise, philosophy and psychology. Yet, it seems to me, in cases exactly like this one, we need a philosophical mindset that's different from the sort of social science mindset to answer questions exactly like this. What do we even mean when we say the growth mindset works? What are the different possible breakdowns for that? And I think that psychologists are maybe a little bit too reluctant to actually do the philosophy needed to disambiguate this stuff.

DAN: Yeah, philosophy might even be over-ambitious or something. Even just if someone cashes out what they mean by "growth mindset replicates" or "growth mindset doesn't replicate." What's replicating? An intervention effect? A survey relationship? And then most broadly, I think one thing that's somewhat made it hard to talk about some of this stuff is that some early growth mindset advocates tended to overhype it as the solution to all problems in education. I don't think Carol Dweck herself is one of those people. But I think that it was easy to see this as a kind of, 'Well, if you just believe in yourself, then anything is possible' kind of message, and that was never the actual literal meaning of a growth mindset or anything that Carol said. I think her claim, if I could try to speak for her, is something more like, "Everyone has the possibility to significantly improve. And if they're aware of that, that should help them improve." It's a pretty intuitively plausible idea worthy of investigation, not, "Anyone can do anything, anytime." So I think one nice starting point for this is just that there's a lot of common sense to the idea. There's a sense in which it's probably not particularly harmful to convey the idea that you could get better at things. Maybe having someone believe something that's totally off-base could lead them to waste time and effort doing something that they're not likely to be able to do. But I think the stakes have been artificially inflated for growth mindset, both positively and negatively. It's not going to solve everything in education. And it's not going to just crush students' hopes and lead them to think that they can do things that they can't possibly do. The empirical effects — the effect sizes of growth mindset interventions — tend to be small in some sense but practically significant, considering that they're basically free to do. Depending on the intervention, they're basically these little low-cost reading and writing activities that students do. And you can just think of this as a distributed way of teaching, conveying some wisdom about the nature of your mind, that you can get better at things. I think that's a reasonable starting point to take when you're thinking about stuff like growth mindset. It's not gonna be a huge deal one way or another, but it's cheap, and it's a safe bet.

SPENCER: Yeah, I agree. And I think that we underestimate the value of really good bang-for-the-buck interventions, even if they don't have a huge effect size, that might be much more effective than the average hour or five hours of a student's time, when we think about how many hours a student spends just learning random stuff in school. Imagine we had 20 interventions like growth mindset, each only took a couple hours to teach but had effect sizes that are reasonably good for the amount of time investment. Stacking them together, that could actually change someone's life. I don't think the idea of growth mindset changes that many people's lives. I think on the margin, it helps a little bit, but it's a really good use of an hour.

DAN: Yeah, totally. I want to make three quick points that relate to that. One is that, intuitively, we should expect that growth mindset messages will have a really large variance in who they (quote) "work for" and who they resonate for. Some people already have a growth mindset. For some people, the idea of being able to get better or not get better at something is maybe irrelevant to their performance at school because it's just a habitual thing. They find it fun or interesting, or they want to please their parents or who knows what, but growth mindset just might not play as much of a causal role in their decision to work hard in school. Whereas, for other people, it's this critical thing. They're grappling with this question of, "Am I a math person?" or "Am I smart?" This growth mindset message might be a huge relief to them. As an example, for me, there's some interesting related work done by a researcher named Paul O'Keefe about the idea of a growth mindset of interest. In other words, when you learn about a new field or you get into a new hobby or practice, should you expect it to be immediately interesting, like a passion that you connect with and discover? Or is it something where interest grows over time as you get better at the thing and learn more about it? And I know for myself, this was such a huge deal in my own mind growing up, where I was just looking for my passion, looking for my superpower, looking for my thing that I just immediately would know that I was good at, and I liked it, and everything would just flow. I never really found that thing and I think it caused me to give up prematurely on a lot of hobbies and interests because, at some point, they would get hard, and they weren't immediately fun. So I wish that I had heard this idea of a growth mindset of interest when I was a kid. Other people might feel totally differently, but again, that might be one of those hour-long messages that you would distribute in the population. For a few people, a light bulb would go off, and for the rest, nothing would happen, and it'd be really hard to detect effects because they'd be so heterogeneous. But it could be really valuable.

SPENCER: I think that's a key point and it's so often overlooked. So much of social science is based on looking at average effects, and you say, "Oh, the average effect size is D equals whatever." But really, what might be happening under the hood is that a bunch of people are completely unaffected by the thing; they get no benefit. A small number of people might even be negatively impacted by the idea. And then there might be some people for whom it's life-changing because that was a sticking point for them. And I think that the standard way of doing science (where we just look at averages) is poorly-equipped sometimes to deal with the idea that you might need to throw a lot of interventions at the wall to find the one that's actually going to unblock whatever's blocking that person from having a better life.

DAN: Yeah, exactly. Is your underlying model of the situation that people need to be pushed a little more to nudge them along some spectrum, or is it the idea that they need a complete set of building blocks in order to achieve some goal? For a few people, you're gonna give them that last building block or that last puzzle piece that finishes the puzzle and allows them to move forward significantly. So your underlying picture of what's happening and what is needed should affect your expectations about the pattern of results that you get, whether it's like, "Oh, an average bump-up a little bit," or is it this qualitative shift for a few people?

SPENCER: So one way to visualize the thing that you were just talking about (which I think is really critical) is to imagine that someone's trying to get to a particular place and there are two ways that you could help them. One is that they might be stuck because they've come to the end of a road and they can't move forward. And there, you might be able to just get them going much faster, if you could find them a way out of that dead end. And that would be one in 20 students, or one in 50, or whatever, that hears about growth mindset, and suddenly, it unlocks a barrier that was really holding them back, like every time they took a test and didn't do well, they would be crushed, and assume it's because they were an idiot and they could never learn or whatever. The other model is, someone's trying to get to where they're going and they're not stuck at a dead end, but they just need better sneakers and, if you give them better sneakers, they're just gonna be 5% faster now. And I think both of these things happen, and some interventions are more like the 'better sneakers' kind — trying to accelerate a bunch of people a little bit — and other interventions get people unstuck, and then there's a kind of mix of that. Just looking at the average can distort what's going on, or at least mislead us.

DAN: Yeah, I think another point that makes it challenging to grapple with growth mindset research is the variability in design and effectiveness of different kinds of interventions. This relates to the sneakers-versus-dead-end concept: you might think about some interventions that are more like training, where it's possible to practice more or deliver more of the intervention, and that delivers a correspondingly greater benefit. And there are other interventions that might be more like persuasion, or messaging, or belief change, where you say something clearly once, or you give one nudge, and it causes a change in someone's belief. But if you say it ten more times, it might have a completely different effect, because now this person is thinking to themselves, "Why are they telling me this ten more times?" It changes the meaning of what you're saying. And I think that growth mindset interventions are just like that where — unfortunately for our ability to do rigorous, generalizable social science — the social world is complicated and messy and hard. And we have to make these interventions in a craft-based way, where we make it and we just think, "This feels right, this feels good" — we're not overstating the point, we're not understating the point. And people differ in their sense of what a good intervention looks like or feels like. I think that the craft of making good psychological interventions still has a lot of art to it, as well as science — or you might say implicit knowledge. That means that the effect sizes of interventions are gonna vary a lot. And so you might see a study that says, "Growth mindset didn't replicate," or "Growth mindset did replicate, with huge effects." And it just comes from differences in the design of the intervention and the way that it landed among this particular group of students. It's just that social science is hard, and the effects are heterogeneous. And it's difficult to cash out your intuitions about interventions.

SPENCER: I think that is vastly underestimated. A bad version of an intervention will essentially never work, no matter what it is. Imagine you're giving a really effective drug to people but you administer it in the completely wrong dose — you way under-administer, or you actually administer the wrong drug. So the idea that a single intervention failing proves that the underlying thing doesn't work is just obviously wrong. At the same time, though, what's dangerous about that thinking is that it's an escape card you can always play. It's like, "Well, my intervention is not bad. It's just that that was a bad implementation of it." So I think both of those things are true. On the one hand, a bad version of an intervention will pretty much always fail, and that doesn't prove that the underlying idea is wrong. On the other hand, you don't want a world where things aren't disprovable — where you can never say an intervention doesn't work, because someone can always just say, "Oh, that wasn't a proper implementation." Any thoughts about how to navigate those two issues that seem to be inherently in tension?

DAN: I think the best that we can do (off the top of my head) is to update in smaller bits — to be less inclined to dramatically change our opinions about the effectiveness of an intervention, or the existence of some concept or construct, based on a single study. If it was a poorly designed intervention in this particular case — well, okay, that's possible, but it doesn't mean we should throw the whole thing out. Conversely, if it seemed particularly well-designed, maybe it was a fluke; maybe there was a particular population that it worked well for. I think it all calls for a certain degree of humility about our ability to update our beliefs about the social world.
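[Editor's note: a small illustrative calculation, with assumed numbers, of what "updating in smaller bits" can look like. If you start at 50% confidence that an intervention has a real effect, one null result from a low-powered study should only nudge you down to roughly 39% — far from throwing the whole thing out.]

```python
# Hypothetical Bayes update after one null study (all numbers assumed for illustration).
prior = 0.50              # prior probability the intervention has a real effect
power = 0.40              # chance a study this size detects the effect if it's real
false_positive = 0.05     # chance of a "significant" result if there's no real effect

def posterior_after_null(prior, power, false_positive):
    """P(effect is real | the study found nothing), by Bayes' rule."""
    p_null_if_real = 1 - power           # real effect, but the study missed it
    p_null_if_fake = 1 - false_positive  # no effect, and the study correctly found nothing
    numerator = p_null_if_real * prior
    return numerator / (numerator + p_null_if_fake * (1 - prior))

print(f"Posterior after one null result: {posterior_after_null(prior, power, false_positive):.2f}")  # ~0.39
```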

[promo]

SPENCER: Have you looked into the whole power posing debate?

DAN: Oh, a little bit. What's the current state of affairs with that?

SPENCER: Well, I find it super interesting. Because basically, this research came out by Amy Cuddy and her colleagues, claiming that power posing — basically, power posing is the idea of assuming certain powerful body postures, like, you can imagine, the posture Superman might have, with his hands on his hips, standing up tall, that kind of thing, or other kinds of very confident, powerful body postures — will actually have all these positive benefits. For example, the idea that it might make you perform better in a high-stress situation, or that it might change your cortisol levels, or cause you to be willing to take more risks, things like this. And so the study came out, and then there was a TED talk that mentioned the study, which was an incredibly widely-viewed TED talk; I think it was one of the most popular TED talks at the time. And then a bunch of people jumped on this research and said, "Hey, this research was not very well conducted." And people tried to run replication studies, and they failed to replicate a bunch of the effects. In particular, they failed to replicate that it changed people's cortisol levels, and I think they failed to replicate the change in people's risk-taking. Also, I think one of the original authors on one of the first studies came out and said, "Yeah, I actually think that our methods were not good and I no longer buy this research," even though they were involved in creating it. So what I think then happened is that there was a big swing from, "Oh, power posing is really cool and exciting. And what a great little intervention that you can use before going into a meeting, or before giving a talk, to just power yourself up," to "This is all bullshit and garbage." And this struck me as a really strange kind of dichotomy, and the reason it struck me as strange is because it just seems totally obvious to me that body posture has a small effect on our mood. I don't know to what extent you've observed this, but just in your own daily life, shifting your posture — upright shoulders, standing up tall versus a really crumpled-over posture — has an effect on your mood. Do you perceive that in your life? Or is that not something you feel is true about yourself?

DAN: I don't perceive a dramatic difference, to be honest. I think I notice an effect on my physical comfort, but I don't find myself feeling like Superman, per se.

SPENCER: Right. So this brings me to my hypothesis about this research. My hypothesis is that some people have this effect much more strongly than others, which is related to what we were saying before. And for people who have a strong reaction from their body posture, it's just so obvious that this is a real thing, because they can just do the self-experiment like, "I'm gonna use this posture. Okay, I feel a certain way. I'm gonna introspect on how I feel. Oh, yeah, I definitely feel lower now. Okay, I'm gonna use this posture. Oh, wow, I'm feeling more confident, cool." And they can just do that self-experiment over and over again, and it's abundantly obvious [that it works for them]. Other people either don't have the effect at all, or have it to such a low degree, that they're like, "Well, why should I believe this thing?" So actually, this was always very baffling and surprising to me. So I did two things: first of all, I looked at whether the replication studies actually failed to find any of the original effects, and it turns out they often did replicate the feeling-of-power effect. In other words, people's self-reported feelings of power actually were higher in most of the replication studies — not all of them, but most of them. What the replication studies were failing to replicate were things like risk-taking behavior and cortisol levels. But funnily enough, I didn't have a particular prediction about those things — I don't know whether those effects would occur, or who the hell knows? Is it even strong enough to make a measurable change? So then I actually ran my own replication study. And indeed, I found the self-reported power effect: power posing did make people report feeling more powerful. And I actually tried a few different body postures — ones that were kind of neutral, ones that were more powerful, ones that were weaker — and you got exactly what you'd expect: the more powerful-seeming the posture, the more powerful and confident people reported feeling, the better their mood, and so on. So this basically led me to think that, essentially, there probably is an effect, but it's a self-reported effect. It makes people believe that they feel more powerful. And we could debate how exactly to interpret that, like, "Well, is that a placebo? What does it even mean to be a placebo if you're just asking someone how they feel?" It also seems like people vary on this trait; not everyone is as affected as everyone else. And finally, it seems like the effect is small; this is not a huge effect. So the idea that this is gonna radically change how well you do in a talk is probably not true. But if you're one of the people on the higher end of this reactivity to body posture, yeah, maybe it is a good idea to do a power pose right before going into a difficult situation. I don't know. Any thoughts, reactions on that?

DAN: No, I think you're raising perennial issues about individual differences in responses to interventions, about the challenge of picking a meaningful outcome measure, and about the circumstances under which changes in your individual feelings and attitudes do or don't trickle down into changes in behavior. That's interesting.

SPENCER: It also seems to me that people have a bias towards these (quote) "objective measures." People really like measures like cortisol, because it feels so 'science-y.' What could be more scientific? You're giving people a blood test. But the problem is, if you actually think about something like cortisol, first of all, these measurements are often much less accurate than people realize. I've just been keeping track: every time I go to the doctor, I write down my cholesterol level or my blood pressure whenever they take one of them, just because I'm curious to track it over time and notice patterns. And the amazing thing is how much it varies from one visit to the next. When you're told that this is your blood pressure, it's almost like they're telling you it's an immutable trait, when in reality, it's affected by so many things — what happened that day, how you're feeling — and it just naturally fluctuates.

DAN: Whether you took the stairs to the doctor's office or something.

SPENCER: Exactly, so these objective measures often have more error in them than people realize. But the other thing that's even more pernicious, I think, is that they often don't have the interpretation people want. For example, what's a better measurement of whether you should power pose before you go into a stressful situation: the cortisol level, or whether you feel better? [laughs] And I would argue that whether you feel better is probably more important than the cortisol. I don't know, is that important? What's the right change in cortisol that actually means something? Who knows?
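[Editor's note: a rough illustration, with made-up numbers, of the measurement-error point. If visit-to-visit noise in an "objective" measure like blood pressure or cortisol is large relative to differences between people, a real effect gets diluted and a modest study will often miss it.]

```python
# Hypothetical simulation: noisy physiological readings dilute a real effect.
import numpy as np

rng = np.random.default_rng(1)
n = 200                   # hypothetical participants per group
true_shift = 2.0          # assumed true effect of the intervention (arbitrary units)
between_person_sd = 5.0   # stable differences between people
visit_noise_sd = 10.0     # visit-to-visit fluctuation (took the stairs, bad night's sleep...)

def one_noisy_reading(shift):
    true_level = rng.normal(100 + shift, between_person_sd, n)
    return true_level + rng.normal(0, visit_noise_sd, n)  # a single noisy reading per person

control, treated = one_noisy_reading(0.0), one_noisy_reading(true_shift)
pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
d = (treated.mean() - control.mean()) / pooled_sd
print(f"Observed standardized effect: d ≈ {d:.2f}")
# Without visit noise this would be d ≈ 0.4; with it, d shrinks to ≈ 0.18,
# which a study of this size will often fail to detect.
```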

DAN: Yeah, I think often those cortisol levels are included in those studies to make a more academic point about the power of beliefs to change physiological variables — this is a social psychologist saying, "Look, it's so interesting that the way that we think can affect things like our blood pressure or cortisol levels." And there's a whole set of research results around construal of stress — construal of the worry, the butterflies in your stomach that you might feel before a test. If you think of that as a sign that you might do badly, you might feel one way. But if you construe those stress feelings as a sign of your body gearing up to face a challenge, you might construe them in a different way, and I believe there have been results showing differences in [inaudible] cortisol levels and (I think also) test performance. This is research by Jeremy Jamieson. Things like cortisol are interesting from an academic perspective to make this point that beliefs matter and they trickle down not just to behavior, but to physiology. But when you're actually using this from a practical perspective, yeah, it's not necessarily the case that physiology is going to be the most important thing to look at.

SPENCER: Right. I was also gonna add that, while it's really cool when we can see a measurable biological change from something psychological, I also think we should be totally unsurprised by it. In fact, when the effect is real, it's often a better indicator of how cool our measurement tools are than anything else. We know that every thought you have changes the brain in some way. And we know that your emotional state changes chemicals in your body. So the fact that we can measure those changes is really cool. But should we be at all surprised that these changes are happening? This is exactly how we know the brain works.

DAN: I think one underappreciated and bittersweet challenge of growth mindset research is that growth mindset as an idea has been tremendously successful. It's gone viral across the US. It's been out for a while. You can go to any school of education in the country and it'd be hard to find one where growth mindset is not mentioned. Often when I tell people that I did research on growth mindset stuff, some people have no idea what that is, but a lot of people will say, "Oh, my God, growth mindset, I can't get away from this idea. It's everywhere." So among certain segments of the population who do education stuff, it's everywhere. It's in corporate training, it's in therapy settings. And I think that actually has an effect on the population, and then has an effect on the ability to study growth mindset that's hard to capture. Growth mindset is sort of in the drinking water in a lot of schools, so to speak. And that could have a couple of effects. One is that the average level of endorsement of growth mindset might go up if students are hearing it from different places. But another, maybe more cynical, perspective is that students are hearing that, "Oh, growth mindset is the thing that you're supposed to say. It's the thing you're supposed to endorse." Your teacher might have some poster on the wall talking about it. They might have had some professional education course mentioning it, and they give some superficial presentation about it in their class that might not actually be backed up by their behavior or the way that they treat students. So they might actually act in ways that presuppose or convey that some students have more potential to learn than others. But students might learn that growth mindset is the right thing to say. And if you ask them a survey question about it, they're supposed to say, "Oh, yes, intelligence is malleable."

SPENCER: Students are experts at figuring out what teachers want them to say.

DAN: Oh, yeah. I think if you think back to your school days, you can remember just really being encouraged to learn what the teacher wants to hear, saying what the teacher wants to hear. And I worry that growth mindset research has to grapple with that. And it's a victim of its own success. As it gets disseminated out, it needs to preserve the quality and accuracy of implementation, and it needs to resist this challenge of students just saying what they think we want them to hear.

SPENCER: It could also be a problem where, if kids have already assimilated the idea, adding an intervention on top may not do much. It's like, if everyone's already taking a medicine, giving them more of the medicine may not provide extra benefit. I feel like there are just so many complex issues to untangle, and of the two perspectives — "Oh, growth mindset's amazing, everyone has to learn it," or "This is total bullshit" — in my view, probably neither is correct. The reality is some really complicated thing in the middle, where it's a somewhat helpful intervention for some people, maybe occasionally life-changing for a small percent, but there are also all these other complicated factors that make it context-dependent. It turns out the world's complicated.

DAN: Right, turns out the world's complicated. And then in a complicated world, we still need to make a decision to do something. And in that case, our decision should rest not only on whether it (quote) "works" and we have some definite evidence, but also how difficult it is, how costly it is, and common sense. And I think when you put those things together, growth mindset ends up looking pretty darn good because it's so cheap. And it seems to have little evidence of a downside and it might be good for people so I'm on board with your proposal of sprinkling these little hour-long-or-less interventions throughout the school system and looking for outlier wins.

SPENCER: To finish up, I just wanted to quickly discuss with you this broader question of how we can use social science to do good in the world. Any thoughts that come to mind about that?

DAN: Yeah. I think an important first question is, who is we? "We want to do good," meaning you and I, or meaning...

SPENCER: Are you sure you're a psychologist, not a philosopher?

DAN: [laughs] Or an activist, right? Who's the person or group with the power and desire to do this? One could hear that question, I think, in a really noble and inspiring way. But it's also a little scary. Can we understand how humans work well enough to exercise our vision of what a (quote) "good world" looks like? It seems very plausible to me, in theory, that social science could improve the world. Physical science, in general, has allowed us to control the physical world. And to the extent that we can understand patterns in our social world and model humans in predictive ways, then we could use social science to improve or change the social world. I think that raises the hackles on my neck — and the necks of many people — about the possibility of some kind of authoritarian social control, and there's a history of that in social science. I don't think social science is nearly effective enough to do that, and worries about it have been greatly overstated — from social media and tech companies today, back to Edward Bernays and public relations in the early 20th century. I don't know if you're familiar with this, but Bernays was, I think, the nephew of Sigmund Freud, and he's considered the founder of the field of public relations. He really innovated this idea of mass marketing targeted to different groups in order to influence group behavior. And he had some impressive wins and had this vision of social controllers influencing world events, and I think it's a scary worldview. On the other hand, we have people controlling physical systems to make massive changes in the world, many for better, some for worse. So it's not exactly a new problem.

SPENCER: Yeah. This brings up something that I think is really important, which is the distinction between helping people achieve their own goals and pushing people to achieve your goals. I think social science is ethical when it's helping accelerate people toward their own goals. And in a business context, of course, the business needs to try to achieve its goals, too. And that means looking at the Venn diagram — the intersection of what the person is trying to achieve and what the business is trying to achieve — and working within that intersection is how the business can use social science in an ethical way. To me, that's an important thing to think about. Unfortunately, sometimes social science is not used in this way. For example, a company might want people to spend 5% more time just scrolling on their newsfeed. And if you asked the person, "Would you actually want to increase the amount of time you spend on your newsfeed?" they might say no, but in the moment, they're being pushed to do it through a variety of very subtle forces acting together to influence them.

DAN: Totally. I think you bring up a good challenge of inferring what people (quote) "want," what their goals are, and what they report as wanting or act as if they want at different times. There's a long tradition of tension around inferring people's goals and values across the social sciences, especially psychology and economics. And I think one other challenge that arises with this idea of helping people achieve their own goals — though I really like the idea and the intuition around it — is that it still gives you wiggle room around whose goals you want to help achieve. So if you have some vision of social control and you want to manipulate the world in some way that you want, you could help a certain group of people achieve their goals but not help some opposing group achieve its goals. You could try to asymmetrically help some people and not others. And I think that that can end up resulting in similar concerns.

SPENCER: I believe that idea I mentioned a moment ago is from Rob Haisfield. I just want to give him credit for that. In terms of your question, I think about this idea that your right to swing your fist ends at the other person's nose. Insofar as you're helping one group achieve their goals, if that's not coming at the cost of other people achieving theirs, then great — I think we can agree that's a good thing. But there are groups of people whose goals involve preventing others from achieving their goals, or even potentially hurting other people. The best kind of resolution I know of is that you have to add this additional condition: only help people achieve those goals that are not harmful to others. I just wanted to end on an optimistic note. What do you see as the greatest positive potential for social science improving the world? Any thoughts on that?

DAN: That's a big question. I think that there are maybe two buckets of ways that social science can help that I like to think about. There's helping broad groups of people be a little bit happier, a little bit healthier, a little bit better off in their lives. And then there are social science programs that are much more targeted at narrow groups of people, and that are often designed to change their decisions or their behaviors in ways that themselves have ripple effects.

SPENCER: What would an example of such a group be?

DAN: An example of the broad approach is scalable mental health treatment, like I know you're working on with UpLift and MindEase, and there are lots of other examples of things like that. Or technological systems that help people connect and get things that they want and flourish in their own ways. Those might not have massive effects across lots of people. They might have small effects for many people. They might make everyone's life a little bit easier, or they might have outlier effects, like we were talking about with growth mindset: they might not work for most people, but then a few people get big benefits. And then there's a second category of interventions that might focus on things like using social science to change policymakers' thinking about climate change, or using social science to affect how life scientists doing risky biology research think about the risks of their work (which is my area). So you start out by thinking about a really serious problem, like nuclear proliferation or climate change or biological risks. And then you think about who is involved with that problem. Who are the stakeholders? Who are the actors, the decision makers? Sometimes it's a big, broad pool of people; other times, it's a narrow pool, a small group. And then you might think of ways to apply thinking from the social sciences to affect or improve their actions, their choices, their decision making. Depending on how you construe the social sciences, that might be quite broad. You might think about the role of game theory in political science in affecting the nuclear decision-making calculus of different countries. And maybe that's been tremendously helpful, maybe it's been harmful, maybe it's hard to say. But those are two pathways where I think social science can do a lot of good.

SPENCER: Awesome. Dan, thanks so much for coming on. This was a lot of fun.

DAN: Yeah, I had a great time. Thanks for having me, Spencer.

[outro]
