CLEARER THINKING

with Spencer Greenberg
the podcast about ideas that matter

Episode 035: Social Science and Science Journalism (with Jesse Singal)


April 8, 2021

Should we trust social science research? What is the open science movement? What is the "file drawer" effect? How can common sense help social science dig itself out of the replicability crisis? Is social science in the West too focused on interventions for individuals? How useful is the Implicit Association Test? How useful is the concept of "grit"? How should journalists communicate confidence or skepticism about scientific results? What incentive structures stand in the way of honestly and openly critiquing scientific methods or findings?

Jesse Singal is a contributing writer at New York Magazine and cohost of the podcast Blocked and Reported. He is also the author of The Quick Fix: Why Fad Psychology Can't Cure Our Social Ills, which came out April 6, 2021, and which you can order here. You can read more of his work at jessesingal.substack.com.


JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Jesse Singal about effect size and replicability, the Open Science movement, publishing null results, and journalism and scientific communication.

SPENCER: Jesse, welcome, it's great to have you here.

JESSE: Thank you for having me, Spencer, I appreciate it.

SPENCER: So you just finished this book, The Quick Fix. Do you want to give us a quick idea? What is a quick fix?

JESSE: Yeah, a quick fix is basically an oversimplified attempt to solve a complicated problem. The argument of my book is that psychology, and particularly social psychology, has been producing a lot of quick fixes that went viral via the TED Talk stage or big books, but don't turn out to be as exciting as we might initially expect.

SPENCER: Someone gets up on the TED stage and says, "This is going to revolutionize your life; it's going to change everything." They give some psychological result that's actually a tiny effect size, and psychologists are debating whether it really exists at all. People eat it up.

JESSE: Yeah, and I know one you've actually studied is power posing. That's sort of a paradigmatic example: you take a fairly small, in retrospect, statistically suspect study, and then you make pretty big claims about what it shows. From there, there are a lot of professional benefits to overclaiming in that manner.

SPENCER: Yeah, so power posing. For those who don't know, it's basically this idea that adopting different body postures can potentially do a lot of things, depending on who you trust: it can change your feelings of power, how powerful you feel in the moment, it can change your mood, the cortisol levels in your body, your risk-taking. There was this early study that claimed all these really cool benefits from just adopting one of these poses for a couple of minutes. Then there was an incredibly popular TED talk about this; I think it was one of the most popular TED talks of all time, actually. Then the original research started coming under fire. Do you want to tell us a little bit about that, Jesse?

JESSE: Yeah, so it was a combination of a big failed replication with a much bigger sample size than the original, plus some other mixed results, some hits and misses replication-wise. But the big thing was Dana Carney, who worked with Amy Cuddy on the original study. She posted a note on her UC Berkeley faculty webpage in 2016, I believe, basically saying, I don't believe this effect, I think we p-hacked. P-hacking is basically a way of including or excluding trials to make your findings statistically significant. It's something a lot of psychologists used to do without really knowing it was bad. So this isn't like evil, fraudulent researchers. It's just research practices that were a little bit weaker back then. So yeah, Amy Cuddy and some of her colleagues and co-authors really did stick with the claim for a while that power posing matters. And I know you've done some interesting work suggesting it could have a nice impact on someone's felt sense of power. I'm not ruling out that there's something there. But her big claims, I think, are considered to be debunked by now.
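[Illustrative aside, not from the conversation: a minimal Python sketch of how the kind of flexible "outlier" exclusion described above inflates false positives. The sample sizes, exclusion rule, and thresholds are assumptions chosen for illustration, not anyone's actual analysis.]

```python
# Two groups are drawn from the SAME distribution, so any "significant"
# difference is pure noise. We compare an honest analysis with one that,
# after peeking at the result, drops the least convenient point from each group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_per_group, alpha = 5_000, 20, 0.05
honest_hits = hacked_hits = 0

for _ in range(n_studies):
    a = rng.normal(0, 1, n_per_group)
    b = rng.normal(0, 1, n_per_group)
    p = stats.ttest_ind(a, b).pvalue
    honest_hits += p < alpha

    if p >= alpha:
        # Post-hoc "outlier removal" that happens to push the group means apart.
        hi, lo = (a, b) if a.mean() > b.mean() else (b, a)
        hi = np.delete(hi, np.argmin(hi))   # drop the point dragging the high group down
        lo = np.delete(lo, np.argmax(lo))   # drop the point dragging the low group up
        p = stats.ttest_ind(hi, lo).pvalue
    hacked_hits += p < alpha

print(f"false-positive rate, honest analysis:  {honest_hits / n_studies:.3f}")  # ~0.05
print(f"false-positive rate, with exclusions:  {hacked_hits / n_studies:.3f}")  # clearly higher
```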

SPENCER: So I'm really interested in this phenomenon sociologically, because I've seen this thing again and again, where it seems to me there's something that is probably a real effect, but it's very small, like a nice little tiny boost to something. Then there's some early, not very well conducted research that basically claims it's incredibly important, incredibly powerful, and it gets widely circulated. Then that research gets attacked, and people are like, actually, the research kind of sucks. And I think sometimes the field swings back too far the other way and says, oh, the effect is total bullshit, when in fact, all along it was this tiny little effect. It's kind of cute, kind of interesting, but not mind-blowing, and that's what I think happened with power posing. The reason I think the effect is probably real: first of all, there's a Bayesian meta-analysis that looked at a whole bunch of studies. And while they couldn't find the cortisol effects and the decision-making effects, they did find the effect on reported feelings of power quite often across the studies. And then the other thing is that I actually ran a preregistered replication of it. We're writing it up now. We never bothered to put it out, but we're going to put it out now. Really large sample size.

JESSE: I'll make sure it's in my next book, then.

SPENCER: Okay, cool. There's a really large sample size. I think it was over 800 people. And we did find that power posing had an effect on the feelings of power. And then you could ask the question, well, how big is that effect? Well, the answer is really small. So if you're doing a study on 40 people, there's almost no way you would even find it. Because the larger the sample size, the greater the ability to detect an effect. So if it's a really small effect, you're not even going to find it with 40 people, right? And then you could ask, okay, well, maybe there's just a little effect, but is it a placebo? And I think that's an interesting question. And in this particular case, it's sort of a funny question because it's like, well, if you believe you're going to feel more powerful, and it makes you feel more powerful, then you still feel more powerful, right? So I don't know, it's up for debate what it really means to be a placebo in this case. But anyway, that's kind of my opinion on it. Curious if you have a reaction to that.
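[Illustrative aside, not from the conversation: a small Python power simulation of the point Spencer is making. The effect size (Cohen's d = 0.2) and the sample sizes are assumptions for illustration, not figures from his study.]

```python
# With a small true effect, a 40-person study will almost never reach
# significance, while a study with hundreds of participants usually will.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def estimated_power(n_total, d=0.2, sims=2_000, alpha=0.05):
    n = n_total // 2
    hits = 0
    for _ in range(sims):
        control = rng.normal(0.0, 1.0, n)
        treated = rng.normal(d, 1.0, n)   # true effect of d standard deviations
        hits += stats.ttest_ind(treated, control).pvalue < alpha
    return hits / sims

for n_total in (40, 100, 400, 800):
    print(f"n = {n_total:4d}   estimated power ≈ {estimated_power(n_total):.2f}")
# roughly: power is under 0.10 at n = 40, but around 0.80 at n = 800
```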

JESSE: I think that's reasonable. I think a similar thing, the pendulum swinging back too far in the other direction, could potentially apply to grit, which we can get to. But within power posing, I think the problem isn't just overstating the claim, but also this entire chain of causal claims about the root of a societal problem. Cuddy's claims are premised on the idea that women don't get ahead in the workplace in part (or in large part, I'd argue; why else would we care so much about power posing?) because they are made to feel not powerful. I don't think there was ever a really good reason to think that that was a main causal driver of women sometimes not making as much as men or not getting the same professional opportunities. There was a good Harvard Business Review article, I think based on some academic research, debunking the confidence gap. I view confidence as a pretty close proxy to feeling powerful. To me, it's not just overstating the effects, which I think she did in her TED talk, but also coming up with this whole sociological theory of why there is gender inequity that was itself never that solid.

SPENCER: I guess if you have a study that shows some kind of effect, it's really useful if you can then say, "Ah, this is actually a cause of a major societal problem," right?

JESSE: It's wag-the-dog: you first find the cute lab effect, and then you decide, without much real inquiry, that it's a big deal for the real world.

SPENCER: Right, exactly. Another thing that I think is kind of interesting is when we talk about an effect size, we're really talking about an average over many people. I think that this happens in antidepressants as well, where you might have one person who takes antidepressants, and they're like, "I don't know, maybe it helps me, maybe it doesn't." Then you'd have another person take it, and they're like, "Holy crap, I feel so much better." You kind of average over all those people, and then you're like, "Oh, well, the effect is kind of okay." But in real life, we get to try a bunch of things. Maybe one person takes it, and they're like, "I don't know if it helps," so they stop taking it. The other person takes it, and they're like, "Holy shit, this really helps," and then they keep taking it.

JESSE: Yeah, they're not even functional without it, or they're suicidal without it. Exactly, exactly.

SPENCER: And so I think of things like power posing as that kind of thing. Hey, why don't you try doing some power poses before your next talk? If you find that helpful, great. Chances are, you aren't going to find it life-changing, but maybe every once in a while, someone says, oh, yeah, actually, I feel like that really does make me go on stage with more confidence. And that helps me.

JESSE: As I say in the book, this is not something that is going to harm you. If you try it, if you power pose, do it. It's more the idea of someone becoming a pop science superstar on the basis of really overheated claims that bothers me. I think all these ideas, the reason I call them half-baked is because they're half-baked. They're not zero-baked; they're not just dough to torture the metaphor. There's usually some little thing there.

SPENCER: Yeah. And you know, it's interesting because here we're talking about claims couched in science. But then you have this whole self-help literature, much of which really doesn't even attempt to do anything scientific. Many of the techniques, you couldn't even find a single study talking about them. So I'm curious how you think about that.

JESSE: Well, part of the story here is that self-help and social psychology, in particular, are intersecting. When Amy Cuddy says in her TED Talk that you can rewire your brain by doing a power pose and go from closed in and introverted to really... I forget the exact claim, but she does say "rewire your brain." That's self-help language. Except in the trivial sense that everything rewires our brain every second, I would prefer social psychologists not talk about a modest lab effect as rewiring your brain. So, yeah, I think social psychology has sometimes borrowed too much from self-help. Positive psychology, which I have one chapter about, absolutely makes self-help claims; maybe it's an even bigger problem there than in social psychology. So, yeah, I just think there should be more of a barrier between those two things.

SPENCER: I see. So how would you prefer scientists talk about their work? Let's say they have something that they think could be useful for people, you know, if they were to do this before a presentation or something like that.

JESSE: You wouldn't get famous or get a TED Talk doing that. I think our best hope is changing norms within psychological science toward things like pre-registration and replication. You have journals for null results. I'd like it to be the case that 10 years from now, a 28-year-old social psychologist can have a blockbuster study that really took an idea, put it through its paces, and showed that it's probably false, with data from a null-result study with a sample size of 5,000 people. I think the more we value that sort of work and understand that it isn't just positive results that add to our body of scientific knowledge, the more the incentives will change, and people won't need to overclaim on the TED Talk stage to have a successful career.

SPENCER: What you're talking about ties in with the whole open science movement. Do you want to talk a little bit about that?

JESSE: Yeah, the Open Science movement is, as the name suggests, a movement to make science more open and to promote data sharing. Everything I'm about to talk about involves a complicated, nerdy debate among people who are much more statistically sophisticated than I am. The book just gives a quick overview of all these ideas, but they're important. Two ideas that are important are pre-registration and registered reports. The basic idea is that if you test a bunch of variables or a bunch of effects, and you can pick and choose which ones you then submit to a journal to publish, that gives you a lot of leeway to sift through a lot of noise for what might appear to be a signal, but some apparent signal is always going to pop up accidentally, by the nature of statistics. Innovations like pre-registration and registered reports allow you to publicly post beforehand exactly what your research plan is and what your hypotheses are. The stronger version of that is you actually submit to a journal your entire experiment. You say, we're going to test power posing in the following manner. Under this system, the journal will accept or reject your paper without knowing the results; they're committing to publishing the results, whether you get a null result or positive results. That is potentially, if it became common, revolutionary, because a lot of the problems in psychology right now have to do with the bias for positive results. That leads to, for example, the file drawer effect, where if you get a null result, you know you can't get it published anywhere. So you just throw it in the file drawer, and the world is denied potentially valuable knowledge about an idea being weak.

SPENCER: There are so many interesting things to unpack there. Let's talk about the file drawer effect for a moment. People are doing all these studies, and most of them just never see the light of day. Imagine 10 people study the same thing. Nine of them find no effect; those go into someone's file drawer, and nobody sees them. One of them finds an effect, and that one gets published. Now the research literature has a really biased representation. Then someone does a meta-analysis, right? All those negative results are missing. They look at all the published results and aggregate them together, and they say, oh, look, it looks like there's an effect. But in fact, what they don't realize is there are all these missing negative ones, too; they might actually get the wrong result. Then they try to do bias corrections, which are complicated statistics to try to guess how many negative results might be missing from the literature. There's a whole debate about how accurate that is, and so on.
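[Illustrative aside, not from the conversation: a minimal Python simulation of Spencer's file drawer scenario. The true effect size, sample sizes, and number of studies are made-up numbers chosen for illustration.]

```python
# Many small studies of a tiny true effect are run, but only the statistically
# significant ones get "published". Averaging the published estimates then
# overstates the effect; averaging everything would not.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_d, n_per_group, n_studies = 0.1, 30, 2_000
all_estimates, published = [], []

for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_d, 1.0, n_per_group)
    d_hat = (treated.mean() - control.mean()) / np.sqrt(
        (treated.var(ddof=1) + control.var(ddof=1)) / 2
    )
    all_estimates.append(d_hat)
    if stats.ttest_ind(treated, control).pvalue < 0.05:
        published.append(d_hat)          # everything else goes in the file drawer

print(f"true effect:                 d = {true_d}")
print(f"average of ALL studies:      d ≈ {np.mean(all_estimates):.2f}")
print(f"average of published ones:   d ≈ {np.mean(published):.2f}   (badly inflated)")
print(f"share of studies published:  {len(published) / n_studies:.0%}")
```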

JESSE: Yeah, you become sort of a ghost hunter almost, looking for research. It just makes it much more complicated.

SPENCER: Exactly, exactly. Clearly, that's really problematic. I think I have a slightly controversial opinion on null results, which is just to say, I actually think the vast majority of null results are useless. The reason is that if you think about the space of all possible psychological hypotheses (that X does Y), almost all of them are false. Our prior should be that any given one has a really low probability of working. If someone tests some really wacky random hypothesis, then by publishing that null result, they add almost no information. What I think would be ideal is that when people get a null result, most of the time, all they do is submit to some database something really simple, like: here's a description of the test I did, here's what I was manipulating, and here was the outcome. Something that takes about five minutes. But a null result on a hypothesis that's widely believed, that's where I think the big exception is. If you're testing something that's already out there in the literature and people believe it works, and you get a null result, that's where I think it's incredibly important to actually publish it in a journal so that it's really out there in the literature, as opposed to just coming up with some crazy idea, testing it, and saying, oh, it doesn't work. I think just limiting it to five minutes probably makes sense. I'm curious to hear your reaction to that.
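[Illustrative aside, not from the conversation: a rough Bayesian sketch of Spencer's point, in Python, with made-up values for the priors, power, and alpha. It shows why a null result on a long-shot hypothesis carries little information, while a null on a widely believed one shifts belief a lot.]

```python
# P(hypothesis true | non-significant result) for a study with given
# statistical power and false-positive rate.
def posterior_after_null(prior, power=0.8, alpha=0.05):
    p_null_if_true = 1 - power           # the study misses a real effect
    p_null_if_false = 1 - alpha          # the study correctly finds nothing
    numerator = p_null_if_true * prior
    return numerator / (numerator + p_null_if_false * (1 - prior))

for prior in (0.02, 0.50):               # wacky long shot vs. widely believed effect
    post = posterior_after_null(prior)
    print(f"prior {prior:.2f} -> posterior after a null result ≈ {post:.2f}")
# prior 0.02 -> ~0.00: we already thought it was false, so we learned almost nothing
# prior 0.50 -> ~0.17: a well-powered null meaningfully moves the field's belief
```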

JESSE: I would need to think through what you're saying, because with so many of the ideas in my book, there was an issue where there probably were a bunch of null results that were not published. That could have slowed down the spread of these ideas. I guess you're saying those null results could have lived in some online universe where they would have been accessible, even if they weren't published as full-blown peer-reviewed studies.

SPENCER: Yeah. So the challenge that I see is that because publishing null results doesn't get much prestige, people really don't want to invest the huge number of hours it takes to write up a full paper and describe all the methodology. If you're actually going to get null results out into the world, it has to be something really, really short and simple, with the exception being a null result that contradicts some finding that's already believed. If everyone thinks power posing works, and you find that power posing doesn't work, that should be published in a top-tier journal, in my opinion, because it's actually contradicting an existing finding people believe in.

JESSE: Yeah, I think there's some wisdom to what you're saying. I'm going to have you ghost write my next book on this stuff.

SPENCER: Alright, cool. So that's null results. Then we've got pre-registration, where you basically submit in advance what you're planning to do to some database, and that way you can be held accountable for actually doing the thing that you claimed. Now, one of the disturbing things about that is that nobody really checks pre-registration plans. There have been some really interesting examples where, for example, drug companies have been required to pre-register, and then nobody checks. It turns out the analyses they end up doing were not the things they pre-registered. So that creates an interesting question: who's going to enforce any of this?

JESSE: That would render it just about useless if that's happening a lot. In that part of my book, which I've really designed to be an intro level for lay readers, I present some of the tools we could use to improve science, and I'm optimistic they will. But that's exactly the sort of thing where it's like, you know, what's the expression, it's all in the doing? A lot of these ideas sound good, but unless you can really force people to stick to them, it's like the honor system. It doesn't really do much for you.

SPENCER: Now, on the other hand, it does make it much more likely someone's going to catch you, right? If your pre-registration is sitting out there, someone could check in and say, "Hey, you didn't do what you claimed." Even just knowing that could happen, maybe that does have a really useful effect.

JESSE: Yeah, I think making these more public in general undoubtedly nudges behavior in the right direction.

SPENCER: So after reading about all these kind of funky results that maybe don't hold up, what did you come away with in terms of your view on social science as a whole?

JESSE: I just about lost all my faith in it. And I think we should be incredibly pessimistic, except for the fact that there appears to be real energy, especially among younger researchers, to turn the ship around a little bit. But what's the range they get when they try to do big replications of multiple studies? It's usually like a coin flip, if that; in social psychology, I think it was something like 25% in one big effort. So with a body of literature where, if you pick a study at random, there's at best a coin flip chance that it replicates, why should I believe in it? Why should I feel confident as a journalist that I'm bringing actual facts to my readers, or good beliefs to my readers?

SPENCER: It's so sad to me, because it seems that social science is really working on some of the most important problems, certainly not all of the most important problems, but problems like: how do we help humans be happy? How do we help mental health? How do we help humans get along and not have conflict? How do we fight racism? There are so many of these really important questions; it seems like social science should be the field that's tackling many of these things, not the only field, obviously, but having a real role to play. And then it's like, oh man, if you can't trust the research, then what are we doing? And who's actually playing the role of answering these questions?

JESSE: There's a really interesting history that I unfortunately had to cut from my book, just because it was too in the weeds and didn't fit with the rest of the arc. But World War Two spawned some amazing collaborations between social psychologists, or psychologists in general, and people from other fields like sociology, anthropology, political science. That sort of cross-disciplinary work, I think, long benefited from a little bit more richness and depth, just because you had people from different disciplines in the room. I have long forgotten the best examples of this, so don't ask me. Then there's this history where, starting in the 50s, psychology gets a lot more professionalized. It turns a lot toward experimentation in lab settings, particularly social psychology, and often there are no sociologists or anthropologists to be found. I just think the field became a bit too enamored with its own cute lab tricks, and that's why it went down a less productive road.

SPENCER: It's really interesting, because on the one hand, you might think, hey, isn't it good that it became more mathy? Right? I love math. You think, doesn't math equal rigor? And it's true, it certainly seems to be the case that it's a more mathematical science today, where it's more like, okay, we're going to collect a bunch of data on a bunch of different people, and we're going to run some statistics, and that's going to answer this question. Whereas if you go look at some of these old classic psych studies, I mean, they're hilarious. There's the guy who had someone dress up in a bag and sit in the back of his classroom for months. He studied the reactions of his students to this person in a bag; he just wanted to know, are they going to be friends with the bag? Are they going to hate the bag?

JESSE: I would hate the bag. I would just instantly not trust it.

SPENCER: Yes, but then it would slowly grow on you, and you wouldn't know why.

JESSE: Right.

SPENCER: But you know, it's like, you just don't really hear about a lot of studies like that today. Another one of my favorite old studies was this guy. He's a psychologist, he hired an actor, and he wrote a completely nonsense speech for this guy, just total gibberish, but the actor was incredibly charismatic. Then he got the guy a speaking spot at one of the psychology conferences. He had this actor give this gibberish speech very charismatically to all his psychologist colleagues. Then he handed out a survey to ask them their reaction, and they loved it. They were like, Oh, this is great. Then he's like, Ha, I tricked all of you. You're not even reacting to the content. You're just reacting to the charisma. That was kind of amazing, actually. You just don't see a lot of that today.

JESSE: No, there was this real burst of interest, for obvious reasons after World War Two, in conformity and groupthink, and it led to some very colorful experiments. I have to wonder, I mean, we already know the Stanford Prison Experiment has fallen apart a little bit, and some of the other big ones have fallen apart. But overall, I just find a lot of the famous experiments from the 50s and 60s to be more theoretically grounded. In a lot of 21st century social psychology, a lot of the time, you'll read the papers, and they don't even really bother to produce a theory for why this would be true of human behavior. It's just, oh, we found this cool thing. It must be true because of the p-values.

SPENCER: It just seems like they're just like, Oh, we found an effect. Let's reverse engineer some explanation for it. Yeah.

JESSE: Especially with social priming, which is this area focused on the supposedly large effects subtle stimuli can have on us. A lot of those studies are so divorced from what we already know about human nature, which I have a chapter on and get into a fair bit.

SPENCER: It's so sad. If you read popular science books from over a decade ago, almost all of them have this priming research in them somewhere. It's like, oh yeah, if you prime people with old-person words, then they walk slowly out of the lab, and if you give them a warm cup, then they interpret people as being more warm, and all this kind of stuff. And it's just like, oh, is all this stuff just total bullshit?

JESSE: Brian Nosek, who's a big open science guy, told some other journalists that he wasn't aware of any of the big social priming studies that had replicated. My favorite was one where, if you look at a statue of The Thinker, you know, that famous statue, you become less religious. And if you look at a statue of a guy about to hurl a discus, you become more religious. I think it was a really big effect size, maybe 20 points on a 100-point religiosity scale. And to his credit, the co-author of that study told Vox after it failed to replicate, yeah, that was a silly study. But did anyone ever believe that looking at a statue could make you significantly more or less religious? We all know from everyday experience and from a lot of other research that that's false. But there was a period when, as I write in the book, some of the smartest people in the world thought these effects were real. Danny Kahneman, who is a genius, said, you cannot dispute these, they are real. He wrote that in a book, and that part did not age well, unlike the rest of the book, which is wonderful.

SPENCER: He wrote that very angry letter to the priming psychologists saying, Look, get your act together. Your field is going to be in shambles if you don't show that your results are real. I mean, I'm paraphrasing.

JESSE: No, yeah. And the subtext being you're making me look like an idiot by supporting you guys.

SPENCER: It's really wild. I'll just say for clarity, when we're talking about priming, there are differences between social priming and the classic priming studies. There are these word priming studies, where they'll give you a list of words that can make it more or less likely you'll think of another word later. And that, as I understand it, definitely does hold up. Is that right?

JESSE: Yeah, there's an interesting feature of our cognition where if someone flashes the word "ice cube" in front of you, you'll be more likely to recall it later on. That's distinct from social priming, which is more about our behavior and often led to overblown claims that people's belief in global warming swings a great deal just because they feel warm or feel cold, stuff like that.

SPENCER: Yeah. And you also pointed out the effect size: could looking at The Thinker really change your religiosity so dramatically? That doesn't make any sense. I just want to point something out about that, because I think it's really insightful. If you're thinking about effect size, if the effect size is really, really tiny, then it's like, well, why would you care? Right? This is probably not going to be practical or useful. On the other hand, if the effect size is really, really huge, then it kind of makes it hard to believe. Because it's like, really? Could it be that this little thing that you do in two minutes influences our behavior that much?

JESSE: It's a Goldilocks theory of effect size? Exactly, exactly.

SPENCER: There's some middle range where it's actually most plausible that that thing is both useful and real.

JESSE: I think there's something to be said. I want to write a book about this, so nobody steal it. But common sense, and common sense alone, I think, could help social psychology dig out of the replication crisis. There are these interesting studies where, if they just give laypeople a description of a study and ask, how likely do you think this is to be true, they can predict at better than a coin flip rate whether it will replicate, and I believe they're even better at it if they're professional psychologists. It's interesting because we think of science as being sort of an antidote to common sense, because common sense is often not true. But when it comes to claims like looking at a statue can make you way less religious, there's probably a place for common sense to step in and be like, let's triple check this.

SPENCER: Yeah, absolutely. But it raises this kind of difficult threading-the-needle problem, which is that insofar as we reject stuff that disagrees with common sense, we can't learn anything new. And insofar as a result violates common sense, we're suspicious of it, right? Really, we need a science that's so reliable that even when it violates common sense, okay, maybe that causes us to double or triple check it, that seems fair, but sometimes the science can actually be right and common sense can be wrong.

JESSE: You can sort of use Bayesian reasoning to say, okay, we got this surprisingly strong effect; that bumps up our prior probability that it's true. Let's run it again, with a slightly higher prior probability. There are ways around that problem. But I agree that common sense alone shouldn't be the gatekeeper here.
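[Illustrative aside, not from the conversation: a Python sketch of the sequential updating Jesse describes, with made-up numbers for the prior, power, and false-positive rate. A single flashy positive result raises belief in a surprising claim, and subsequent failed replications pull it back down.]

```python
# One Bayesian update per study: a significant result (positive=True) or a null.
def update(prior, positive, power=0.8, alpha=0.05):
    p_data_if_true = power if positive else (1 - power)
    p_data_if_false = alpha if positive else (1 - alpha)
    numerator = p_data_if_true * prior
    return numerator / (numerator + p_data_if_false * (1 - prior))

belief = 0.05                             # a surprising, counterintuitive claim
results = [True, False, False]            # original study, then two failed replications
for i, positive in enumerate(results, start=1):
    belief = update(belief, positive)
    print(f"after study {i} ({'positive' if positive else 'null'}): P(true) ≈ {belief:.2f}")
# roughly: 0.05 -> 0.46 after the flashy original, then back down to ~0.04
```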

SPENCER: Exactly. We should be aiming for violations of common sense, even though we should also be more skeptical when someone claims that they have something that violates common sense.

[promo]

SPENCER: Going back to your book, one of the themes throughout it that I found really interesting is the critique of the focus on the individual. Do you want to comment on that?

JESSE: This is something I could certainly feel differently about in 10 years. But I think for most difficult societal problems, inequality, racism, or getting people enough education, it's just pretty unlikely that there's going to be any individual tweak that helps us make that much progress. The book's big argument is that on all these issues, take the implicit association test with racism, the claim implied, and often outright stated, is that by measuring a bunch of people's implicit bias, and presumably down the road having some intervention to reduce it, we can make the world racially fair. I'm skeptical of that, just because it isn't really attitudes, per se, producing racially discriminatory outcomes; American racism is much more complicated than that. For another thing, the tool is weak, but maybe that's a side issue. Over and over, I found myself looking into these issues and being skeptical that tweaking individuals can get us that far. I think that's a little bit different from the question of whether a single motivated individual seeking to improve themselves can make progress with CBT, or mindfulness, or whatever. But in terms of mass solutions to pressing social problems, I'm quite skeptical of individualized approaches.

SPENCER: In terms of the contributions of individual behavior versus societal, systemic things, the Effective Altruism community also gets criticized sometimes for the same reason: people say they're not focused enough on changing society, that they're too focused on interventions on the margin. If we give people who are really poor some cash, they can benefit a lot from that. Or if we give people in malaria-prone areas some malaria nets.

JESSE: I don't talk about that in the book, but I love that. I love giving people cash and malaria nets. I think it's often been shown that by doing that, you make a big difference. My gripe is really with individual psychology. I'd be more skeptical of an intervention that tried to change villagers' behavior to reduce the probability of them contracting malaria, and that seems to be the general thrust of American social psychology right now.

SPENCER: I see what you're saying. I think sometimes these things are really wrapped up together. A really interesting example was this nonprofit that was trying to get people to put chlorine in their drinking water in an area where the drinking water often was contaminated and would get people really sick. I looked into what they did a bunch; I thought it was really fascinating. The first thing they did, which I think was a really brilliant idea, was install these chlorine dispensers right at the wells where people get their water. You're like, oh, that's amazing, because people are going to the well, the chlorine dispenser is right there, and they can get information, but they didn't get good uptake of people using the chlorine. So then they pushed out these educational campaigns to try to explain to people why they should use chlorine. Still, they weren't getting really good uptake, so they then actually paid young people in the community to wear these pro-chlorine T-shirts and had them drive around on motorcycles promoting chlorine. They also realized that if the chlorine dispensers ever went empty, they would have to get them refilled really quickly, because if people get used to a dispenser being empty, they stop using it. After all this work, they finally created this pretty robust chlorine usage program. If you think about all the things they had to do, it wasn't enough just to put in the chlorine; they also had to solve these really complex behavioral challenges. It was really the intersection of all of that coming together that led to this intervention working. To me, this is how I think about behavior change in the real world. One little trick is probably not enough; you have to bring all these different pieces together. But I also believe that if we do it right, if we pay close attention, if we iterate, maybe we can change behavior for the better. It's just not going to be one quick trick.

JESSE: No, I agree. I'm sympathetic to that. There's a MacArthur Award-winning social psychologist named Elizabeth Levy Paluck, Betsy Levy Paluck, at Princeton. One of my favorite studies I've written up was an anti-bullying intervention of hers in New Jersey schools that seemed to do pretty well. We had to cut it from the book, unfortunately, but maybe I'll include a link in the show notes to my article about it, which I'll send you. It's a richly theorized understanding of why people bully, having to do with pluralistic ignorance: one kid is bullying another, all these kids are gathering in a circle watching, and everyone thinks everyone else is cool with the bullying, when in fact most kids are against bullying. She and her team developed a specific intervention where they found the most well-connected kids in a school via social network analysis and taught them to more assertively institute these anti-bullying norms, and it appears to work. If you know enough about human nature and human psychology, and I'm not saying I do, you could read a study like that and predict it would be more likely to work, because it's actually a well-grounded theory of why people bully, and it is attacking a causal factor there. In a way, I would argue, the implicit bias test isn't actually attacking a proven causal driver of racism.

SPENCER: So maybe the idea is there's a difference between, "Oh, I found a cool effect in the lab. Let me try to generalize this to everything as a quick fix," versus, "Okay, I'm going to investigate bullying, and I'm going to look at many different ways to tackle it. I'm going to notice that, oh, here's how it's actually working. It's actually the kids are not in favor of bullying, but nobody's saying anything." There are certain influential kids, basically trying to study the kind of causal structure of the problem in great detail, and then designing a targeted strike based on that causal understanding. It's almost like the reverse, if that makes sense.

JESSE: Yeah, it's sort of like, I don't know, maybe this is the wrong analogy, but top down versus bottom up. I don't want to be too unfair to these researchers. The reason the IAT researchers studied implicit bias is because they speculated implicit bias is a big driver of societal discrimination. That's a perfectly reasonable thing to hypothesize. My problem is, as far as I know, it's never been remotely proven that we should care more about implicit bias than a million other factors, including explicit bias. But I think the way you put it is right. There's a difference between, "There's this cool lab intervention, and we're just going to assume, without much evidence, that it works in the real world," versus spending a lot of time observing the real world. I think it's not an accident that Betsy Levy Paluck has done a lot of field research and then tries to take those insights back with her to her office and to her lab.

SPENCER: Yeah, that makes sense. So let's talk about the implicit association test, because I thought you wrote a beautiful article about it that I learned a ton from. Do you want to break down what the test is and what your current thinking on it is?

JESSE: Yeah, so I adapted that article into a chapter with a lot of new stuff. The implicit association test: you sit at a computer, and you can do this on Harvard's Project Implicit website. It'll say something like, hit "I" when you see a positive word or a Black face versus "E" when you see a negative word or a White face. It basically draws certain conclusions about how quickly you associate concepts in your head. It gives you a score telling you how much implicit bias you have against White people or Black people. There are different versions of the test for all sorts of stuff, you know, anti-fat sentiment, for example. German researchers did a version with Turkish names, because Turks are sort of an oppressed minority in Germany. For a long time, from when this test was introduced in 1998, the proponents of it said that this is a big deal. They think implicit bias is driving a lot of racist outcomes in society, or racially discriminatory outcomes in society. One problem is the test is a very weak predictor of human behavior. It accounts for something like 1% of the variance in ostensibly racist behavior in lab settings, and I'm not even sure all their lab tests of racism are actually tests of racism, because it gets complicated to test racism in the lab. They admitted in 2015 the test is too weak to diagnose individuals. The other problem is, like you said, the target it's striking at: it just isn't clear to me that there's much evidence implicit bias is a bigger factor than any of a million other things. So why are we spending likely hundreds of millions of dollars and so much time and energy talking about implicit bias without knowing it's that big a deal for real-world outcomes? I think it does affect some real-world outcomes at the margins, to be clear, but I think its role has been overstated.
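[Illustrative aside, not from the conversation: a quick Python sketch of what "accounts for about 1% of the variance" means. The correlation of 0.1 is an assumption chosen to match that figure, not a value from any specific IAT study.]

```python
# A correlation of r ≈ 0.1 between test score and behavior corresponds to
# r^2 ≈ 0.01: knowing the score removes only ~1% of the variance in behavior.
import numpy as np

rng = np.random.default_rng(4)
n, r = 100_000, 0.1
score = rng.normal(0, 1, n)
behavior = r * score + np.sqrt(1 - r**2) * rng.normal(0, 1, n)   # constructed to correlate ~r

r_hat = np.corrcoef(score, behavior)[0, 1]
print(f"correlation               ≈ {r_hat:.2f}")
print(f"variance explained (r^2)  ≈ {r_hat ** 2:.3f}")   # ~0.01
```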

SPENCER: It's interesting, because I agree with you, the implicit association test is quite weak. I think one of the strongest points on that, which I actually first learned from your article, was the really weak test-retest reliability. As I understand it, if you take the Implicit Association Test, let's say the race one, on a Friday, and then you take the same test again, let's say the next Friday, the correlation between those two results, which you'd want to be really high, is not. I think the correlation was something like 0.5 or 0.4.

JESSE: Somewhere in there. Yeah, the race one is 0.5, which my understanding is that's much lower than what's considered acceptable test-retest reliability for other psychometric instruments.
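[Illustrative aside, not from the conversation: a small Python simulation of what a test-retest reliability around 0.5 looks like in practice. The population, noise levels, and the "top quartile" cutoff are assumptions for illustration.]

```python
# Each person's observed score is a stable "true" level plus session noise of
# equal variance, which yields a retest correlation near 0.5 and a lot of
# people changing category from one week to the next.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
true_level = rng.normal(0, 1, n)                   # stable individual differences
friday      = true_level + rng.normal(0, 1, n)     # noise variance == true variance
next_friday = true_level + rng.normal(0, 1, n)

r = np.corrcoef(friday, next_friday)[0, 1]
flagged_week1 = friday > np.percentile(friday, 75)           # "high bias" on test 1
flagged_week2 = next_friday > np.percentile(next_friday, 75)
stayed = (flagged_week1 & flagged_week2).sum() / flagged_week1.sum()

print(f"test-retest correlation ≈ {r:.2f}")
print(f"share of week-1 'high bias' group still flagged in week 2 ≈ {stayed:.0%}")
# roughly: r ≈ 0.5, and only about half the top quartile stays in the top quartile
```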

SPENCER: Right? I mean, it would be like a test for depression where, you know, it's not reliable enough to actually tell if someone's depressed because every time they take it, there's so much noise in the result. It's like, this is basically not a good way to diagnose if you even have implicit bias. Forget about even the question of how important implicit bias is; it doesn't even reliably tell you that. And yet, you know, as you point out, this is sort of this coming-of-age ritual where everyone takes this test and then decides, oh my gosh, I can't believe I have this implicit bias. What they don't realize is it's not really a good measure of that.

JESSE: No, I think it measures some implicit bias. There are some patterns, with, I think, Republicans scoring higher on supposed anti-Black sentiment, and differences between how Black and White people score. So there are some patterns. There's something there, but it's just very, very noisy.

SPENCER: I agree with you. I think it probably does measure a little bit of implicit bias, but it seems like what it's measuring is a little bit of implicit bias plus a whole bunch of noise that makes it unreliable. Plus maybe some other stuff, like out-group stuff. Do you want to talk about that?

JESSE: Yeah, well, the one time I do mention Betsy Levy Paluck in the book is that she co-authored a study testing the theory that one of the things the IAT is signaling is not negative sentiment against a group but awareness that the group is seen as downtrodden, as an out-group or a low-power group in society. She and her colleagues created a fake group called Noffians and induced people to see Noffians as oppressed, and then gave them an IAT. Sure enough, they came out as implicitly biased against Noffians, a nonexistent group.

SPENCER: That's really interesting. So it's just their association with the oppressed that causes the essentially slower reaction times on certain stimuli and faster reaction times on other stimuli. So basically, we have this not very reliable measure of implicit bias. I think, though, implicit bias is more important than you think it is. If you break down why all these problems are occurring, it seems to me that poverty is a really major factor, very clearly. Then there are structural issues, where you can have a setup that ends up being racist even though the individual people involved don't intend for it to be racist; it just happens to disadvantage certain racial groups. Those structural forces obviously seem really powerful. But I would guess that at the margin, when people are reading a resume or making a snap judgment on something, these implicit bias scores actually are real and do come into effect. I don't know what you think about that.

JESSE: I think it's reasonable to think they have some effects. This is partly a methodological challenge, because some of the strongest evidence for implicit bias comes from these audit studies, where there's a consistent, pretty sizable effect in which people with white-sounding names are more likely to get callbacks for interviews than people with black-sounding names. So that's a real effect. It seems to have been replicated. But there are a couple of caveats. One of them is that there might be a socioeconomic confound, where the white name Christian sounds wealthier than the white name Cletus, and whatever bias there is might come down to socioeconomic status. These very clever researchers in Chicago made up a fake set of ethnic-sounding names, which I think sounded sort of vaguely Eastern European. They discovered the same bias against those names as against black-sounding names, relative to typical white ones. So even in the area where there's the strongest evidence for implicit bias playing a causal role, you can really quibble about how much of that effect is implicit bias versus other factors.

SPENCER: Right, people tend to prefer people that are like them. In general, people tend to favor the in-group more than the out-group, and then there's implicit bias on top of that. These measures are noisy; it really gets complicated. Also, in terms of just explicit bias, having run many studies online, sometimes I have study participants that are just clearly expressing really explicit racism. It's just shocking. Some people are just really racist and not in the sort of subtle way that we're talking about.

JESSE: Not only that, but as I point out in the chapter, every time the Feds investigate a police department accused of abuses, they find a lot of explicit racism. To be clear, I don't think the IAT folks ever claimed explicit racism was dead per se, but I think they over-extrapolated from the fact that it's undeniably on the decline in survey results. That doesn't mean it's gone or that it doesn't explain some outcomes.

SPENCER: Right. I think what you're referring to are these studies done every year where they would ask people a whole bunch of questions that are very explicitly racist. The rate of people agreeing with them fell and fell to the point where it was so low that I think they just stopped doing the survey, because almost nobody would agree with them anymore. But still, it's hard to know; the exact number depends on the question, but maybe 5% or 10% of my sample in some surveys will say things that I think are pretty racist. That's certainly very far from what it was 50 or 100 years ago, it's just not even comparable, but that could be another factor causing some serious consequences for some people.

[promo]

SPENCER: All right, so another topic that I want to talk to you about is how you think journalists should act in the world of science communication, especially in a world where we can't trust all the scientific results at face value.

JESSE: Yeah, I think the easiest answer is we should just have way fewer write-ups of single studies. I think that's been a big culprit because you often can't trust a single study unless it was conducted in a careful way. I was actually just talking to someone about what science journalists should do, and I would steer them away from covering single studies and toward coverage of the replication crisis and coverage of these methodological reforms. The problem is there's this beast that needs to be fed with hot takes, and hot takes lead to bad science write-ups. I don't really know how to solve that, and it's victimized a lot of otherwise good journalists. If journalists could slow things down a little bit and just not do single study write-ups, that'd be a good start.

SPENCER: Yeah, so basically, assuming that any one study is probably not definitive on a topic. But I think what gets me really demoralized is you'll have topics like ego depletion. I don't know if that's one you looked into, but there'll be like 300 studies on it, and there's still a debate on what to make of it all. Is it real? Is it bullshit? Is it this or is it that? At what point are journalists supposed to chime in?

JESSE: Yeah, no, that's a really interesting question. I think this is another one. This was an offline conversation I had with someone, but, for example, there was a huge debate over mindset interventions, with studies pointing in different directions, I think including some failed replications. But occasionally there are these seminal studies that everyone seems to agree are a big deal. Nature did one, I think it was Nature, with a huge sample size, on mindset interventions. These are interventions where you tell kids, your brain is like a muscle, you can strengthen it, you can become smarter; intelligence isn't set.

SPENCER: So it's like growth mindset?

JESSE: Growth mindset versus fixed mindset. For a long time, there were some overhyped claims here. What was interesting is this study showed something, and it was something you could potentially induce with a 60-minute session. It only worked on the worst students, but that's who you'd want to target anyway. The end result was something between total debunking and total embrace of the hype. Everyone agreed this was a big, important study because of its sample size and the quality of the researchers. Something like that should be reported on, but I don't think every last crappy study that pops up on EurekAlert, which is this sort of database of new studies sent out daily, should be reported on.

SPENCER: Right. When it comes to that mindset intervention, teaching kids growth mindset, one of the things I think is really interesting about that is it's an example where, as far as I can tell, there probably is an effect there, in fact, it probably does something. But it's probably way less than you think if you watch the popularized talks about it. On the other hand, it's so short, as you pointed out; you can teach it in an hour or maybe a few hours. That's probably actually a pretty good use of time for students. Think about the average three hours in school; nobody's ever done a randomized control trial on that. How valuable is the average three hours of school? Probably not as valuable as a growth mindset intervention for the same three hours, right?

JESSE: Yeah, I mean, I think if you, halfway through the semester, take the 20% of the weakest kids, pile them in a room, and give them this talk, it's such low cost that the bang for the buck sounds good to me. I'm still not thrilled that they overclaimed for so long about it. But it could be 20 years from now that it's standard in education, or it could be they do another study that's less impressive. I think a lot of these interventions might end up in a place like that, where in the right context, it can do a little good, but it never should have received as much hype as it did.

SPENCER: Right. That speaks to this idea that, on the one hand, there's the cost of administering the intervention. The lower the cost of administering it, the lower the bar needs to be in terms of effect size or value produced. If you can teach it in an hour, it doesn't have to be that great to be worth teaching. The other factor is the risk of the intervention. It seems like a growth mindset intervention is probably not going to harm anyone, teaching them that you can grow your intelligence level and get better at anything you try, as opposed to teaching that when you fail, that tells you who you are as a human. On the other hand, you could imagine interventions that are a little shaky and maybe actually have more risk of going badly. There, we want the standard of evidence to be higher.

JESSE: Definitely.

SPENCER: One thing that people could take away from all this is being like, oh, individual-level psychological interventions don't work. But actually, I think the opposite is true. I think there really are some that do work. I think you mentioned this in your book briefly, but cognitive behavioral therapy, for example, has tons of evidence that it sometimes dramatically helps people with depression or anxiety or things like that, usually administered with a therapist, but there are ways to self-administer as well.

JESSE: You know, that seems to work. To me, it's the difference between thinking that these interventions can solve society-wide problems versus help make an individual feel better. There are a lot of reasons to help make individuals feel better, and there's a whole field of clinical psychology dedicated to that. So yeah, when it comes to that stuff, I'm all about individual interventions.

SPENCER: Okay, that makes sense. But also, a lot of times those interventions take real time; if you think about how much time someone is putting into them, they might be doing an hour a week for 14 weeks or something. That's a far cry from a two-minute little thing you can do on yourself, right?

JESSE: Exactly. A lot of these ideas are pitched as requiring very little time or effort, and I think that's why they catch on, or that's one of the reasons.

SPENCER: I'd love to hear your thoughts on grit a little bit. Do you want to talk about what grit is and what's your opinion on it?

JESSE: Yeah. Grit was a new scale developed by Angela Duckworth and other superstar social psychologists. The Grit scale has two sub-factors. One is passion; that's just feeling excited about what you're doing. Perseverance is not giving up when the going gets tough, meaning you're not necessarily enjoying what you're doing in the moment. There's also this aspect of not flitting around from thing to thing; I think one of the items reflects that, something like, "I often switch projects midstream." So it's measuring two or three different things in reality, and it was presented as uniquely predictive of success in various domains. Angela Duckworth said, I think the direct quote to the Times was, that it "beat the pants off" measures like SAT scores and intelligence. It gave rise to this really inspiring storyline where, and it sort of goes back to the fixed mindset thing, your ability level isn't fixed. It's not just intelligence that matters, but also your hard work, and that's something that can be addressed. Duckworth was honest all along in that she said, we don't know of any grit interventions that work, we're just exploring this idea. One of the problems was that grit turned out to be just about identical to conscientiousness. The correlations, once you do the correct statistical corrections, are so high that they're basically the same thing. Conscientiousness is a well-known Big Five personality trait that's been studied for decades, so she didn't really discover anything new. There might be, in this subtle way I describe in the book, a little bit there in terms of what grit can predict that other stuff can't, but it turned out grit just wasn't that good a predictor, especially when you control for that other stuff that supposedly doesn't matter. The first big nationally representative study of grit was done by Israeli researchers. I forget the exact numbers, I don't have them in front of me, but I think they found that intelligence is 38 to 50 times more important than grit for workplace outcomes, or maybe that was educational outcomes, and then a similar but somewhat smaller number for the other one; I could be flipping those. The point is, they found intelligence was just much more important. And there's never been evidence we can really tweak grit, except with a lot of time and effort, which wouldn't really be scalable anyway.
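[Illustrative aside, not from the conversation: a tiny Python sketch of the "correct statistical corrections" Jesse mentions, i.e., correcting an observed correlation for the unreliability of both scales. The reliability and correlation values are made up for illustration, not taken from Duckworth's data.]

```python
# Spearman's correction for attenuation: the correlation between the underlying
# traits, estimated from the observed correlation and each scale's reliability.
def disattenuate(r_observed, reliability_x, reliability_y):
    return r_observed / (reliability_x * reliability_y) ** 0.5

# If grit and conscientiousness each had reliability ~0.80 and correlated ~0.75
# as measured, the latent traits would correlate ~0.94, i.e. nearly the same thing.
r_latent = disattenuate(r_observed=0.75, reliability_x=0.80, reliability_y=0.80)
print(f"estimated latent correlation ≈ {r_latent:.2f}")
```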

SPENCER: Yeah. So the link between grit and conscientiousness, I find really interesting. I guess the way that I would put it is that you've got this Big Five personality trait of conscientiousness, and grit, I would say, is a narrower facet, a subset of conscientiousness. Conscientiousness is a bigger thing, and grit kind of fits inside it as a set of more narrow attributes. It seems to me the best-case scenario would be that those specific sub-facets of conscientiousness that represent grit are better predictors of certain interesting life outcomes than conscientiousness as a whole, right? There might be a reason to narrow down to those if they actually were better predictors. My understanding is that for the perseverance one, maybe there's some evidence of that; for the passion one, there's kind of less evidence. It's a little bit unclear how much extra juice we're getting from narrowing conscientiousness versus just keeping it as conscientiousness.

JESSE: Yeah, that's extremely reasonable. That's all stuff that even the fiercest critics of grit have admitted — that there's this little slice of it that might be useful. But again, we started out with these big claims worthy of big book deals and TED Talks, and grit was this revolutionary new way to understand performance. Then years later, it's like cookie crumbs, basically.

SPENCER: It seems, though, that Angela Duckworth, much to her credit, has taken these criticisms really seriously. At least I've seen her talk about these in a way that I thought was very respectable.

JESSE: She is significantly more honest and humble than some researchers, though I do have some gripes with the way she's presented some of this stuff. There are a couple of grit studies that aren't actually on grit. In one instance I discuss in my book, she just took another scale, called it grit, and considered it a grit study, which I think is actually a little bit dishonest. People can buy my book and chase the footnote for that; it's sort of a long story. But overall, I would put her more as a model of a superstar academic who's more honest about the limitations of her research and what we don't know and so on.

SPENCER: It seems like there's this weird selection effect where it's like, okay, imagine you have 1,000 scientists producing results. Then, some subset of them, let's say 100 of them, are promoting their work, telling journalists about it, overclaiming it, etc. Who do you think's gonna get talked about? Who do you think's gonna become more famous? And it's like, how do we combat that?

JESSE: I don't know, man. I think part of it is that psychology is getting a little less averse to criticism, including criticism from outsiders, and that's helpful. Humans are humans, and we respond to incentives. Who wants to give a TED talk that becomes a punchline 10 years later because everyone realizes you were full of shit? I think that does affect people. I don't want to present this in a punitive or, for lack of a better word, cancel-culture way, but that is how people's behavior improves and gets more honest. Also, the statistical basics of p-hacking and why we might want to care about pre-registration and registered reports: all that stuff is trickling down to 25-year-old psychologists in training way more than it did 10 years ago. So I think there are some signs of hope for all this.

SPENCER: I agree. Talking to younger social psychologists, it's like they were coming of age just as all the stuff was sort of unraveling. When they're doing studies, they're like, I don't want that to happen to me. Imagine you're a researcher who's been publishing using shoddy methods for two decades. That would be really impressive if you're like, yep, a lot of my research was junk, but I'm going to do a good job now. That's commendable and wonderful. But also, we're talking about humans here, right? How many people are really willing to give up on two decades of their own work and say, maybe that was kind of shit?

JESSE: I feel bad for some of these guys, including John Bargh, the researcher at the center of social priming whom I criticize heavily. I think that might be the case with him, that decades of work are not going to end up leaving a mark. It isn't necessarily his fault, although I think he would dispute that characterization. There are a lot of moments in human history where, if you're in the wrong place at the wrong time, you get sort of screwed. I think a lot of late-'90s, early-2000s social psychology is in that category, unfortunately.

SPENCER: I'm a mathematician by background, and I'm really interested in the mathematical aspects of this, among other things. One thing that strikes me is that there needs to be more training in these methods. As you've alluded to, I don't think this is really fraud for the most part. There are some really crazy examples of fraud in social science, but that's not most of what's happening. It's mostly gray-area stuff where people justify it to themselves: "Oh, I can throw out this outlier; I probably shouldn't have included it in the first place. Now my p-value looks nice." But the better trained you are in the methods, the more deeply you understand them, the harder it is to do the bullshit. It's harder to convince yourself, you know?
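To make the "throw out this outlier" point concrete, here is a rough simulation of how that kind of analytic flexibility inflates false positives. It is a generic illustration of p-hacking, not a reconstruction of any study discussed here, and the group sizes and thresholds are arbitrary.

```python
# Simulate many null experiments and show that letting yourself pick between
# "keep all data" and "drop the most extreme point in each group" -- whichever
# looks better -- pushes the false positive rate above the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 5000
n_per_group = 30
alpha = 0.05
false_positives = 0

for _ in range(n_experiments):
    # Both groups come from the SAME distribution, so any "effect" is noise.
    a = rng.normal(0, 1, n_per_group)
    b = rng.normal(0, 1, n_per_group)

    # Analysis 1: the honest test on all the data.
    p_plain = stats.ttest_ind(a, b).pvalue

    # Analysis 2: drop each group's most extreme point, then test again.
    a_trim = np.delete(a, np.argmax(np.abs(a - a.mean())))
    b_trim = np.delete(b, np.argmax(np.abs(b - b.mean())))
    p_trim = stats.ttest_ind(a_trim, b_trim).pvalue

    # The "flexible" researcher reports whichever analysis looks better.
    if min(p_plain, p_trim) < alpha:
        false_positives += 1

print(f"False positive rate with flexible analysis: {false_positives / n_experiments:.3f}")
# Expect a value noticeably above 0.05, even though there is nothing to find.
```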

JESSE: It's sort of similar to junior science reporters who will write a story off a press release about a study rather than reading the study itself. That's just a norm that changes as soon as you get more experienced and start taking your job more seriously. I'm sure there are a million examples in social psychology pertaining to methods and statistics that I don't even know about. But yeah, I think that's exactly right.

SPENCER: I have a more meta question about how you work. You've done a really phenomenal job of uncovering problems with a bunch of things, like the Implicit Association Test and the wonderful article you wrote about that, digging into these issues really deeply and critiquing big-name scientists. Just on a psychological level, how does that feel?

JESSE: I think I'm jaded about authority. I don't really mind that, because when I'm doing that, it's not just me; I'm often surfacing other people's critiques. The IAT was poked and prodded and pulled apart by a small group of very dedicated researchers who really felt that it was overstated. It took them years to really be heard out, and eventually, the test creators conceded some of their major points. If it was just me, maybe I would feel differently about that, but I'm often standing on the shoulders of, if not giants, good-sized scientists.

SPENCER: Got it. So you feel like you're just part of a team that's trying to surface the truth on this?

JESSE: I'm synthesizing a bunch of other research done by people who are often smarter than I am. I sometimes feel qualified to just tweet out, "I'm skeptical of this new paper, here's why." But I would never, without talking to a lot of researchers and reading a lot of stuff, have felt qualified to criticize the IAT.

SPENCER: It feels to me like it takes a certain personality to be willing to do that kind of work, right?

JESSE: Yeah, you have to be a little bit of a jerk. And journalists should be jerks. Because journalists need to be contrarian and slightly confrontational by nature, I think, to be good at our jobs.

SPENCER: There is something to that, right? If a journalist is unwilling to criticize a really powerful person, that's going to be a really big roadblock to doing good journalism, right?

JESSE: Absolutely.

SPENCER: What would you call that personality trait? Is it a little bit of disagreeableness in the Big Five, or is it something else?

JESSE: Some combination of disagreeableness and a certain level of neuroticism: really not wanting to publish stuff that's wrong, and not much caring whether you step on toes to make sure your story is accurate. It's an interesting question; I think it's those two. There are also journalists who are perfectly amiable and friendly and outgoing who do good debunking work, but I think having a little bit of a punk-rock attitude toward authority helps a lot.

SPENCER: One thing I've been thinking about lately is the different roles of different personalities in society. I'm someone who's very high in agreeableness. I just want everyone to be happy. I don't want anyone to be angry. I want people to get along all the time. But we also need people that are just like, forget people being happy. We need to totally dig into this and figure out what's actually true. And if we have to trash people to figure out the truth, the truth is more important, right?

JESSE: You definitely need people like that, especially in journalism. Especially in academia.

SPENCER: Absolutely. I think sometimes academics can be absolutely brutal. At the end of someone's talk, in front of all their peers, someone just criticizes their paper right in front of everyone. That takes a certain personality.

JESSE: That was an interesting subplot, one I don't really write about. Susan Fiske, a legendary social psychologist, referred to some of the people who were being too mean and critical on Twitter or blogs as "methodological terrorists." That captures the old-school, genial approach: write a letter to the editor afterward, be kind, be nice. There's something to that, and I value civility. But I've interviewed researchers who wanted to criticize bad research and ran into so many institutional roadblocks that I would rather live in a world where you can just put up a blog post on Medium saying, "Here's why I think this is wrong. Tell me if you agree."

SPENCER: I've encountered some young researchers who told me that they found problems in the research being done in their lab or by famous people. Sometimes they just can't replicate the findings; they've run it seven times and still can't replicate it, or there are other issues like that. Usually, they just don't tell anyone. Maybe they'll tell a friend, or they'll tell me if I'm chatting with them, but they generally don't tell people. I think this is just an incentive issue. Do you want to be the person who claims a big name's research was flawed? Is that actually good for your career? I don't think it is. In most cases, I think it's bad for your career.

JESSE: One of my first big stories was about the scandal over fake gay-marriage opinion data. The guy who uncovered it, David Broockman, turned out to be a genius; he would have been fine either way. He's intimidatingly smart given how young he is, but he was terrified to criticize this big study, which, in retrospect, is crazy, because doing so won him so much acclaim and accelerated his career. It's a very big deal within those social and professional networks to criticize your peers. I think academia sometimes weaponizes the concept of civility to stymie would-be debunkers.

SPENCER: Right, and if you're going to apply for tenure-track jobs, you have to think about what the tenure committee will make of it if you've critiqued one of the big names in the field. One thing that maybe people don't fully get is that there's enough bad work out there that any given researcher has a decent chance of having some paper that wouldn't replicate. So picture a table with a whole bunch of people sitting at it (this is the tenure committee, right?), and maybe half the people in the room have shaky research of their own, and they really have a stake in not having all this stuff torn down.

JESSE: Yeah, no, I think you're correct about all this. There's not enough disagreement on this podcast, Spencer.

SPENCER: Yeah, I gotta work on that.

JESSE: We're both too high in agreeableness.

SPENCER: Maybe. Any other topics you want to cover before we go?

JESSE: I mean, I would just say, if people find this interesting, sales are a very big deal for a first-time author. So if you are able to, check out The Quick Fix: Why Fad Psychology Can't Cure Our Social Ills. I also have a podcast called Blocked and Reported and a Substack newsletter at jessesingal.substack.com. I am a caricature of a caricature of a journalist, given that I have a newsletter and a podcast. But if you enjoyed this conversation, you might enjoy those.

SPENCER: I think the two caricatures cancel out and make you a journalist again.

JESSE: There we go. I'm just a human again.

SPENCER: Jesse, thanks for coming on.

JESSE: Thank you so much. I thought this was great.

[outro]
