with Spencer Greenberg
the podcast about ideas that matter

Episode 122: Career science, open science, and inspired science (with Alexa Tullett)

September 15, 2022

How much should we actually trust science? Are registered reports more trustworthy than meta-analyses? How does "inspired" science differ from "open" science? Open science practices may make research more defensible, but do they make it more likely to find truth? Do thresholds (like p < 0.05) represent a kind of black-and-white thinking, since they often come to represent a binary like "yes, this effect is significant" or "no, this effect is not significant"? What is "importance laundering"? Is generalizability more important than replicability? Should retribution be part of our justice system? Are we asking too much of the US Supreme Court? What would an ideal college admissions process look like?

Alexa Tullett is a social psychologist who works at the University of Alabama. Her lab examines scientific, religious, and political beliefs, and the factors that facilitate or impede belief change. Some of her work takes a meta-scientific approach, using psychological methods to study the beliefs and practices of psychological scientists. Learn more about her at, or send her an email at

JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Alexa Tullett about meta-analyses versus registered reports, conceptual replications, and conceptions of open science and inspired science.

SPENCER: Alexa, welcome.

ALEXA: Hey, thanks so much for having me, Spencer.

SPENCER: I'm excited to chat with you because I've been listening to you on your podcasts for a while, talking about lots of fun psychology topics — on Two Psychologists Four Beers, which I guess is your new podcast, and your former podcast, Black Goat. So I'm just really excited to explore these psychology topics and how we can make science better.

ALEXA: Yeah, this is a cool opportunity for both of us to interject in the other person's podcast world and actually respond, as opposed to being a listener who just gets to listen and doesn't actually get to weigh in.

SPENCER: Yeah, exactly. First question I have for you — a really big one — is, how much should we actually trust science?

ALEXA: That's a question that I think about all the time. It's sort of the premise for a class that I teach to my undergraduate students. And I guess the short answer is that, I don't exactly know, but I find myself frustrated. I consider myself a scientist and so, when I come across questions in my day-to-day life that are — at least they're empirical questions, right? — so presumably, they should be answerable by science; whether the answer is out there or not is another question. But I find it shockingly hard to find the answer and feel confident in it. One example that I encountered recently is that we found out that we have poison ivy in our yard. And so we've been looking into ways to remove poison ivy, without getting the rash that comes from poison ivy. And there's a lot of mixed messages out there about how to avoid getting poison ivy. Some people will tell you, “It's a good idea to take a hot shower," and then other people will say, "That's the worst thing you can do. That's gonna spread the oil all over your body. You have to take a cold shower, because for some reason, that's not going to spread the oil over your body." And for questions like this, I find it really difficult to get a clear answer. I think, ideally, you would find some kind of peer-reviewed article online that you find trustworthy. But that in itself is quite a challenging task. And I think it's made slightly easier because of my science background. But as soon as we get out of my area, that becomes more and more difficult. Do you have a level of trust in science? Or do you have an answer to that question?

SPENCER: Yeah, that's a tough one. Well, one thing that I tend to do if I'm thinking about a medical question or health question is — there's this thing, The Cochrane Collaboration, which does these really nice, in-depth meta-analyses on questions. And so one place you can start is, "Okay, have they done a meta-analysis on this?" Unfortunately, in the whole space of possible things you might want to know the answer to, they've only covered a tiny, tiny fraction of them, despite having spent decades working on these meta-analyses. So it's really great when there is one that exists, but for many things, there isn't one. And then my fallback plan is to go to Google Scholar, where you can search a topic and then put in phrases in quotes, like “randomized controlled trial” or “systematic review”. And sometimes that helps to just get a quick answer. But again, there's so many things where there just either isn't enough empirical evidence at all, or where you might get some studies, but it's not that clear that you can trust them, or they have contradictory answers. So I feel like, in the space of all questions, science has answers on surprisingly few of them.

ALEXA: Yeah, I agree. And at my most skeptical — so I think, for the majority of questions that come up in my day-to-day life, I'd be pretty excited to find a meta-analysis. But at my most skeptical, I'm even worried about meta-analyses, because meta-analyses are often combining the results of published studies, and we know that publication bias is a problem. So for instance, another question that I've wondered about recently is whether it's okay to drink coffee when you're pregnant, or something like that. And that's something that there's also a lot of advice out there about; some people say that it's really bad and some people say that it's fine. And so the problem with trying to find a meta-analysis about something like that — which may or may not exist, I'm actually not sure — is that it's probably much more likely that studies that say caffeine has a harmful effect, or that it does matter whether or not you drink caffeine, are more likely to be published. And so those will get combined in the meta-analysis. And I know that some meta-analyses try to do adjustments — estimating how many unpublished studies would have to be out there — to combat this publication bias, but I don't always find those super persuasive. So then it becomes a question of trying to find — my preference would be finding a registered report that addresses a question, and there are just vanishingly few registered reports out there. So it's really hard to find scientific answers to questions.

SPENCER: If I use that example — because I just searched on Cochrane and they actually have a review — it's called “Effects of restricted caffeine intake by mother on fetal, neonatal and pregnancy outcomes”.

ALEXA: What's the answer?

SPENCER: This is what the Cochrane Collaboration says about caffeine, "Authors' conclusions: there is insufficient evidence to confirm or refute the effectiveness of caffeine avoidance on birth weight or other pregnancy outcomes." Very helpful.

ALEXA: Yeah, there you go. [laughs] Well, ironically, I sort of trust that more than if there were some big effects.

SPENCER: It's not useful; it may be trustworthy, but it's not useful. It's also kind of funny how most meta-analyses — almost all, I would say — just end with "we need more evidence." We have maybe a little bit of evidence, but we really need more to figure out the truth. You make a bunch of good points. With a meta-analysis, first of all, are they including all the negative studies, right? Because if those studies were never published, then the person doing the meta-analysis, no matter how good a job they want to do, they're just not going to be able to include them. And then there are correction methods, like trim and fill, but there's some evidence that methods like trim and fill can actually miscorrect, or don't do a good enough job of correcting. It's a fundamentally hard challenge: if you only see part of the distribution, how do you fill in the rest, right? There are approaches that try to do it, but that doesn't mean they work out well. And then you mentioned registered reports. Do you want to just explain what that is and why you would look to those?
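[The mechanics of the publication bias described here are easy to see in a toy simulation — an illustrative sketch added in editing, not something run on the show. Simulate many studies of a truly null effect, "publish" only the ones that cross the significance threshold, and the naive average of the published literature comes out well above zero.]

```python
import random
import statistics

random.seed(0)

def run_study(true_effect=0.0, n=30):
    # Simulate one study: the observed effect is the mean of
    # n noisy observations of the (here, zero) true effect.
    return statistics.mean(random.gauss(true_effect, 1.0) for _ in range(n))

# Run 1,000 studies of an effect that is truly zero.
effects = [run_study() for _ in range(1000)]

# "Publication bias": only studies whose observed effect clears the
# significance threshold (1.96 standard errors) get published.
se = 1.0 / 30 ** 0.5
published = [d for d in effects if d > 1.96 * se]

print(f"true effect:            0.0")
print(f"mean of all studies:    {statistics.mean(effects):+.3f}")
print(f"mean of published only: {statistics.mean(published):+.3f}")
```

The mean across all studies hovers near zero, while the mean of the "published" subset lands around 0.4 — a meta-analyst who only sees the published studies inherits that bias no matter how carefully they combine them.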

ALEXA: Yeah, right. A registered report is actually very similar to how I thought science proceeded before I learned more about how science works in practice. With a registered report, authors will plan a study, they'll write something like the introduction and methods section of a paper. So they'll have a research question in mind — let's say, "What is the effect of caffeine on prenatal development?” or something like that. And then they will submit that methodology — that plan for a study — to a journal, and the journal will vet that planned study and decide basically whether or not the research question is interesting, and whether the methods are appropriate to address the question. If the reviewers and the editor decide that the answer is yes — this is a good study, it's addressing an important question, it's well-designed — then they agree to publish the outcome no matter what happens. So if you find something that's kind of uninteresting, like, "Oh, there's no effect here”, or maybe the effect is the opposite of what you hypothesized, it will get published in the journal regardless. And the reason that that's different from almost all scientific articles that are published is that typically journals — editors and reviewers at journals — are evaluating papers once the results are known, and that leads to things like publication bias, where somebody who gets a paper on their desk that says caffeine has these dire consequences for fetal development, it's like, "Wow, everybody needs to know this. This definitely needs to get published." And somebody who gets this paper that says, "Looks like it doesn't really matter whether you drink coffee or not while you're pregnant,” that's not that interesting, right? So that's why registered reports have this extra degree of trustworthiness to me, because they'll be published no matter what the results are.

SPENCER: Do you actually trust registered reports more than meta-analyses? Because that's what you seemed to be implying.

ALEXA: Yeah, I would say that's definitely true. There are some advantages that meta-analyses have that registered reports do not. I would say, on average, a meta-analysis is dealing with more data than a registered report, depending on the registered report — often it's done by a single lab or something like that. A meta-analysis might include studies from 20 labs or 50 labs, so there is the advantage of more data in a meta-analysis. And then there's also — often in a meta-analysis — there's the advantage and disadvantage of including perhaps multiple designs. So sometimes there will be a conceptual question, and the people who create a meta-analysis will include studies that vary on things that still fall within that concept. Let's say it's a meta-analysis looking at empathy. The people who are doing the meta-analysis might include various ways of measuring empathy. I guess in that way, it could increase the generalizability of the findings. But the reason that I say that I trust registered reports more so than meta-analyses is that, first of all, registered reports have more safeguards against bias. Another feature of registered reports — besides the fact that they'll be published regardless of outcome — is that authors have to stick to their original plan, and that decreases the likelihood of false positive results. Basically, I think that the results in a registered report — even though they might be more circumscribed than the results in a meta-analysis — they're more likely to be unbiased, they're more trustworthy. And the big problem with meta-analyses is their likely bias, and it's really hard to determine the extent of the bias, because we often don't know how to evaluate that.

SPENCER: You're kind of getting at this question of heterogeneity. I like to think about meditation as an example. Imagine you have a meta-analysis on ‘Does meditation help depression?' or something like that. Well, what does it really mean to meditate? I mean, you have everything from using a simple meditation app for two minutes a day to going on a 10-day intensive retreat. And then there's dozens, if not hundreds, of different types of meditation you could be doing. But somehow the meta-analysis wants to draw one conclusion about the full set of these extremely different interventions. And then you also have the fact that a sufficiently poorly implemented intervention will never work, right? If someone teaches meditation extremely poorly, it's not going to help people, but that doesn't mean that meditation doesn't help. And somehow the meta-analysis is averaging over all these different effects with different sizes, different lengths, different quality, and trying to draw some conclusion. And it seems like there's something fundamentally flawed about that idea.

ALEXA: This also comes up in conversations about something that people call conceptual replications. This is the idea that you take a study — let's take your meditation example. Let's say you want to look at the impact of meditation on chronic pain or something like that. So a direct replication would be: there's an existing study out there, they use a particular meditation strategy, they use a particular way of assessing chronic pain, and a direct replication would be doing exactly that again, and seeing if you get similar results to the people who did it originally. A conceptual replication is when you try to capture the same construct, but you change something about the methodology. Perhaps you use a different meditation strategy, or perhaps you measure chronic pain in a different way. Conceptual replications are necessary; we need to do studies in multiple different ways because we usually aren't trying to draw conclusions about one specific way of measuring chronic pain. We usually are interested in chronic pain as a general construct that can be measured in multiple ways. So heterogeneity has its uses, but it also introduces complications. Sometimes people have complaints about conceptual replication being treated as a test of the strength of the original study. Because if you get a conceptual replication that is consistent with the original study, you can say, "Awesome, our effect is so great." And if you get one that's inconsistent, you can still say, "Well, the original effect is real, but it was the changes that are the reason for a different result with the conceptual replication." So you're safe either way; it's not a very tough test of the original idea.

SPENCER: Either way, you feel like your hypothesis can't be refuted. Sometimes listening to your podcast episodes, I feel like you almost have this depressing attitude that social science in particular — we're gonna focus on that — is just so problematic that we sort of have to start from the beginning and be like, "Okay, we don't know what we know, let's just rebuild things from scratch." I mean, maybe I'm interpreting you as being more negative than you really are?

ALEXA: I think that's a fair characterization that I'm pretty skeptical of the claims that are out there and that, in some ways, I feel like there are areas where we could be starting from scratch and that wouldn't be a waste of time. I feel like that can be seen as pessimistic or optimistic, depending on, I guess, your position and your approach or your perspective. I think it's probably pretty depressing if you've been in this field for your entire life, and you feel like you have made a big contribution. And then now, people are responding by saying, "Actually, we should doubt all the claims that are out there." And it's not an optimistic take for the field as it currently stands. But even from that perspective, I think that there should be something encouraging about a field that is growing and changing. I think that social psychology, behavioral sciences are changing quite dramatically and getting a lot better. One thing that I think is exciting that I tell new graduate students, is that there are really interesting questions that people really care about out there that we don't really yet know the answer to. Whereas if you started in the field of social psychology when I did, it felt — to me at least — hard to come up with a really new idea. We would sometimes submit papers to journals and they would say, “Yeah, duh, we already know this.” Even if you were putting your own spin on it, it would be like, “Oh, well, this is just motivated reasoning. We already knew about this.” Or “this is just like stereotyping and prejudice” or something like that. So there was the sense that we already knew a lot. And there is something exciting about being in a field where we decide, okay, maybe we don't actually know that much and there's a lot of stuff still to learn. And there's also something exciting about being in a field that is currently going through a lot of self-reflection, and changing in what I think are really positive directions. 
So I guess that's my optimistic spin.

SPENCER: So it's like a great reset of knowledge and it's like, “Oh, that means there's all these new things to discover, because maybe we didn't have the answers we thought we did before.”

ALEXA: Yeah. But I still think that it's very hard to get to the answers. So it's not an easy path forward either way. (laughs)

SPENCER: Yeah. One thing that worries me is that — while I think a lot of open science practices are really good ideas and really do make science better — I worry that they don't necessarily get to the core of how to actually figure out important truths about humans. Because there's a difference between: let's do things in a way where we don't trick ourselves or don't p-hack, and let's actually make real discoveries on this really-hard-to-understand phenomenon.

ALEXA: Yeah, right. I wanted you to tell me a little bit more about that. So I know that you make this distinction between, let's say, somebody who we would label as an open scientist, and somebody who we would label as an inspired scientist. And so you see these as reflecting different motivations and different approaches and different ultimate goals. And yeah, when I considered these two alternatives that you proposed, I wondered if they weren't one and the same. Obviously, I'm used to thinking about open science as, I don't know, maybe the ideal? And you seem to be suggesting that actually, there's something separate that isn't ideal, so I wanted to know more about that.

SPENCER: Yeah. I'll start at the top of the hierarchy, the way I think about it, which is that there's a traditional way of doing science in order to get publications — that you may call an occupational scientist — and they're optimizing for getting good jobs and staying in the field. They have the credo of “publish or perish”. And we know that in social science, that has led to a bunch of bad practices like p-hacking and using small sample sizes and so on, and kind of created this replication crisis. Then the way I view it is we kind of have a reaction to this, which is what you might think of as being an open scientist. And because it's a reaction to the flaws of being an occupational scientist, it's about shoring up those weaknesses. And so that means things like doing a power calculation before you run your study to make sure it's sufficiently powered, so you can actually detect the effects you're looking for...okay, that's great. Preregister your study so people know that, after the fact, you didn't change what you're actually studying, and you didn't p-hack. So for each of the flaws of occupational science, it's trying to fix that flaw. But I guess the way I would say it is that open science, to me, feels focused on making work defensible. In other words, nobody can criticize my work because I did all the right things. But I think of it as a little bit like doing science with your hands tied behind your back, or like you prove you have nothing up your sleeve because your hands are tied behind your back. But your hands are also tied behind your back. Maybe that actually has negative implications for one's ability to figure out important truths. And so then, I introduced this idea of an inspired scientist which is, to me, sort of the ideal, and it borrows things from open science, but it's not exactly the same. Because it's not about proving to others that you're doing unimpeachable things.
It's about trying to figure out the truth of the way things actually work in reality, and that's what everything is focused on. And the proving to others only comes at the very end when you actually want to convince people but it's not like a fundamental aspect of discovery.

ALEXA: Okay, so I have two questions. The first is, when you say open science, can you say a little bit about what falls into that category? So are you thinking of literally making data open, making materials open, open access publishing, those kinds of things? Or would you include things like preregistration and registered reports and things like that in this sort of open science umbrella?

SPENCER: Yeah, all of the above. I think of it as this whole set of ideas that has grown in response to the replication crisis.

ALEXA: Okay. And then when you describe these hypothetical individuals, are they people who differ depending on their sort of personalities? Do you see these things as defined by different people or different systems?

SPENCER: Well, I think in practice, people have a mix of motivations. But the way I would describe these three different ideas of occupational scientists, open scientist, and inspired scientist is, what are they optimizing for? If you think of it — and of course, people are really a mix — but if you think of it as, you could be optimizing to just get lots of publications and have your career go well. So that's occupational scientists. You could be optimizing for doing unimpeachable work that nobody can criticize, and that would be, I think, open science. Or you could be optimizing for discovering truths about reality, not worrying about how much you're gonna be able to convince others of it, but actually just trying to discover the truth. And that's what I think of as the inspired science motivation.

ALEXA: So I guess I'm not totally clear on the difference between trying to come up with the results that are unimpeachable and come up with results that get us closer to the truth. So those two things seem very intertwined to me. And in particular, the idea that we should be able to convince others of our findings seems really important if our ultimate goal is finding the truth. It's easy to convince ourselves of stuff, we're constantly convincing ourselves of lies. But the step of trying to convince other people that they should believe our results, and that our results are sound — that seems critical to the goal of getting closer to the truth. So actually, when I first was exposed to these ideas of yours, the distinction between an open scientist and inspired scientist, I thought maybe another way to think of it — or the way that at least aligns with the way that I've pictured these things — is to see, okay, traditional science, we know what that is, that's, as you say, people have a mix of motivations; they want to find the truth, that's probably why they're in science. But they also want to keep their jobs and they want to get promoted, and they want to be recognized for their work. And so the traditional scientific system rewards things that are not steps towards the truth. So sensationalist findings and things that are surprising but untrue and things like that. And then I see open science as the mechanism for allowing people to be inspired scientists. Basically open science, I think, puts the systems in place that allows people to focus on finding the truth without worrying about losing their jobs. The best example I can think of is registered reports, where you can design the best study to investigate your question and you don't have to worry about whether you find a positive result that would be the ticket to getting published within a traditional system. 
All you have to do is design the best study possible, convince editors and reviewers that this is a worthwhile project to do, an important question to try to answer. And then you just tell them what you found and that's it. So in that sense, I think that, yeah, I see open science as the path that actually unties people's hands so that they can be inspired scientists. But maybe you see open science as more constraining in some way, than I do.

SPENCER: Yes, that's a great point. And I think you're right, that there are some aspects of open science that actually enable people just to do better science, full stop, because they kind of shift incentives in a positive way. So I do agree with that, but I'm gonna make the argument that these actually are different ways of approaching science. And I'll see if I can convince you of this. One way to think about it would be to consider really extreme examples to try to illustrate the point. So my extreme example for an open scientist would be someone who...let's say they have a finding that they're about to publish, and they know that they're gonna get attacked on all sides, because lots of other people are going to disagree with it. So they're trying to bolster the way they do their work so that it can't be attacked. So maybe they're going to preregister their study. They're going to maybe do sensitivity analyses and show that it holds up even if you analyze it in different ways. They're gonna use a sample size that they can strongly defend through power analysis, and so on. So they're bolstering their work. And that's a positive thing; I'm not saying it's bad. But that's just the mindset they're in. Now, the extreme example I like for inspired scientist is, imagine someone whose child has some rare disease and their child is gonna die in three years if they don't discover how this disease works and then figure out how to cure it. And that mindset is, “We have to figure out how this works. I don't care about people being convinced. I don't care about bolstering my position in a social way. I just care about figuring out how this thing actually works.”

ALEXA: It's funny that you gave that example because, as you were describing the way you see an open scientist, I was thinking, “Exactly, that's exactly what we would want to do,” if we were addressing a really important question like, “Does this medication work? Do these vaccines work?” for instance. You just have to cover every single base. So if I were trying to cure my child of a disease, I would absolutely want to go that path. I would be worried that if I didn't have those checks and balances in place — (if) I didn't have this accountability to proving to others that this is the way that things work, (then) I might decide, okay, there's some initial evidence that this works; I'm desperate, so I'm going to use it. And that seems like definitely not the way to get to the truth. That example is slightly complicated by the timeline. There are cases where — I think that if there's a lot of urgency to a question — there might be instances where you decide to make a decision based on less information, rather than trying to make your conclusion completely unimpeachable. But I think if you want the truth, the open science approach is the way to go. And the more important the question, the more important that you stick to that path.

SPENCER: That's really interesting to me, because I feel like if I had a child who was gonna die in three years if I didn't figure out how their illness worked, I definitely wouldn't be doing things like trying to prove to other scientists that my results were correct. I would just be trying to, as rapidly as possible, iterate to the truth.

JOSH: If you're like me, you'd really like to learn quick practical tips for improving your life or understanding the world. But it's hard to know where to look. And it's easy to be overwhelmed by the flood of blogs, media sites and academic papers. Well, there's good news. Once a week, we send out a newsletter called One Helpful Idea where we distill down a single idea that we think you'll find to be valuable. We know you're busy so each idea is formatted to be read in just 30 seconds. And at the bottom of the newsletter, we also include links to that week's new podcast episodes, which is a great way to keep up with the podcast. And we include in each email, a newly released essay by Spencer. So if you only listen to our podcast, you're missing out on a lot of our content. To sign up for the One Helpful Idea newsletter and start receiving bite-sized ideas once a week, visit

SPENCER: You mentioned the viewpoint of others — and I think that's sort of the key distinction here. Open science to me is about showing to others that your work is true; whereas inspired science to me is about actually figuring out the truth. And they are heavily overlapped; I'm not saying they're totally different. But I do think that they have different orientations because figuring out the actual truth has nothing to do with showing other people your work is sound.

ALEXA: Oh, I think it does.

SPENCER: Oh, how so?

ALEXA: That must be where we disagree. Because I think that as individuals, we're really limited and we have these preconceived notions and biases and expectations. And so science gains a lot of credibility by requiring other people to also be persuaded of your results. I would say that that's the premise on which peer review is built. But also, others have argued this. So like Naomi Oreskes, when she talks about why she thinks people should trust science, one of the things she says is, it's not just one person who's telling you these facts; it's a collection of scientists, it's a collection of people who know a lot about this, and who are thinking about it critically. And so if scientists as a group tell you you should trust something, then it's much more trustworthy than if just one Joe Schmo. So I see that as — the core of any credibility that science has comes from having to convince other people.

SPENCER: So do you view it as just too hard to not self-deceive?


SPENCER: So you think (we're) just a lost cause. [laughs] We're going to self-deceive, so we need to tie our hands behind our back so that we don't accidentally put something up our sleeve without realizing it, even just for ourselves, even if it's not just about convincing others.

ALEXA: Yeah, definitely. I mean, that's a social psychologist talking, right? I spent a lot of time thinking about motivated reasoning and bias. And I'm coming at this from the perspective of, we're not capable of being unbiased as individuals, and so we really need to be accountable to others in order to produce findings that are trustworthy. That goes back to earlier debates — maybe current debates — about p-hacking. When people first started using the term “p-hacking” — which refers to selectively analyzing or reporting your data in order to get a p-value of less than 0.05, the threshold of statistical significance, which is often a requirement for publication, or at least boosts your chances of getting published — I think a lot of people thought, “What a hostile view of other scientists, to think that they're trying to trick us, that they're deliberately lying to us in trying to convince us of results that are obviously untrue.” And I never saw it that way. And I think a lot of the people who introduced us to the notion of p-hacking, and tried to explain why we need to worry about it, explained it as something that is a natural consequence of human nature. So when you do an analysis one way — let's say you have a hypothesis that, again, meditation has some positive benefits for us, it increases wellbeing — and you do the analysis that you had originally planned, and you get a p-value that's, let's say, 0.2. You're like, “Damn,” and then you try the analysis in a different way and you get a p-value that's less than 0.05, and you're like, “Oh, yeah.” It's so easy to convince yourself, “This was the way I should have done the analysis all along.” And so I don't think that it's cynicism.
Well, in a way, it's cynicism and, in a way, it isn't. It's not that I think that scientists are trying to lie to people most of the time. I think it's just really easy to convince ourselves of outcomes that both align with our expectations and are also likely to get us publications and advance our careers and things like that. So yeah, I'm very, very cynical about our ability to be honest with ourselves.

SPENCER: I think this is the crux of our disagreement here. Because, while I totally agree with you — humans are very prone to self-deception — I think that we don't have to deceive ourselves. And I think there are ways to prevent ourselves from deceiving ourselves that are different from showing other scientists that we're doing good work. For example, the way I think one should actually consider p values (if you're trying to figure out the truth about the world) is: if you get a p value of point oh six, you should be like, “Okay, this thing could be explained by sampling error, but probably isn't.” If you get p equals point oh four, you should draw essentially the same conclusion, because point oh four and point oh six are so close together. If you get point oh one, you should be like, “Okay, this is definitely not due to sampling error.” Maybe it could be due to some other weird effect that's not interesting, but it's not due to sampling error. That, to me, is the right way to think about p values; if you're being truth-seeking, that's how you'd think. And if your child's life depended on you getting the right answer, I think that's how you'd think about it. Whereas if you preregister — “We believe there's going to be an effect of A on B” — and then you get p equals point oh six, the answer is, “Oh, we failed to find an effect,” and we throw it away. That is, in my view, the open science way to do it, because you're adhering to an external convention, and I get why that's done. But to me, that's not exactly the truth-seeking thing to do.

ALEXA: I don't necessarily see being married to null hypothesis significance testing and p value cutoffs as part of open science, but I see your point, and I think the idea of preregistration is often intended to be used in that context. But yeah, there's nothing that says you couldn't do a preregistered study with the goal of finding the truth, and say: in this case, the importance of confidence in the results is really high, so we should have a stricter cutoff for deciding that this effect is real. People have advocated for that kind of thing in various ways. There's the idea of justifying your alpha, and there's the idea of setting stricter thresholds for p values when we want to be more confident.

SPENCER: But this is sort of the opposite of that. In this case, it's like — well, actually, p equals 0.06 is some evidence that the thing's not due to sampling error. And treating that as no evidence is something we do as a way of proving things to others, but it's not actually the right way to think about evidence.

ALEXA: Well, yeah, the value of a p value of 0.06 depends, I guess, on the relative costs of a false positive and a false negative. And I think that's what the people who argue that we should justify our alpha — like Daniël Lakens — would say: you should use the alpha that makes sense, given the costs of type one and type two errors.

SPENCER: Yeah, and I guess what I would say is that the idea of using an alpha at all is flawed. A p value just gives you evidence about whether the result is due to random noise. To then dichotomize it — it's above a threshold or below a threshold — is just throwing away information.
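Spencer's point that point oh four and point oh six carry nearly the same evidence can be made concrete by converting two-sided p values back to the test statistics that produced them. A minimal sketch in Python, assuming a z test (`z_from_p` is an illustrative helper, not anything named in the conversation):

```python
from scipy.stats import norm

def z_from_p(p_two_sided):
    """Recover the |z| statistic implied by a two-sided p value."""
    return norm.isf(p_two_sided / 2)

for p in (0.04, 0.05, 0.06, 0.01):
    print(f"p = {p:.2f}  ->  |z| = {z_from_p(p):.3f}")
```

The statistics behind p = 0.04 and p = 0.06 (roughly 2.05 and 1.88) differ by less than ten percent, yet the threshold treats one as a finding and the other as nothing — exactly the discarded information Spencer is describing.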

ALEXA: I've always found the idea of a threshold annoying. It seems almost literally like black and white thinking. We're talking about choosing a number and saying true if below this and false if above this or something like that. So I'm sympathetic to arguments that this is not a very nuanced approach. But I also think that a lot of decisions that we make based on scientific findings are sort of yes or no decisions. So I don't think it's easy to figure out a way to discard thresholds because, yeah, often we want to know, should I drink coffee or not? Should I take a hot shower or should I take a cold shower? And those are yes or no decisions. And so if somebody is like, “Okay, well, here's the Bayes factor, you figure it out.” It's kind of like, just tell me what to do. Is it yes or no? So I do think that thresholds are tricky to get rid of because our decision making is often dichotomous.

SPENCER: Yeah, I think that's a good point. But I would say that the way I view these dichotomies — p is point oh five or not — I think of them as more of a thing you have to do in a social setting. If you're publishing papers, you have to have some standard for what we consider good enough, and I actually think that's totally reasonable. We do need some standard — okay, there's a certain amount you have to show. You can't just say anything goes. You can't just publish anything, no matter how weak the results are. I'm just saying that that's not the same as seeking the truth. When you're seeking the truth, I don't think you should ever dichotomize p values; I think you should just think of a p value as a form of evidence, because I think that's actually the correct mathematical interpretation. However, I think it makes total sense that, in publication, we have some threshold. This is what I'm getting at with the idea that doing things for truth-seeking reasons just leads to somewhat different thinking than doing things when you're trying to demonstrate to others. And so I try to be careful about not mixing those. If we're trying to prove things to others, I think you should follow all the open science practices — preregister, use p value thresholds, maybe even less than point oh five, maybe point oh one. When you're trying to figure out the truth, I think that's another matter, and the way to think is somewhat different.

ALEXA: I think there's another place where you and I are thinking about this differently. When you talk about proving things to others, it seems — maybe I'm wrong — that there's an assumption that you have to prove to others that the effect is real. Whereas I think a lot of these concerns drop away when you think that your task is to prove to others that your findings are trustworthy, which is different. If you're not concerned about whether your effect is positive or negative or null, and your goal is just to convince others that you have really great evidence, then I think some of those problems go away. I agree there are lots of problems if your main goal is to prove to other people that your effect exists — then we have the same kinds of problematic incentives that we do in the traditional system. But if your goal is to prove to others that you have really strong evidence, and that they can trust your result, whether it's positive or null or negative, then that starts to feel very much like the inspired scientist to me, and the approach that I would want to take if I were trying to save somebody's life or investigate a question that had implications for global well-being or something really important like that.

SPENCER: That's an interesting way to think about it. I'm curious how you react to this other case — which comes up for me a lot when I'm running studies — which is that, many times when I'm trying to investigate a phenomenon, I notice that in my first few studies on it, I'm learning a lot about how to do the studies properly. And so often I'll end up throwing away the first two or three, because I can see, having run those, the flaws with that way of studying the phenomenon. And so I might end up with five or 10, or even 15 studies before I feel like I really understand the thing I'm looking at. I almost view it as: you're in a dark room, and you're trying to understand what's in the dark room, and you're shining all these different flashlights at different angles until you begin to understand what's actually there. A specific example of this: we were doing some research on gender and personality. It took us 15 studies before we finished, until we actually felt pretty confident in our conclusions. But if you were to go back and try to publish the first five of those, they were really messy, because they were really just us figuring out: what's the proper way to study this? How do we look at this phenomenon? At the very end, we did do a preregistered confirmatory study with our final conclusions. There were 18 hypotheses that we had already shown in our previous studies, and — bizarrely, shockingly — 18 out of 18 actually succeeded in the preregistered confirmatory study. But that's because we had tested the hell out of them. So by the time we were doing the preregistered thing, that was just to show others. We already kind of knew the answer. We were already quite confident.
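Spencer's 18-out-of-18 result is less shocking once you do the arithmetic: if each preregistered hypothesis is tested independently with success probability equal to its statistical power, the chance that all 18 confirm is power raised to the 18th. A back-of-the-envelope sketch, assuming independence (the power values are illustrative):

```python
# Probability that all k preregistered hypotheses confirm, if each
# succeeds independently with probability equal to its power.
k = 18
for power in (0.80, 0.90, 0.95, 0.99):
    print(f"power {power:.2f}: P({k}/{k} succeed) = {power ** k:.3f}")
```

At 80% power per hypothesis, a clean sweep happens less than 2% of the time; it takes roughly 99% power to make 18 for 18 the expected outcome — consistent with Spencer's point that the hypotheses were already near-certain by the time of the confirmatory study.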

ALEXA: So what I would say is that an open scientist in that situation would say, “Great. So let other people see your crappy studies, and tell them why they're crappy, and tell them why they should trust your most recent study the most.” And to me, that's the best way, again, to get at the truth. So even if you want a more readable paper, there are still ways to accomplish this in the real world. You write up your preregistered study, and then you can still include the data for the studies that you don't like as much — the earlier studies — you can include that online, you can include it in supplementary materials. The problem is when our original studies don't turn out in a way that seems particularly publishable, and we later do studies that end up having a result that does seem more publishable, and we hide the earlier ones. Then, first of all, we could be capitalizing on chance — it makes it more likely that the sixth study is a false positive — and the first five are relevant to evaluating the sum of all of the evidence. But in the scenario that I described, where you allow people to see the first five studies — I'm not saying you didn't learn from the first five studies; I'm sure that you did. And it is possible to do studies that are crappy. You can do a study where your manipulation is bad and it doesn't work. You can do studies where you're measuring something, and it turns out you're measuring it all wrong. And usually we have ways of figuring out whether that's the case. So yeah, it's not to say that those first five studies aren't worse than the more persuasive later studies. It's just that showing those studies to people seems like a better path to the truth than hiding them.
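Alexa's “capitalizing on chance” worry can be quantified with a toy simulation. Under a true null, each hidden study's p value is uniform on (0, 1), so a research line that runs studies until one “works” inflates its false positive rate well past the nominal alpha (a sketch under that assumption; the six-study count follows her sixth-study example):

```python
import random

random.seed(0)

def any_significant(n_studies=6, alpha=0.05):
    # Under the null hypothesis, each study's p value is Uniform(0, 1).
    return any(random.random() < alpha for _ in range(n_studies))

n_sims = 100_000
rate = sum(any_significant() for _ in range(n_sims)) / n_sims
print(f"Chance at least one of 6 null studies hits p < .05: {rate:.3f}")
# Analytically: 1 - 0.95**6, about 0.265 -- five times the nominal 5% rate.
```

Reporting only the one study that crossed the threshold, while the other five sit in a drawer, is exactly how a 5% error rate quietly becomes a 26% error rate.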

SPENCER: Yeah. And I don't think there's anything wrong with showing those studies to people. I guess the point I'm trying to make is that, to me, the most important part of science is those first 14 studies, where you're actually figuring out the truth. And then that last, 15th study, where you do everything really carefully — preregistered, power calculation, the big confirmatory study — that's the icing on the cake, where you show everyone that you didn't screw up the first 14, that you actually figured out the truth. And that is a really important step. But the problem I have is that, to me, that's not actually how you figure out the truth. The way you figure out the truth is the fast, iterative part — you're shining flashlights from lots of different angles, trying to understand this phenomenon in lots of different ways. And then you're like, “Ah, I think I finally understood it.” And then the confirmatory study makes sure you didn't bullshit yourself, and shows others that you indeed didn't bullshit yourself.

ALEXA: Oh, yeah. I see that pathway — doing lots of exploration and then doing a confirmatory study that's based on that exploration — as very consistent with open science. I think that one thing people worry about with open science — and particularly the concept of preregistration — is the misconception that it means we shouldn't explore our data. And I haven't encountered an advocate of preregistration who says you shouldn't explore your data. The only thing that I hear from advocates of preregistration is that you should make clear the difference between exploratory and confirmatory work. So in your case, when you're talking about these 15 studies, the thing that an open scientist would object to — or that I would object to — would be a scenario where, let's say, you present those first 14 studies as also confirmatory, or something like that. But if you're making a distinction between exploratory and confirmatory studies, that's ideal. And I definitely don't want to get rid of the exploratory phase, because it's really important. I'm surprised constantly by the things that we find. And if I were constrained to only follow a preregistered plan and never deviate from it, I do think that would be a big problem. It's just that I don't think anybody is advocating for that. I think almost everyone who advocates for preregistration would say: just make it clear when you're exploring and when you're doing confirmatory work.

SPENCER: I agree with you there. Maybe if we disagree on this, maybe it's just that I think that most of the interesting things happen in the exploratory phase. And the confirmatory is to make sure you didn't trick yourself and to make sure that others can believe your findings. Whereas maybe the emphasis of open science is on that last step and not on the first part.

ALEXA: Yeah, I definitely think that's the perception. And maybe that's because we have spent so much time in the territory of doing exploratory work and presenting it as confirmatory — I would say that is pretty in line with the traditional approach, and in line with p-hacking and things like that — that most people who are advocating for making science more open, or trying to persuade people to preregister, are focusing on the confirmatory stuff. So yeah, maybe that is where the attention gets placed.

SPENCER: On a different topic — but one that's quite related — I've been thinking lately about the replication crisis and how bad findings come about, and whether there are other reasons that false stuff — or let's say, unhelpful stuff — is getting into the literature. And the more I've thought about this, I think there may be an important category of thing that might even be as important as p-hacking, but as far as I know, it doesn't have a name. So I want to run it by you and see how important you think it is, and also whether it has a name that maybe I just don't know about. The way I like to think about this is: imagine you're a social scientist, and you want to get your results published in top journals. What are the different ways you could do this? The first way is you could actually make an interesting or important discovery. If you make an interesting or important discovery, likely you're going to be able to get it published. The second way is you could commit fraud; very few people are willing to do this — maybe only, I don't know, 2% of people — because it's crossing such a moral boundary. So it's not actually that common, but obviously, when it happens, it can be a big problem. The third thing you could do is p-hack. You can use fishy statistics, or run lots of things but only report some of them. And that is a big cause of the replication crisis; a lot of stuff doesn't replicate because people use fishy statistics. And then there's a fourth thing — and this is the thing that, as far as I know, doesn't have a name, but seems to me actually really important. Just for the purpose of this conversation, I'm going to call it importance laundering. Importance laundering is basically: you get a replicable finding — so if someone were to replicate it, it would indeed replicate — but the finding is actually not interesting or important, even though it seems like it is.
And that's why you were able to get published in a top journal. And as I've explored this, I've sort of been looking at what are the different subtypes of importance laundering? How is it that people are able to make something seem important or interesting when it's not? So okay, any reaction so far?

ALEXA: So far, I would completely agree that this is a big problem. And I think it's the source of a lot of misunderstandings about what we can learn from social psychology or psychology more broadly, or the behavioral sciences. So I definitely agree that it's a problem. And you also mentioned subtypes. And it's easy for me to imagine there are different ways. So, it sounds like you're identifying a gap between what evidence is actually showing within a study and what people are claiming. And yeah, there's a lot of ways for those two things to be misaligned. I'm curious to hear more about the different subtypes that you've identified.

SPENCER: So far, I've identified what I would call four subtypes of importance laundering. The first you might call conclusion hacking. Conclusion hacking is where you show a particular thing, x, but you make it seem like you showed something else, x prime. And the difference between x and x prime is kind of subtle, but x prime is actually really interesting if it's true. But you didn't show that. And in fact, the thing x you showed doesn't even strongly imply, or isn't even great evidence for, x prime. An example might be: maybe what you claim to show in your paper is that people who have trauma behave really differently in the real world, but all you've actually shown is that people who have trauma behave differently in this really silly little game that has nothing to do with the real world. And yes, technically, if someone reads carefully, they can figure out that's what you did. But the way you talk about it confuses the reader into thinking you showed something that you didn't actually show.

ALEXA: Okay, so this is conclusion hacking. You gave the example of a study on trauma and how it could affect people's behavior. And I'm taking an example from you: you talked about, hypothetically, a study where you looked at people who had experienced trauma and then assessed their tendency to select strange flavors of juice. And in this example, you imagined the authors would conclude that trauma causes less novelty-seeking. This has a few problems, and I would argue that they do have names already. One problem is that, in the hypothetical study you're describing, you're imagining selecting people who have experienced trauma; I assume you're not imagining a study where people have been assigned to experience trauma in some kind of very sadistic experiment. And then this juice selection task is used as the dependent variable — or, actually, just as a measured variable — and it's used to measure the construct “novelty seeking.” The idea is that if people are adventurous with their juice choices, this is an indication of a broader trait of novelty seeking. And so there are several problems with this study where the authors are claiming trauma causes less novelty seeking. First is a causal inference problem — which you could also think of as an internal validity problem. This is a correlational study, but you're making a causal inference from it, and you're not putting in any of the work to justify that causal inference. You're not controlling for other variables that could very plausibly account for the relationship between measured trauma and juice flavor choices. One that you identified is the possibility that people who have experienced more trauma may be older; those people might, for reasons completely unrelated to their trauma, be less likely to be adventurous when it comes to the juices they choose. So there's a causal inference problem. And there's also, probably, an external validity problem.
In terms of generalizing the results, the authors in this case would be assuming that this juice task generalizes to the broader construct of novelty seeking, and that seems extremely implausible. Novelty seeking is such a broad concept. It's very easy to imagine somebody who is a huge novelty seeker, in the way we would typically think of this as a personality construct, and only likes grapefruit juice or something like that. A third problem, connected to the other two — so maybe it's not a separate problem — is that another term used for this kind of thing is overclaiming: drawing conclusions or making claims that are disconnected from the evidence you've collected. So there are some names floating around out there for the kinds of things you're describing. But you said that you have four subtypes, so I'm not sure if these cover all of them, or how they align with the other subtypes you've identified.
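Alexa's age confound can be simulated directly. In the sketch below (all variable names hypothetical, following her example), age drives both reported trauma and juice adventurousness; the two correlate strongly even though neither causes the other, and the correlation vanishes once age is partialled out:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

age = rng.normal(size=n)
trauma = age + rng.normal(size=n)      # older people have had more time to accumulate trauma
novelty = -age + rng.normal(size=n)    # older people pick safer juices, for unrelated reasons

raw = np.corrcoef(trauma, novelty)[0, 1]

# Partial correlation: regress age out of both variables, then correlate the residuals.
t_res = trauma - age * (trauma @ age) / (age @ age)
n_res = novelty - age * (novelty @ age) / (age @ age)
partial = np.corrcoef(t_res, n_res)[0, 1]

print(f"raw r = {raw:.2f}, age-controlled r = {partial:.2f}")
```

The raw correlation comes out near -0.5, while controlling for age drops it to roughly zero — the “internal validity problem” in code form.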

SPENCER: You made a lot of good points there. Each of the specific ways that you could try to achieve this conclusion hacking probably corresponds to some idea that we already have about doing things in a fishy way — there could be a generalizability problem, a validity problem, or a causal inference problem. So those are different ways you can conclusion hack, I guess — some subtypes, perhaps. So then the second category that I've identified you might call novelty hacking, where essentially what you're doing is making a result seem novel, even though it's actually either common sense or already an established fact in the scientific literature. And I don't actually have an opinion on this, but an example some people have pointed to would be the concept of grit.

ALEXA: Yeah, that was the first thing that came to mind.

SPENCER: Really? Some people claim it's not different from conscientiousness. And it's well known that conscientiousness predicts a bunch of things like performance at work. And then there's this construct of grit. And some people say it's a subtype of conscientiousness. But does it really predict better than conscientiousness? And this is kind of debated.

ALEXA: Yeah, exactly. I think this also happens a lot, where people will basically come up with a new construct, or at least a new name for a construct, and try to distinguish it from existing constructs, even when that distinction is pretty tenuous. The way they try to persuade editors and reviewers to publish a paper is to say, “Oh, we've never examined this phenomenon before.” But actually, it's just a renaming or repackaging of something that we have examined before. An adjacent example, I think, is the idea of ego depletion. And actually, ego depletion...

SPENCER: I literally have on my list for this category as a possibility because some people have claimed that it's...

ALEXA: There are a couple of ways to think of ego depletion. One way is like a really boring way. And I think that the original authors would say they were very careful not to define ego depletion in this way, which is to just say that people get tired. And that's pretty intuitive...

SPENCER: People get tired? That's shocking!

ALEXA: (laughs) I don't think that you would get a paper published if you said: if you're solving mindless puzzles for hours and hours, people start to get lazy and bored. So knowing this, the original authors were careful to make clear that ego depletion is not this. The distinction, I think, is that it shouldn't just be any task; it has to be a task that requires self-regulation or something like that. But then when you get into these distinctions and get more strict about what ego depletion is, I think the effects are harder to find. And another thing that I think is connected to your idea: sometimes even an individual paper will make both kinds of statements. They'll say ego depletion isn't about getting tired, but then they'll use intuitive examples in the introduction or the conclusion — things that drive home the point and make it really relatable to the reader — that are probably about getting tired. And so there's this sort of doublespeak, I guess.

SPENCER: Yeah, totally. And if we think about the technical definition of ego depletion — the idea that self-control or willpower gets used up as you use it — it's actually pretty hard to distinguish that from being tired. What exactly is the difference? And you could see that maybe some studies — if they're not differentiating cleanly enough — might find the effect, but we can't tell whether it's just tiredness. And other studies may fail to replicate. If you actually go through the literature, can you really be sure that it's not tiredness?

ALEXA: Yeah, it gets harder to find the effects when you make it more clearly distinguished from tiredness.

SPENCER: So then the third subcategory I call usefulness hacking. That's when you have a real result — it would replicate — but the effect size is really small and not of clinical significance. And you just don't mention that, or you only talk about how statistically significant it is, or something. But in practice, there's no point to this finding. I'll give you a hypothetical example. Say you're trying to look at what actually causes the government to do different things. Is it more caused by what people want, or by what corporations want? So you study whether what corporations want is a better predictor of what politicians do than what people want. And you find — oh, yeah, look — what companies want actually is a better predictor. But it turns out that your predictive power is almost nothing, so the real conclusion of the study should be: we were unable to predict what laws will get passed. Instead, they conclude that what determines what politicians do is what corporations want, not what people want.
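Spencer's usefulness hacking is easy to construct numerically: with a big enough sample, a trivially small correlation clears conventional significance while explaining almost none of the variance. A sketch with scipy (the numbers are illustrative, not from any real study):

```python
from math import sqrt
from scipy.stats import t as t_dist

def corr_significance(r, n):
    """Two-sided p value and variance explained for a sample correlation r with n observations."""
    t = r * sqrt((n - 2) / (1 - r ** 2))
    p = 2 * t_dist.sf(abs(t), df=n - 2)
    return p, r ** 2

p, r2 = corr_significance(r=0.01, n=100_000)
print(f"p = {p:.4f}, variance explained = {r2:.2%}")
```

Here an r of 0.01 is “significant” at around p = .002, yet it accounts for one hundredth of one percent of the variance — a statistically real but practically useless predictor.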

ALEXA: Right. This example is interesting to me — or this subtype, usefulness hacking — because it overlaps with something that you and I were talking about recently, which is the usefulness of nudging research. And there are various criticisms of nudging. One of the criticisms is that, even when you magnify the effects, it's still not that impactful; it's not making that much of a difference. Nudging being the idea that we can implement these small individual-level changes — for example, we can reduce our carbon footprints, we can take shorter showers, we can recycle — and that, because there are many people on the planet, these kinds of small effects will be magnified and will accumulate into large effects, and we're going to save the planet this way, or something like that. And critics have suggested that this magnification process is not actually adding up the way people are claiming. And this has been interesting for me in terms of conversations about effect sizes, and how seriously we should take small effects. There are some people who would argue that small effects are still important; obviously, the real answer has to be that it depends on the situation. But I think sometimes we fall back on this idea that this effect is going to be magnified many times — like ego depletion, for instance: if this is happening constantly throughout the day, then even a small ego depletion effect measured in the lab is going to have big consequences in someone's life. Or if we magnify this effect of recycling behavior across all the people on the globe, then it's going to have a huge effect. And I guess the problem with that is that we're often not actually testing whether this magnification is happening in the way that we're assuming.
And so, I do think that you're right, that people get away with these kinds of claims, these assumptions that a small effect will accumulate into a big one, without actually having to show it. And there are probably many cases where the small effect actually just gets completely washed away by other factors.


SPENCER: I like your point about how what a small effect means depends so much on context. For example, assume that we find a real effect but it's small. So it's not zero; it's real but small. If this were an effect of psychic powers, that would be paradigm-changing, right? It would show that psychic powers actually exist. Holy crap, that would change our ontology about the world! Or let's say it's a small effect, but it's for life and death — like a 3% lower chance of dying. Well, that's actually worth a lot. Or it's something just incredibly low-cost; you can do it in one minute of the day. Okay, it has a small effect, but it costs almost nothing to do, so that's pretty cool. But I think a lot of these things don't have those properties — they're not paradigm-shifting, or life and death, or that low-cost. So then it's like, “Oh, no, we actually have to consider the effect size in order to decide if this is something we should care about.”

ALEXA: Yeah, 100%. The examples that you gave are really interesting, because that was one strategy people used to defend small effects: to say, “Well, imagine how impressed you would be if I had a small but reliable effect showing that I could levitate, or something like that.” And, of course, that would be very impressive. But not all effects work that way. Some effects are really unimpressive if they're really small, especially if we're talking about predicting behavior in scenarios where there are tons of other factors that are going to influence people's behavior. So it could be the case that even a small effect becomes basically non-existent once you introduce other factors.

SPENCER: Yeah, absolutely. And I think in practice, I also tend to have a lot of skepticism about small effects, because they tend to be very easy to get by accident. You make some really subtle mistake in your experimental design and you get a small effect that actually has nothing to do with your main hypothesis. So on top of all the things we've been talking about, there's the fact that they just tend to be less reliable; the difference between a small effect and zero effect is not that big. The fourth category is what I call beauty hacking. This is where you get a result that's actually pretty complicated and contradictory and mixed, but you make it seem like it came out to be a really nice, elegant story. You do storytelling around it, or maybe you don't really mention the contradictory results, or you put them in the appendix or whatever. And so it comes out as this nice, clean, pretty story that people find exciting.

ALEXA: Yeah, right. Again, I think that's something that happens. One of the things that distinguishes — not the only thing, of course — people who get a lot of stuff published, or have really good luck publishing, from people who don't is, I think, writing skill. Some people are really persuasive writers, and they can talk their way out of an unpersuasive finding, or they're skilled at creating bridges between the very impressive claim that's really surprising and is going to get a lot of attention, and what was actually done in the paper, which is often much less sensational or surprising. And that's something a couple of people have talked about. I know Roger Giner-Sorolla wrote a paper about this — I forget exactly what phrase he used — but it was the idea of beautifying your paper and making it aesthetically appealing as a path to persuasion. And I think others have talked about that as well. But again, it seems connected to the others, because sometimes writing skill is the tool that helps you draw unwarranted generalizations, or gloss over a measure that lacks validity, or disguise shoddy causal inference, or something like that.

SPENCER: It's really interesting to hear your feedback on these different subtopics and this is something I'm still thinking about. But it does seem to me that just focusing on replication is not enough, because something could replicate but — for the reasons we've just been talking about — still not be interesting or important. And that could still kind of clutter up the literature.

ALEXA: Tal Yarkoni wrote a paper called "The Generalizability Crisis," and I think that was really his main point: we're so obsessed with replicability right now, and maybe the bigger problem is generalizability. If we only care about replicability, then we're also not going to be doing what we want to be doing as social psychologists and as scientists, which is discovering truths that matter about human behavior. So exactly as you described, you could be coming up with these effects that are really reliable — people get them every time, the replicability rate is really high — but they're completely uninteresting, because potentially the measures are invalid, or you would have to generalize way beyond the findings in order to draw any conclusions that matter to people. So I think he would claim either that that's a bigger problem than replicability, or at least that we can't ignore it and only focus on replicability.

SPENCER: Yeah, I think generalizability is really important. I view it as one of the subsets of importance laundering because — yes, you can do something that doesn't actually generalize — but there are also these other methods we talked about of producing an effect that isn't interesting and doesn't actually matter. And it's not just about generalizability; it's because you beauty hack it, or usefulness hack it, or novelty hack it, and so on.

ALEXA: Yeah. I think also, the way that he uses generalizability would include things like low validity of measures, which encompasses a lot of problems in papers. One way to think of it could be: there's constructing the study, and there are flaws you can introduce in doing that. And then there's the way that you connect your results to your verbal claims, and you can also mess that process up.

SPENCER: Absolutely. All right. Before we wrap up, I want to do a quick fire round with you. I'll just ask you a bunch of questions and get your quick thoughts on them.

ALEXA: Okay, sounds great.

SPENCER: So first question. Should retribution be part of our justice system?

ALEXA: I think the answer to that, for me, is no. Our justice system right now, I think, has both retributive goals — punishing people in proportion to what we think they've done — and also consequentialist goals, like trying to keep people safe, for instance. And I think that retribution as a goal is flawed, because it depends on us being able to evaluate people's blameworthiness. And I see this as something that we have relied on intuition for, and we've relied on the social sciences for, but I think it's something that is essentially impossible to assess. And we try to assess a lot of things that are almost impossible to assess, but in this case, the cost of being wrong is so high. The justice system has, as one of its core principles, "innocent until proven guilty." So if we can't determine for sure that somebody is blameworthy, then I think we should err on the other side and abandon the goal of retribution while still maintaining some of the consequentialist goals of the justice system. So still trying to keep people safe, but not trying to punish people in proportion to how much punishment they supposedly deserve.

SPENCER: So another question for you. Are we asking too much of the Supreme Court?

ALEXA: A topic that you brought up earlier is this idea of whether we can keep our own values out of doing science. And I said that I'm quite skeptical about our ability to do that, to keep our own self-interest and biases out of our work. And you seemed more optimistic about that; you implied that, yeah, maybe we are actually capable of doing that if we try. But, because I'm skeptical of that for scientists, I'm also very skeptical of that for Supreme Court justices. And oddly, we seem to have a similar structure around those two kinds of positions. For me, the concern with assuming that scientists can be objective is that we give them too much power. They can make these claims that then get treated as ground truth because they're scientists and they're supposedly objective, which obviously is problematic if you don't believe that scientists can be objective. And I have the same concerns for Supreme Court justices. I think that they have an immense amount of power, and these are ostensibly positions where individuals are supposed to be objective and not let their values or their politics influence their decisions. But I think there's recent evidence from polls and things like that showing that the general public, the American public, is increasingly skeptical that that's even possible, although they think that Supreme Court justices are supposed to be doing that. That suggests that this sort of position, as it's constructed, might not actually be possible to execute in the way that we, as a country, think it should be executed.

SPENCER: All right, next question. What would an ideal college admissions process look like? This is very timely, because people are debating, should you get rid of standardized tests like the SATs and so on?

ALEXA: Again, this is something where I feel like first you have to ask: what is education for, and who are we trying to select? If education is for teaching people things, then maybe an ideal selection process would involve somehow trying to identify the people who will benefit the most from education. The selection process that we have now, I think, tries to choose the people who will be most successful in college by standard evaluations — who will get the best grades, or who will do the best on standardized tests and things like that. I think it's possible to imagine a college admissions system that doesn't do that, and instead tries to evaluate who's going to benefit the most, or who wants to be there the most. But I think you could even argue that it's so hard to select people that you could just have people meet some kind of threshold — will you benefit from college, or something like that — and then randomly select among them. People have employed similar kinds of selection processes for grant submissions and things like that. Yeah, it's an unsatisfying answer, because I'm basically like, here's a bunch of different possibilities, but I definitely don't know the answer. So that's where I'll end.

SPENCER: That's a cool concept. One of the funny things about it: if you take this cynical view that a significant part of college is just credentialism, then if we switched it in the way you're describing — admit the people who are gonna improve the most or benefit the most or something — it might actually hurt the credentialism aspect. Which, I don't know if that's a good thing or bad thing, but it's interesting to ponder.

ALEXA: Yeah, I agree that it would undermine the credentialism system, which I think is something that seems sort of appealing to me.

SPENCER: Okay. Next question. Do you like pets? And if not, what does that say about you?

ALEXA: (laughs) Um, I'm not the biggest...

SPENCER: The listeners really liked you up until this question.

ALEXA: Yeah, I know, this always does me in. I meet new people, and they're like, "Oh, this girl seems friendly." And then pets come up and I'm like, "I'm not that interested in pets; I don't really understand why people are so into dogs and stuff like that." And people have a really negative reaction. They're like, "Wow, you're a completely different person than I thought you were; you might be a psychopath." I was thinking about this the other day because I saw somebody drive by my house in an SUV and there was a poodle sitting in the passenger seat. And I was like, "Man, I do not understand people's relationships with their dogs." This poodle is basically a human being to this person. And I just don't understand it. My partner and I have started toying with the idea — I'm also not a cat person, so it's not like I can appeal to one side of the pet-loving world — that we could say that we're bird lovers. And that's the justification for not really liking cats, but it still makes us seem like we like living things. But we haven't really tested that out yet.

SPENCER: But you don't? Is that true, though? Do you actually like birds?

ALEXA: Not really. I don't know. I find birds intriguing, but I'm also kind of scared of them.

SPENCER: This is pretty funny, because I actually had another podcast guest who described in the podcast how their closest connection to anyone in their whole life was their birds.

ALEXA: Oh, wow.

SPENCER: And they actually attempt to give the sounds that their birds make in different situations on the podcast. So just kind of the complete opposite, actually, of what you're describing. Do you have a sense of what you're not getting from pets that other people are? Or is it just kind of mysterious to you?

ALEXA: I had a roommate who had pets, and I became attached to them. So in some way, I see how the emotional connection can become strong. But the costs to me are really high — having to be home, having to plan your travel around pets and stuff like that, having to take them for walks when it's cold. So it's the balance of those things that I don't understand.

SPENCER: Okay, but if it didn't have the cost, if it was somehow someone else was walking the dog and the dog was always clean and never got sick, would you be like, “Yeah, that sounds great?”

ALEXA: Maybe I'm giving myself too much credit. I also think that I find things that other people find charming and cute, pretty annoying. (both laugh) So yeah, there's the whole truth.

SPENCER: You know what I think it is? There are a lot of different reasons to like animals. Take a dog, right? Dogs often give you unconditional positive regard. People like to cuddle with their dog. It makes them happy to see their dog do cute and funny things. So maybe each of these things is just less appealing to you; you just don't get as much joy from them as other people do.

ALEXA: Yeah, or maybe I don't trust dogs. I'm like, “Oh, you love everyone.”

SPENCER: You're like, “It's not real.”

ALEXA: Or you just love me because I feed you. It doesn't feel earned.

SPENCER: Ohh, got it. Well, then now I think we can refute that. Because if that was it, then you'd like cats.

ALEXA: Fair. Yep. Just cold-hearted.

SPENCER: All right, a couple more questions before we wrap up. So you've now done tons of podcast episodes for your two different podcasts. And I'm curious, what have you learned about how to interview people that you feel has made you better over time?

ALEXA: That's a great question. I've done less interviewing in the podcasts that I've been involved in than I would like — I was looking at your list of guests and feeling jealous that you get to have all these great conversations with people. Maybe what I've learned is that people have a lot to say, and so my tendency initially was to try to structure the conversation pretty heavily. But I found that that's not always necessary, and that people have a lot that they want to talk about, and sometimes letting that flow organically can be more interesting. That's something that I've also been learning as a teacher. Zoom disrupted the way that I taught discussion-based classes, where I started doing this 'raise your hand when you want to speak' thing. And then there was no need for me to speak, and students would just follow one after the other. And I was like, "Wow, this conversation has become so interesting," and I'm not involved at all. So taking myself out of it, I guess, has been something that I've done more of. Do less, basically.

SPENCER: Try not to control things so much, just let ‘em fly.

ALEXA: What would your answer to that be? You've done lots of interviewing.

SPENCER: I think I have realized that you have to pay really close attention to the things people say, and there'll be moments where they say something that's really important or really interesting, and you can't let it slip by. Because it might just flow by in the conversation too quickly, but you gotta notice, "Oh, wow, that's a really meaty thing. Let me jump into that," and ask the follow-up questions. I think I used to miss more of those opportunities. Now I'm getting better at spotting when something really interesting is coming up.

ALEXA: Yeah, that's one thing that I have definitely learned. It's rare that I'll listen to an episode of either of the podcasts that I've been involved in from start to finish, because I hate it; I really hate listening to myself talk. But the times that I have done that have given me a lot of insight into what is happening in my mind when I'm having a conversation with other people. First of all, I'll miss things that people said and only catch them later — this is to your point about having to pay a lot of attention. When I'm listening to the full episode, I'm like, "Oh, wow, what they said there was really interesting," and I missed it because I was trying to think about what I would say or something like that. So you can learn something about how good of a listener you are. And also, there's a big discrepancy for me between the thoughts that I have and the things that I actually verbalize, and yet we don't always have the chance to parse those things apart. So it's really interesting to record a conversation that you have with people and then listen to it back and see the difference in the experiences. But that's not necessarily learning to be a better podcaster; it might just be learning about yourself as a human being.

SPENCER: During your podcast, you often have thoughts about the topic that you just don't say? You kind of censor yourself?

ALEXA: That's probably true. I don't know if I experienced it as censorship or just, “I can't figure out how to say this exactly.” Or maybe I'll think of something and then somebody else will be talking and then we won't come back to it or something like that. It's fairly rare that I'm like, ”Oh, I definitely shouldn't say that, because that would be bad” or something.

SPENCER: All right, last question for you, because we've said some demoralizing things about social science. What are you really excited about with regard to social science? What motivates you?

ALEXA: I think that the thing that is most motivating to me right now in social science is that, while I think it's very difficult to come to conclusive answers about the important questions that we have, I do believe in the process that we're using. I don't think that science is the only way to get to knowledge by any means, but I think that it's one way that's valuable, and it has strengths that other methods don't. And so I see a lot of value in learning about that process, and especially in mentoring students to develop that approach to thinking. So yeah, I guess the scientific method is exciting to me, and teaching people and debating with them about the strengths of the scientific method and what it can tell us — those are, I think, really important conversations for us to be having.

SPENCER: I totally agree. That resonates with me a lot. The scientific method is actually so powerful, even though our human implementation of it is often so flawed. But if we can go back to basics and leverage it properly, there's actually tremendous potential.

ALEXA: I hope so.

SPENCER: Thanks so much for coming on. Alexa, this was great.

ALEXA: Thank you, Spencer. I appreciate it.


JOSH: A listener asks, "What impact do you hope the podcast will have? And can you tell if it's started having that impact yet?"

SPENCER: My hope is that the podcast gets people thinking about these ideas that really matter more than they would otherwise, but also that it gets them to question ideas that they already believe, to try to come up with truer beliefs, to challenge what they think, and hopefully to just spread more good ideas and to kind of downgrade bad ideas in people's minds. In terms of whether it's having an impact, I do get listeners who write in telling me that it changed their mind in different ways or impacted them in different ways. So that's always great to hear. I don't have a formal way to track it. It would be cool if I had such a way.





Host / Director
Spencer Greenberg

Producer
Josh Castle

Audio Engineer
Ryan Kessler

Uri Bram


Miles Kestran

Lee Rosevere
Josh Woodward
Broke for Free
Quiet Music for Tiny Robots

Please note that Clearer Thinking, GuidedTrack, Mind Ease, Positly, and UpLift are all affiliated with this podcast.