CLEARER THINKING

with Spencer Greenberg
the podcast about ideas that matter

Episode 142: What things in life *shouldn't* we optimize? (with Christie Aschwanden)

January 26, 2023

Why should we not optimize some things in life? Should some things (e.g., interpersonal relationships) be "off-limits" for optimization? How much time spent being unproductive is good for us? What can we learn by paying attention to our moods? Does science make progress and produce knowledge too slowly? Why is research methodology applied so inconsistently, especially in the social sciences?

Christie Aschwanden is author of Good to Go: What the Athlete in All of Us Can Learn From the Strange Science of Recovery, and co-host of Emerging Form, a podcast about the creative process. She's the former lead science writer at FiveThirtyEight and was previously a health columnist for The Washington Post. Her work has appeared in dozens of publications, including Wired, Scientific American, Slate, Smithsonian, Popular Science, New Scientist, Discover, Science, and NPR.org. She is a frequent contributor to The New York Times. She was a National Magazine Award finalist in 2011 and has received journalism fellowships from the Pulitzer Center for Crisis Reporting, the Carter Center, the Santa Fe Institute, and the Greater Good Science Center. Learn more about her at christieaschwanden.com or follow her on Instagram at @cragcrest or on Mastodon at @cragscrest.

JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you joined us today. In this episode, Spencer speaks with Christie Aschwanden about over-optimization, the slowness of science, and incentives in research.

SPENCER: Christie, welcome.

CHRISTIE: So nice to be here.

SPENCER: So I've been reading your writing about science for a really long time, and you're one of the science journalists that I most respect. And so I'm really excited to dig into some of these science questions with you.

CHRISTIE: Oh, thank you so much. It's really kind of you to say that, I appreciate it.

SPENCER: But before we get into science stuff, let's get into a topic that we're probably going to disagree really strongly on, which is, should we stop trying to optimize our lives? And I don't know if you know this, but my blog is literally called “Optimize Everything.”

CHRISTIE: [Laughs] I love it. I love it. I feel like that makes me want to start a blog called, “Stop Optimizing.”

SPENCER: [Laughs] That's a great idea.

CHRISTIE: So it's interesting, I totally understand the idea behind optimization. Part of the reason I've come to where I am about optimizing one's life is having talked to a lot of people, having had a lot of friends, and having tried, in my own life, to optimize things. I think a lot of my feelings on self-optimization came out of my book, Good to Go: What the Athlete in All of Us Can Learn From the Strange Science of Recovery. I dove into all of this research on different hacks and things that athletes can do to try and optimize their recovery. And what I found is that so much of this stuff was basically just a big waste of time, energy, and resources on stuff that had very marginal gains, if any. And it really distracts and takes energy away from things that really do work.

SPENCER: So I'm wondering, so you look into all these different ways to sort of improve one's athleticism or improve one's performance. And I'm wondering to what extent was it that they actually don't work versus it's just sort of unknown, where there's just not good evidence one way or the other?

CHRISTIE: It was a mix of both. There's this old saying that 'absence of evidence isn't evidence of absence.' But at some point, you have to come down on: okay, this stuff isn't working. So there was a little of both. There's stuff where we're just not sure whether it works. But I think the big takeaway for me (and this goes far beyond just these hacks for athletic performance) is that the kinds of interventions people are interested in doing to "optimize" their lives actually have pretty small effects. In other words, whatever benefit any of these things you're doing to hack your life may have is probably fairly small. That doesn't mean that none of this stuff is worth doing. But it means that if you're chasing every last one of them, you're putting your time and energy toward things that will have very small benefits, when, at the same time, you could be putting that energy and attention toward things that really make a big impact on your life. So, for instance, instead of worrying about every single little micronutrient in your diet, make sure that you have an overall balanced diet with some variety. There are some basic, fundamental principles: processed foods are not as healthy as unprocessed ones, fruits and vegetables are good, all these things that we already know. But instead, we're always looking for the very best fruit or vegetable to eat, or the one vitamin that's so important. I've done a lot of writing about nutrition, and most of the studies on it are actually of very poor quality. Intuitively, we know that what we put in our bodies is really, really important; there's no question about this. But we're looking for some sort of secret or magic bullet, the one thing we can do that will have a big effect. And this goes back to what I said earlier, which is that most of these effects are pretty small. So it's not the one weird trick that's going to change your life. It's the overall healthy diet. And it's things like getting enough sleep. I wrote a lot about this in my book; I have a whole chapter about sleep. Sleep is one of the most important things you can do for your health. It's one of the most important things you can do for your productivity, whether you're looking to be productive at work, or to perform well as an athlete, or to be more creative and innovative. People get so focused on little metrics that there are all sorts of wearables now where people can measure things. And some of these wearables make outrageous claims. If you talk to the researchers, something you're wearing on your wrist is not going to be able to tell you the quality of your sleep. So I think one of the real dangers here is that we outsource: we look to these tricks and hacks to make things better, rather than really focusing on the big-picture things that have much larger effects. Getting enough sleep is much, much more important than fixating on the little metrics and scores that some of these wearables send you. But I think there's a bigger reason why I'm a little bit against trying to optimize everything in your life.
And that is, I believe very strongly that it's really important that each of us has some time in our day when we're not trying to be productive. We're at this moment now where everyone has to be productive at all times. You can't just waste time sitting there; you always have to be doing something. And I think this is just really toxic for mental health, and really not helpful for creativity and productivity. It's important that we have some downtime, some time where you don't have that expectation of being productive. Trying to optimize every moment of your life leaves no room for serendipity. I think it really blunts your ability to be creative. And I think it overestimates our ability to improve our lives with little changes.

SPENCER: So I see you making two big critiques, as I understand them. The first is that people are often looking for the one thing they can do that's gonna change their life, like if they just eat the right micronutrient, these sorts of hacks. The second is about the importance of spending time where you're not trying to optimize, where you're just getting downtime, relaxing, restoring, that kind of thing. So let's take those in turn. On the first one, about trying to find just the right technique: there, I could argue that what you're really saying is that you should optimize in an intelligent way, not an unintelligent way. An unintelligent way to optimize would be to devote time to getting just the right micronutrient when you don't have the basics right: you're not sleeping well, you're not eating well, and you're focused on this small detail instead. To me, that's inefficient optimization, because you're not considering how much bang for the buck there is in terms of investment.

CHRISTIE: Yeah, and hearing you say this, I realize that maybe I'm framing my argument incorrectly. Because I'm not arguing against optimizing. What I'm really saying is, you should stop trying to optimize everything in your life. The really important thing is that you choose the right things to optimize; not everything, not every aspect of your life, really needs optimization or will benefit from trying to optimize it. So it's about being really careful and thoughtful: which things do I want to optimize, and which things do I not want to approach in that way?

SPENCER: And so, how do you think about what you should be trying to optimize versus not?

CHRISTIE: I think the things that are worth optimizing are things where, for instance, time is helpful to optimize. But again, you need to leave room. You can optimize your day, but the danger comes when you're trying to optimize every single moment of it. So maybe I decide that I want to optimize the time that I spend at work and on my work tasks, and that can be really helpful. Do I want to optimize the time that I'm spending, say, with my family and loved ones? Well, in a global sense, I want to be sure that I'm spending the kind of time that I want, that I'm getting enough time with them, and that I'm getting what I want out of that time. But I guess there's part of me that bristles against the idea of: do I want to optimize my marriage and optimize the time that I spend with my spouse? That feels a little bit like taking some of the magic away.

SPENCER: Right. So I think you're getting a bit into the second critique, which is, maybe we shouldn't be optimizing all the time; maybe we should just be focusing on what's in front of us. But I think maybe you're adding a third critique in here as well, which is something like: there are some things that are sacred that you shouldn't be trying to optimize. There's something unappealing about even the idea of thinking of them in terms of optimization. Is that right?

CHRISTIE: Yeah, that's right. And I think there's another aspect that we haven't touched on yet, and that is that just because you can measure something doesn't mean it's the right thing to pay attention to. Some of it is that when I think of optimization, so often it's really focused on metrics. I think metrics are great. I'm a total data geek; that's what I'm all about. But sometimes, when we think about optimization from a big-picture perspective, the things we need to do to really achieve those outcomes are not things that can be measured, not things you can quantify. In one sense, my argument is really that we don't need to quantify everything about our lives and try to optimize it from a numbers perspective. Sometimes, if you're trying to make your life better, whether you're trying to be happier or more productive or all of these things, the way to do it is not by paying closer attention to things you can measure in numbers, but by paying attention to how you're feeling, and to qualitative things. I think there's an instinct to put more importance on things that are quantifiable, when sometimes the really important things can't be measured with a number. In my book, I talk about how there's this big rush going on to try and find the perfect metric for quantifying athletic recovery. And it turns out, when I looked very deep and hard into this, there isn't a single metric. There are a lot of different things that can tell you a little bit here and there. But when it comes down to figuring out whether you're rested and recovered and ready to perform again, or recovered from a hard effort or a performance, really the very best measure is just your mood and how you're feeling. And that's a qualitative thing. It's not something you can measure on a watch. It's not something you can put a number on. It's more of a feeling. And I think we tend to give less credence to feelings, and to things that are more qualitative. Sometimes those are the most important things we can pay attention to.

SPENCER: Yeah, it reminds me of this quote: "Not everything that can be counted counts, and not everything that counts can be counted."

CHRISTIE: That's also called the McNamara fallacy: not everything that can be measured is important, or a variation of that. It comes from the Vietnam War.

SPENCER: Yeah, exactly. Okay, going through those different concerns you have about this: you mentioned the metrics one, the things that you can measure, so let's talk about that for a second. My feeling is that with life experiments, often what I'm looking for is, does this help me in such an obvious way that I'm really confident it worked, rather than trying to quantify it really carefully? There are some exceptions to that. For example, if someone starts taking an antidepressant, let's say because they're experiencing depression or anxiety, I actually highly recommend that they take standardized anxiety and depression scales every week, just because it can be hard thinking back to three months ago and being like, "Oh, wait, am I really better? I don't know."
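
[Editor's note: a minimal sketch of the weekly tracking Spencer describes, assuming PHQ-9 depression totals (0 to 27) as the standardized scale. The scores and the 5-point change threshold are illustrative assumptions, not clinical guidance.]

```python
# Hypothetical weekly PHQ-9 totals (0-27); lower is better.
weekly_phq9 = [16, 15, 15, 13, 12, 10, 9, 8]

baseline = sum(weekly_phq9[:2]) / 2  # average of the first two weeks
recent = sum(weekly_phq9[-2:]) / 2   # average of the most recent two weeks
change = baseline - recent

print(f"baseline {baseline:.1f}, recent {recent:.1f}, improvement {change:.1f}")
if change >= 5:    # illustrative threshold for a meaningful change
    print("Clear improvement relative to baseline.")
elif change <= -5:
    print("Scores have worsened; worth raising with a clinician.")
else:
    print("No clear change yet; keep tracking.")
```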

CHRISTIE: Absolutely. I totally agree, yeah.

SPENCER: So there, I would actually suggest people quantify it. But a lot of times with life experiments, what you're really looking for is just a clear win: an effect size large enough that you're not going to mistake it for, "Oh, I don't know, it's borderline." If you're at the level of "I don't know if it worked," then don't do it unless you enjoy it. So, an example of a really quick win: I always had problems with my shoulder. I started doing shoulder stretches, and my shoulders improved. Okay, cool. Now I do them to prevent that problem. Alright, great. That's a clear win, right? Another example: sometimes I feel really groggy in the morning when I wake up, and I find that using one of those bright SAD lamps makes me feel more energetic when I'm groggy. So I'm like, "Oh, okay, I tried that." There's a cost to that; I did buy one of the things. Actually, I think my mom gave it to me. But you try it, and you're like, "Oh, yeah, that does make me feel more energetic. Okay, I'm gonna stick with it." If it doesn't, okay, I'm not gonna use it. So when I think about optimization, that's the kind of thing I'm doing. I'm always looking for an opportunity to try something new. I try to have at least one experiment going in my life at a time, and I'm just looking for: was that a clear win or not?

CHRISTIE: Yeah, I like that approach a lot, Spencer. And I think that we're not speaking so differently here. When I was out at bookstores, I encountered so many people who would come up to me and say they're doing 30 different things (literally, sometimes 30 different things) to try and improve their recovery. They're taking this supplement and that one, and they're doing the massage gun and the squeezy pants and the cryotherapy. They're doing all these things and spending all this money and time and energy on all this stuff. And yet, they have a high-powered job, they're working a lot, and they're not getting enough sleep. So they're spending all this energy, time, and effort on hacks that are supposed to optimize their recovery, when in fact those things, if they're working at all, are adding up to just this tiny little sliver. Meanwhile, they're leaving untouched the things that are much better: getting more sleep, maybe taking a rest day here and there, managing stress. And I think that's really underappreciated in a lot of cases, the extent to which managing the stress in your life can just optimize everything. By managing your stress, you will sleep better, you'll be more productive at work, you'll be more present among friends and family; everything gets better. So that's one thing: optimizing stress, in whatever way helps you do that, versus this approach that's just so endemic right now, which is, "Oh, here's the special one weird trick you need to do, and if you do this, your life will be perfect." That's the kind of optimization mindset that I'm personally against, this idea that once I do these three magic tricks, my life will be perfect.

SPENCER: Yeah, and I agree with you that that's really rare. Actually, sometimes I'll notice someone try some new self-help technique, and they'll be way too excited about it because, a week later, things are much better. And I'm always like, "Oh, no." There's a 90% chance that in a month it's not gonna be working, just because I've seen it happen so many times: you get this initial boost from whatever technique you're trying, and you think it's working, but usually it doesn't last. That being said, every once in a while, I do think someone finds something life-changing. I've seen this occasionally happen with a diet change. A friend of mine had stomach pain for years that she couldn't explain, then switched to a different diet, and she's like, "Oh, my God, I don't have stomach pain. What the hell? This is insane." Another example: I've seen this happen with psychedelics, where someone (and I don't think this usually happens when somebody does psychedelics once, but a friend of mine did psychedelics once) seemed to get a positive, permanent change that lasted for years. But this might be really rare, unlikely stuff.

CHRISTIE: Yeah. And I guess part of what I'm saying is, there are some really basic things. And by basic, I don't mean that they're easy to do. But it's exercise, eating a balanced diet, getting enough sleep, and managing the stress in your life. If you can do those things, your life will be pretty optimal. Getting some social interaction; you know, there are some extra layers you can add on top of that. But it's really those things, and there are different ways to achieve them. I don't think there's a magic way that works for everyone, and I think that's part of the danger, too. One of the big, important things to making your life better is managing stress, but the way I manage stress might be different than the way you manage stress, Spencer. That's really individual. And I think you're right about doing some experimenting. I like that approach a lot, trying different things, because sometimes we get into ruts and don't realize there are different ways of doing things.

SPENCER: Yeah, that makes a lot of sense. To go to the last critique that I think you had, about sacredness, like maybe there are some things we shouldn't try to optimize: I guess I would push back on that as well. For example, in my relationships with my loved ones, I absolutely want to make those better. And I absolutely want to make sure we love each other as much as possible and have as much fun together as we can. However, I will say that I don't want to do that optimization most of the time. In other words, when I'm with them, I want to be present with them and just enjoy their company. But periodically, I do want to step back and say, "How can my relationship with this person be better?" And I will sometimes have conversations with friends like, "Hey, I'm just wondering, what have I done that's upset you? Because I want to make sure I never upset you." Things like that, thinking about opportunities to improve your relationships.

CHRISTIE: Yeah, I like that approach a lot. And I think that's really approaching life from a thoughtful perspective, and always sort of being open-minded about things like, is there a better way to do this? I think that kind of mindset is really healthy and really helpful, and wonderful. I like that a lot. What makes me bristle more is this idea that there's this one thing that I can do, or that we always have to, at every moment, be making sure that exactly the thing that I'm doing right now is the exact perfect, most optimal thing. I think sometimes it's okay to just relax and let things unfold a little bit.

SPENCER: Yeah. In fact, I would argue that even if you're a super optimizer, the vast, vast majority of the time, you should not be optimizing.

CHRISTIE: Exactly. I love that we've sort of circled around to each other. Honestly, after having this discussion, I feel like some of it is that when I think of optimization, I'm thinking of a certain type and variety. In particular, I've heard from so many people, when I go out and talk about my book, who are these serial optimizers, always looking for the next thing. And I think some of it, too, is not finding satisfaction. It feels to me, Spencer, like you and I are on the same page about a lot of this stuff. We were just maybe using different terminology for it.

SPENCER: Absolutely. Yeah. I host these events where I bring together people who strongly disagree on a topic. The way I do that is I have them fill out a sheet of controversial topics, and they say whether they agree, disagree, or are neutral. And then I match them based on, like, one person strongly agrees with something and someone else strongly disagrees, and I put them through this kind of structured disagreement format to try to get them to see each other's perspective and then discuss why they disagree. And the funny thing is that half the time, they don't actually disagree. Like, one of them said they strongly agree and the other said they strongly disagree on the sheet. But once they clarify their terminology and explain their points of view, they're both like, "Yeah, no. Yeah, we agree." [laughs] I think that's what's happening here. What you're calling 'over-optimizing' or 'trying to optimize your life,' I would refer to as 'common mistakes when people are trying to optimize.'

CHRISTIE: Yeah, I think so.

SPENCER: Like focusing on that next hack. "The last 20 hacks didn't work, but this one's gonna solve all my problems," right? Or trying to optimize in the moment, like when you're with a loved one, asking, "How do we make this more optimal?" instead of just being present and enjoying it and then, every two months, asking, "How can I be a better friend? How can I be a better partner?" and so on.

CHRISTIE: Yeah, it's interesting. I'll go give a talk and explain to people that supplements really aren't helpful, that usually you don't know what's in them, et cetera; there are all sorts of reasons not to take supplements. And people will say, "Oh, yeah, yeah, I get you, I hear you. But what about this supplement?" You notice they just don't want to give up. It's this mindset of always seeking that one special thing. That's interesting.

SPENCER: It's almost a form of delusional optimism that's like, “Oh, I'm just going to find that one thing that's going to solve all my problems.” And it's like, nah, probably not.

CHRISTIE: Yeah. But that sort of mindset is so common, and I think the more we can get out of it, the better. One thing I really like about the kind of optimization you're talking about is that it's really open-minded, and it's experimental. You're saying, "I don't know if this is going to work, but I'm gonna try it and really pay attention." And I think one of the things that's really important is just paying attention to what we're paying attention to. It's so easy to pay attention to the wrong thing. One thing that's really common right now, in the moment we're in as a society, is that we preferentially pay attention to things that are measurable, to data, to metrics. That stuff can be really useful, but sometimes it's not. And just because you can measure something doesn't mean the measurement is any good; in many cases with some of these wearable technologies, the measurements themselves are junk. And so making decisions based on that is really a mistake.

SPENCER: I think there's such a huge bias towards things that are produced by devices. I hear this all the time with, let's say, depression. People are like, "Oh, we need to invent a device that measures depression, that can run on your phone and detect whether you're depressed." And I'm like, you know what else you can do? You can just ask someone nine questions. And, actually, that does a really good job. These device-based metrics are often much less reliable. People like them because they seem objective, because it's a machine doing it. But they're not really considering what the false positive rate on that machine is, right?

CHRISTIE: Absolutely. It's an illusion of objectivity, when in fact the biases just aren't apparent.

SPENCER: Another one is galvanic skin response. People use it to measure stress, and people really like it because it sounds really scientific. You're like, "Oh, we showed them the stimuli, and their galvanic skin response changed." There is some stuff you can learn from that. The problem is that your galvanic skin response can change because you stood up too quickly, or you saw an attractive person, or you're anxious, or you had an upsetting thought. It's really nonspecific. Whereas, if you ask someone, "How stressed out did that make you?", okay, that's kind of lame because you just have to ask them, but it may actually be the best we have right now with our given technology.

CHRISTIE: And I would also argue that it's actually really important that you learn to read that for yourself. You don't want to offload it to some device. You want to be able to read your body and your responses, and really check in and know how you're doing without all of these external tools. Those things can be tools; I don't want to completely dismiss them. But one of the things I say a lot when I'm talking about my book is that, for athletes, the most important thing they can do for recovery is learn to read their body: know what it feels like to be fully recovered, know what it feels like to be under-recovered, know what it feels like when your body's saying, "No, I need more rest." That's something these devices can't tell you, and it really does take some time and trial and error.

SPENCER: I think another way to put it is that your body is a measurement device. And the more you pay attention, the more precise it becomes.

CHRISTIE: Absolutely.

SPENCER: I think about this with athletes, where they have to learn to differentiate different types of pain really carefully. That pain of “Oh, shoot, I need to stop moving right now, or I'm gonna damage myself,” versus “Oh, yeah, that hurts like hell, but I can still go another 20 miles.”

CHRISTIE: That's absolutely right. And I like to say, your brain is the ultimate algorithm. It's taking in all of those inputs, all the physiological inputs, all the psychological ones, and giving you... this is why mood is the best measure of recovery status. People think that's really squishy. What, mood? But it really is a sort of sum total of all of those inputs.

[promo]

SPENCER: All right, let's switch topics now and go to one that I think is a favorite of both of ours, which is science. Now, you have this idea that science is inherently slow, and I'm not sure that I agree. But I want to hear your pitch for the idea that science is inherently slow. I'm curious to hear your thoughts.

CHRISTIE: Yeah, I think my argument here is that the general public tends to have a view of science as this thing that turns things into truth, that it's something that produces answers, produces truth. So you do a study, and now you know something. And my argument is that, although it is true that science proceeds with studies and experiments and things like this, really understanding things and establishing truth is a much slower process. No single study will ever give you a definitive answer. Most of the answers we're seeking aren't that simple to begin with, and even seemingly very simple questions can have very complex answers. I think our human tendency is to want to jump on the first result we see. We see this all the time in studies that get a lot of play in the media: there's a novel result that sounds really interesting, so we jump on it. But it turns out that, although that finding may be correct, it's only correct under a set of 15 different circumstances, or whatever (at this temperature, at this time of day, with these types of people, etc.). But we want to generalize, and we tend to overgeneralize things. So my argument here is basically that the hallmark quality of science is that it's provisional. We always need to be open to new evidence. And it's a process. Science is not an answer; it's a process, a process of uncertainty reduction. I think there's a tendency to want to extract more certainty out of studies than we really can. We see this a lot in some of these psychology studies that are now being shown to be irreproducible, because something was true under one set of circumstances, but it can't be repeated because it's dependent on those circumstances. And so the process is slow: you have to keep fiddling around to see under what circumstances something is true. What are the important factors? I think most people don't appreciate how complicated it is to get even a seemingly simple answer.

SPENCER: Yeah. So I think that there are kind of weird exceptions where science goes really quickly. An example might be: Einstein comes up with the theory of special relativity, puts it out in a paper, and it turns out he pushed the whole field forward a great deal. Now, it took a while to be sure he was right. People had to do careful studies, and you really needed multiple different teams to do these experiments, to confirm it over a period of years. But that is the leap-forward kind of science. And I'm wondering, is that just a weird counterexample that is rare, in your view? Or is that not the sort of science you're talking about?

CHRISTIE: I don't think that's even really a counterexample. Einstein may have put that out very quickly, but it was based on a lot of thought he had given to it, a lot of work. And then there was an incredible amount of work that came after, to confirm it and say, "Okay, we can trust this." And that work has happened. Now we can go back and say, "Okay, he was right, and that first paper was right." But it could have also gone the other way; in this case, it didn't. My point is that it takes a lot of work to confirm things, and it's very rare that you have this one eureka moment where all of a sudden we understand everything. The COVID vaccines are kind of an example of this: those vaccines came together very quickly, but they were based on technology that had been in the works for a long time. Now, being in the pandemic, and the urgency of humans needing those vaccines, helped scientists speed that up a lot more than would have happened otherwise. But it was still a slow process. When you look back and see what made it possible, that was slow, not fast. It came together because of the groundwork that had already been laid over many years.

SPENCER: Right. So it's sort of like there can be this sudden leap forward, in terms of a theory being put out or something like that, but it's still a slow process to validate that theory. With relativity, for instance, I think it took years, maybe many years, for science to be really confident the theory was correct. And even with the COVID vaccine, where the first vaccine was put out remarkably quickly, it still took a huge amount of time to develop the technology that enabled that, and also a lot of time to make sure it was safe and to really validate it. From that point of view, I agree with you. It's not done once the thing is put out there; there's so much more work to be done to make sure it works, that it's safe, that the theory holds up, and so on.

CHRISTIE: Yeah, that's right. It just takes time to really interrogate the question from many angles. The more certain we want to be about something, the more work it requires. You start off with a result, but there's a lot of uncertainty surrounding it. So you look at that uncertainty, and you turn it over, and you test it, and you look at how the effect works under other circumstances. You test that, you get an answer, and then you try again, and then you try under yet other circumstances. It's really a process of playing around at the edges of those uncertainties in order to get to what we might consider established truths.

SPENCER: The way the media is incentivized seems to work in the opposite way, where there's this incentive towards reporting on the newest study, the latest thing that came out, rather than saying, "Okay, well, that's cool. But let's wait three years to see if that holds up."

CHRISTIE: Oh, absolutely. And one thing that's also important to note is that very rarely does the media go back — and scientists, too, let's be clear — and say, "Oh, wait, you know that finding that got a bunch of headlines two years ago? Now we've done a couple of other studies. And, you know, it wasn't wrong, but we thought it applied everywhere, and now we see that it's sort of an edge case, and probably not the norm." Those stories tend not to get done. It's the first, sexiest, most counterintuitive finding that gets all the media attention. But as research continues, usually those effect sizes go down, and you realize, okay, the effect looked really big in the first study, but there are actually some pretty good reasons why initial studies often make the effect look larger than it turns out to be after more research. So you have to be patient if you want to get to more established knowledge. But we're really antsy, and the incentives cut the other way, not just for the media. Everyone wants the clickbait; they want readers, so they want headlines that are attention-grabbing. But scientists and institutions and journals also have a lot of incentives to put out science that's attention-grabbing. There's an entire incentive system that rewards bold, outlandish findings. And even if those initial studies don't hold up quite as they initially looked, people are still rewarded: the scientists, the institutions, the journals, and the journalists.

SPENCER: Now, you mentioned the replication crisis, that a bunch of things aren't replicating, and you also mentioned generalizability. I just want to point out that in order for work to be useful, a series of things has to happen. The first is that the work actually has to be correct: if you were to redo the study on a very similar population, you should be able to get the same answer. And my best guess is that in social science, at least, something like 40% of studies from top journals would not replicate. So you've already lost something like 40% right there. Then, after that, it usually needs to have at least some degree of generalizability, because usually you don't care that it works on this exact type of person in this exact setting; you need some degree of generalizability to make it useful. And that's what makes it science in the sense of finding patterns that you can then apply elsewhere, right?

CHRISTIE: Yeah, that's right. You don't want to know that this only applies to white, middle-class college students who just ate lunch and are sitting under fluorescent lights in a university lab, right?

SPENCER: Exactly, exactly. But then, even if you have those two things, I would add a third one, which is this sort of importance criterion: yes, it can be reproducible and generalizable, but does it matter? Either because the effect size is so small, or because it's just about something that's totally pointless or irrelevant. And I think a surprisingly small percentage of things make it through all three of these hurdles. I'm curious to hear: how much science do you think is getting through these three hurdles?
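
[Editor's note: a back-of-the-envelope sketch of how those three hurdles compound. The 60% replication rate reflects Spencer's guess that roughly 40% of top-journal social-science studies would fail to replicate; the other two rates are purely illustrative assumptions.]

```python
# What fraction of findings clear all three hurdles, treating the
# hurdles as roughly independent? All rates here are assumptions.
p_replicates = 0.60   # holds up on a very similar population (Spencer's guess)
p_generalizes = 0.50  # extends beyond the original setting (assumed)
p_matters = 0.30      # big enough effect / relevant question (assumed)

p_useful = p_replicates * p_generalizes * p_matters
print(f"fraction clearing all three hurdles: {p_useful:.0%}")  # -> 9%
```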

CHRISTIE: Well, it depends on what field of science you're talking about. In social science, that may be different than in, say, life science or physical science. It's very specific to the field of study. One thing that I've noticed (I write about a lot of different kinds of science) is that each field has its own culture and its own ways of doing things: what kinds of research practices are considered okay, what's acceptable and what's not. And it's interesting that these vary so much between fields. For instance, while I was working on my book, I was reading hundreds, probably up to a thousand, studies on sports performance. What I found is that in that field, it's very, very common to have extremely small sample sizes; on the order of a dozen people is pretty typical. If you know anything about research methodology and statistics, you know that it's almost impossible to get reliable answers from such small studies. But for various reasons, some of them very good reasons, it's just very hard to get large sample sizes for these sorts of studies. And so small studies have become the norm, and for a very long time they were acceptable. But this created a problem, because it means the field is producing a lot of unreliable results. There's a movement underway now to try to change some of this culture and some of the acceptable methodologies. But for a very long time, that was considered okay. And if you look at something like medicine, you could never get a drug approved because you studied it in 12 people. That's just not large enough, and the standards are a lot different.
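
[Editor's note: a minimal Monte Carlo sketch of why a dozen subjects per group is usually too small. It assumes a true "moderate" effect of Cohen's d = 0.5, an illustrative assumption (many sports-science effects are smaller), and counts how often a standard t-test detects it at p < 0.05.]

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, d, trials = 12, 0.5, 10_000

hits = 0
for _ in range(trials):
    control = rng.normal(0.0, 1.0, n)   # no effect in the control group
    treatment = rng.normal(d, 1.0, n)   # true effect of 0.5 standard deviations
    _, p = stats.ttest_ind(treatment, control)
    hits += p < 0.05

print(f"power with n={n} per group: {hits / trials:.0%}")  # roughly 20%
```

In other words, even when a moderate effect really exists, a 12-per-group study detects it only about one time in five, and the "significant" results it does produce tend to overestimate the effect.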

SPENCER: You could get it disapproved with twelve people if, like, three of them died, but you definitely couldn't get it approved.

CHRISTIE: Exactly, yeah. Before the COVID vaccines were approved, they were tested in thousands of people.

SPENCER: Yes. So what other elements of research methodology do you think are really important for people to be more aware of?

CHRISTIE: Well, I think what people need to understand is that the answer you get depends almost completely on how you ask the question. What I mean by this is, the way the study is set up is fundamental to what kind of answer you're going to get. Methodology is kind of boring (I think a lot of people are prone to skipping over it), but it really matters how the question was measured. And there have been some really fascinating studies showing this. There's this really interesting study that came out in 2019. It was basically a giant crowdsourced experiment: they had something like 15,000 subjects and over 200 researchers. The researchers were given hypotheses and asked to test them. Basically, "Here's your hypothesis, go at it, find an answer." The research teams could decide how to test them, and they went about it in all sorts of different ways, using different methodologies. And the answers they got just varied wildly; they were all over the place. For one hypothesis, for instance, seven groups found evidence in favor of it, and six found evidence against it. And this was pretty typical; the results were all over the map. One of the things they were looking into was whether people are aware of having implicit negative stereotypes, and there were a bunch of different ways that teams tested this. Depending on how they tested it, they got different answers. There's also a really great study that came out a couple of years before this one, which looked a bit similar to the crowdsourced experiment. People signed up, and everyone was showing their work, so everyone had an incentive to get the "right answer." In this case, everyone used the same data set. So they weren't even collecting different data; they had the same data set, and they were just analyzing it in whatever way they thought was best. And the answers were all over the place. Now, that said, if you took them together, there seemed to be an answer. The question in this case was: do soccer referees give more red cards to dark-skinned players than to light-skinned ones? They took real data, and each team analyzed it however they saw fit. The answers varied pretty wildly, but if you took all of them together, it looked like, on average, they did, though the effect size wasn't huge. If you looked at individual results, there were a few showing that referees were much more likely to give red cards to dark-skinned players, a very large effect, and there were some that found no effect at all. And this is all using the same data. So the takeaway here is that methodology really matters. It's not just important to try to reproduce individual studies; it's really important to ask the questions we're interested in in different ways, using different methodologies, to see whether the answers stand up. Because otherwise, what you can end up with is answers that are a relic of the way you're measuring things and the way you're asking the question.
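
[Editor's note: a small synthetic illustration of how two defensible analyses of the same data can disagree. The setup, a league-level confounder correlated with skin tone, is an illustrative assumption for the demo, not a claim about the actual red-card dataset.]

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Two leagues: league 1 hands out more red cards overall and (by
# construction) has a higher share of dark-skinned players. There is
# no true skin-tone effect anywhere in this synthetic data.
league = rng.integers(0, 2, n)
dark = rng.random(n) < np.where(league == 1, 0.6, 0.2)
red = rng.random(n) < (0.02 + 0.03 * league)

def rate(mask):
    return red[mask].mean()

# Analysis A: pooled comparison, no adjustment -> apparent skin-tone gap.
print(f"pooled: dark {rate(dark):.3f} vs light {rate(~dark):.3f}")

# Analysis B: compare within each league -> the gap disappears.
for lg in (0, 1):
    in_lg = league == lg
    print(f"league {lg}: dark {rate(dark & in_lg):.3f} "
          f"vs light {rate(~dark & in_lg):.3f}")
```

Both analyses sound defensible, but only one adjusts for the confounder, and they tell opposite stories about the very same rows of data.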

SPENCER: Yeah, I love those papers, and I've looked at both of them carefully. I think it's really interesting to think about why they actually get different answers. My opinion, having gone through them, is that sometimes it's because the question is actually really vague. It might not sound vague, but when you try to operationalize it, you realize there's a vagueness to it. So what's actually happening is that different researchers are answering subtly different questions. And sometimes what's happening is that there may be better ways to test the hypothesis and worse ways, and not everyone's choosing the best way to do it. An example might be, in the red card study, the question of what you should control for. Should you be adjusting for, let's say, what country it's in? Or should you be adjusting for the skin color of the referee? Or, I don't recall exactly what factors were in the dataset, but there's a real question there. There may actually be a better answer, depending on what data is available and the nature of the question; some choices are probably better than others. So I think those are two of the big factors in why people get different results.

CHRISTIE: Right. But I think there's a fundamental issue here that's really important to understand, and that is that to answer any question, first of all, you have to define what that question is. And different people may have different definitions, so you need to make sure you're really talking about the same thing. A lot of times, what happens is someone does a study to answer a question, but it's not really answering the question as most people see it, or it may be very specific. For instance, a lot of psychology studies ask people online to fill out surveys, which may not properly reflect the real world. There are all sorts of definitional decisions you have to make, and ways of thinking about what the best measure of a thing is. With the red card study, it's pretty easy to define: what is a red card, and was one given? But there are a lot of other things we're interested in studying where there isn't an obvious way to do that measurement. For instance, at FiveThirtyEight, we put together an interactive that let people try out p-hacking, which is a way of fishing around in your data until you find a significant p-value. One of the things that interactive lets you see is how we're going to measure, how we're going to answer the question. The research hypothesis was that the political party in power affects the economy. That seems somewhat straightforward, maybe. But immediately you get into the question of, okay, what do you mean by the political party in power? Is it the presidency? Is it Congress? Is it state legislatures? Is it state governors? There are all sorts of ways of answering that. And then, what do you mean by the economy? Is it GDP? Is it inflation? Is it wages? There are all sorts of different ways of measuring these things. And again, how you measure the question really strongly determines what kind of answer you get. So you need to be refining not just the methodology but also your research question. Okay, we want to know this, but we can't answer it super broadly, so how do we refine it? And how do we do it in a way where we can really measure those things and be sure we're measuring the thing we care about? That's another thing I see a lot with some of the studies out there that may be misleading: the things being measured aren't good ways of answering the question, or at least aren't satisfying. And you sort of hinted at this earlier, Spencer.
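
[Editor's note: a small simulation in the spirit of the FiveThirtyEight p-hacking interactive Christie describes. Everything below is pure noise, so every true effect is exactly zero, yet with four definitions of "party in power" and four of "the economy," a large share of simulated research projects can still report a nominally significant correlation.]

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years, n_projects = 80, 1_000
hits = 0

for _ in range(n_projects):
    # Four ways to define "party in power" (say, presidency, Senate,
    # House, governors) and four economic measures (say, GDP, inflation,
    # wages, unemployment). All of it is random noise here.
    party = [rng.integers(0, 2, years) for _ in range(4)]
    economy = [rng.normal(size=years) for _ in range(4)]
    hits += any(stats.pearsonr(p, e)[1] < 0.05
                for p in party for e in economy)

print(f"projects with a 'significant' result: {hits / n_projects:.0%}")
# With 16 near-independent noise tests per project, roughly
# 1 - 0.95**16 (about 56%) find something "publishable."
```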

SPENCER: Yeah. When you read the headline, like, "Political party influences the economy" (or doesn't influence it, or whatever), it's not immediately apparent that there are so many different ways to operationalize that, that there are all these researcher degrees of freedom in making those choices, and that, in fact, the answer might be totally different based on what we choose. I think this is really common. And actually, I'm sure I've mentioned this before on the podcast, but my favorite way to read academic papers is: I read the abstract to decide whether to keep reading, and then I immediately jump to the methods section, because I want to know what they actually did. And then I read the results before I read their interpretation, because they might say that they studied XYZ, but I want to see: what did you actually do? What were the results? Okay, only then do I want to see how you interpreted that.

CHRISTIE: Absolutely. Yeah. And I do the same thing. The methods are the first thing I read after the abstract. It's amazing to me how often the conclusions in the abstract really aren't supported by the data, or they're the just-so story being told to explain the data, which is different from having really proven the point. But yeah, I'm with you. I think it's just so important to know what they actually did. And so, with our economy and politics example, maybe the conclusion shouldn't be stated like, "This party is better or worse for the economy," but, "Having X number of governors in this party is associated with this effect on inflation in those states," or, "Having this political party as president is correlated with this effect on GDP." It'd be very hard to prove causation here, but then at least it's more specific. Instead, so often, our tendency is to want to make it as broad as possible, when in fact the study didn't study that broad question. It studied a very specific question. And in fact, every study studies a very specific question: what happened under these very specific circumstances, using these very specific measures?

[promo]

SPENCER: There's also an incentive for the people writing the paper to make it sound really interesting, right?

CHRISTIE: Oh, yes.

SPENCER: Because they have to get it accepted in a journal, and, essentially, one of the main criteria is: is this an interesting finding? And so, when they're describing what they found, the description is way more interesting than the actual thing itself when you read it in detail.

CHRISTIE: Absolutely. Yeah, that's exactly right.

SPENCER: I remember one paper I was reading on whether people who had experienced trauma in childhood were less likely to explore, like if they have the option to explore. That sounds pretty cool, pretty interesting. Can you guess what the experiment actually was? It's kind of impossible to guess, but that's sort of the point. I'm curious what you think they did.

CHRISTIE: I'd guess it was a questionnaire, probably on Mechanical Turk, that asked, "Did you experience childhood trauma?", probably with no definition of what they mean by that. And then there was a video game where they had different rooms, and they were testing whether people explored more rooms or fewer.

SPENCER: That is an extremely good guess. Yeah, it was an apple-picking video game (it actually seemed like the most boring game ever) where you see a picture of an apple tree and a certain number of apples, and you decide: do you pick an apple, or do you go to the next tree? And the finding was like, "Oh, people with childhood trauma switched trees more often." It's like, okay, that's not as exciting as your description. I'm not saying it has no scientific value or anything, but it's not exactly what you would have assumed (well, maybe it's what you would have assumed), but it's not what most people would have assumed, right?

CHRISTIE: Yeah. I saw a study the other day that was correlating, like, "Oh, if you've lived abroad, you're much more creative and think more openly," or something. And I thought, that sounds interesting; I've lived abroad several times in my life. Then I go and look at the study, and they basically asked people to take tests. I can't even remember now how they measured creativity, but it was something that was just sort of laughable, like, "Do you paint?" (I can't remember.) And then, "Have you ever lived abroad?" It was also interesting because they said, "Oh, but the amount of time spent abroad didn't affect it." So you're like, okay, you're just starting to really parse the data to find something, too. It really didn't seem nearly as interesting once you saw what they actually did.

SPENCER: Yeah, I think it's just a really common phenomenon. When you dig into the details, you're like, "Oh, okay, well, even if this holds up, it's not that interesting," whereas the abstract makes it sound kind of sexy. So before we wrap up, the last thing I want to ask you about is intellectual humility in science.

CHRISTIE: I think intellectual humility, which is basically the understanding that you might be wrong, and holding that thought as you proceed, is so important. And really, it is the scientific ideal. But science is done by human beings, and I think we're just not very good at this. It's hard. It's hard to keep an open mind. It's hard to keep open the possibility that what you know might be wrong, or that you need to update. As science proceeds, it becomes more sure; science is a process of uncertainty reduction, and so you get to a point where you're feeling much more certain than you did initially. And yet, it's important to always have that humility and to understand that there may be exceptions, or there may be things that you don't know. So much of this is really just holding space for that uncertainty, but also identifying, "Okay, here are the things that we're less certain about," and having some humility about that, so that you can be open to learning something new. It's interesting: the things that we're sure about, that's not the interesting stuff when it comes to scientific discovery. Discoveries really take place in the outliers, in the areas of uncertainty. So I think it's really important for us to make peace with uncertainty and understand that it's a fundamental part of the scientific process, that it's not a reason to mistrust science but, in fact, a fundamental part of the process.

SPENCER: Yeah. One of the few times I've read one of your articles and disagreed with you, I felt like you were letting scientists off the hook maybe a little too much by saying that uncertainty is so fundamental to the process. Like, maybe it's not as fundamental if we do science a lot better. You know what I mean? Maybe if we did science a lot better, we could actually be a lot more confident, and we're just not doing as well as we could be. What do you think about that?

CHRISTIE: Oh, we could, certainly. The better our methodology and all that, the more certain we can become. But there are always uncertainties left; science can never eliminate uncertainty. It's interesting: you think of gravity as settled science, and it is, yet there's a lot that is still mysterious about it, a lot we don't know. I think it's just really important to keep that humility about it. And I think that's different from letting scientists off the hook, like, "Well, they're just uncertainties, and that's okay." What scientists need to be held to is being more transparent about those uncertainties, and more forthright about them, as they work to reduce and address them. But it's very important to understand that we can never eliminate uncertainty. It's a red flag when you have some guru, or some scientist, saying, "I can offer you absolute certainty about something." That's a sign that not only are they lacking this humility, but they're not recognizing the uncertainties that are there. Any good scientist will be very quick to point to those things and not oversell their findings. But, again, science is done by human beings, and a lot of human beings lack humility sometimes.

SPENCER: Last question: do you think it's more that people are self-deceiving, or that they're exaggerating and know they're exaggerating?

CHRISTIE: I think we start to believe things. And we have to, right? We get evidence, and we come up with an explanation for it. And the more we do this, the more sure we become about things. And maybe they don't hold up, so you have to leave room for that; you have to be open to it. But I think the tendency is to cling to what we've thought, because we've invested in it. If you're a researcher who's done multiple studies showing a thing, and then it doesn't reproduce, and it turns out it's a relic of the methodology, or there's some other problem with it, that's hard to swallow. It's understandable that you're not going to be excited about overturning that research. But we have to do that.

SPENCER: Christie, thanks so much for coming on. This was really fun.

CHRISTIE: Oh, it was a pleasure talking with you. Thanks for having me.

JOSH: How do you balance exploration, like learning and reading and browsing, versus exploitation?

SPENCER: I assume exploitation here doesn't mean exploiting people for your benefit; it's the computer science term, meaning executing on strategies that you know work well. The classic example would be: suppose you go to a restaurant, and you've been there one other time before, and you had a dish that was delicious. Do you reorder the delicious dish, knowing that it's very likely to be delicious again? Or do you try something new that may be even more delicious? And if you found something more delicious, you could then exploit that later by ordering it in the future. There are a number of variables that influence any given situation and whether you should be doing more exploration versus exploitation. One is that early on, you should be doing more exploration. So, when you're first starting your career, you should be exploring more options. When you're a young person, compared to someone who's a year away from death's door, you should be doing more exploration, because you're going to get to benefit from the exploration longer. That's one factor. Another factor is how dangerous the environment is. In an environment where there's the occasional choice that just kills you or has really, really bad outcomes, exploration has a significant cost. Whereas in a safe environment, where you can explore without incurring a significant chance that something really bad happens, exploration is less costly. So that's another factor. And a third factor has to do with how much upside you think there is in that domain. Suppose you've already found dishes that you think are near the peak of how much you could enjoy a dish; you just can't really enjoy a dish more than that. That's going to push you towards exploiting what you've figured out. Whereas if you think that, hey, maybe there are dishes out there that are ten times better than anything I've ever tried, well, that's going to push you more toward exploration.
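
[Editor's note: a minimal epsilon-greedy sketch of the explore/exploit trade-off Spencer describes, using his restaurant example. The "tastiness" numbers are made up, and epsilon-greedy is just one simple bandit strategy, not the only way to frame the problem.]

```python
import random

random.seed(0)
true_tastiness = [0.6, 0.75, 0.5, 0.9]   # unknown to the diner
estimates = [0.0] * len(true_tastiness)  # running average rating per dish
counts = [0] * len(true_tastiness)
epsilon = 0.1                            # fraction of visits spent exploring

for visit in range(1_000):
    if visit < len(true_tastiness) or random.random() < epsilon:
        dish = random.randrange(len(true_tastiness))  # explore: random dish
    else:
        dish = estimates.index(max(estimates))        # exploit: best so far
    reward = true_tastiness[dish] + random.gauss(0, 0.1)  # noisy enjoyment
    counts[dish] += 1
    estimates[dish] += (reward - estimates[dish]) / counts[dish]

best = estimates.index(max(estimates))
print(f"settled on dish {best}, ordered {counts[best]} of 1000 times")
```

Spencer's first factor (explore more early on) corresponds to starting with a high epsilon and decaying it over time; his second (dangerous environments) corresponds to some arms carrying large negative rewards, which makes random exploration costlier.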
