CLEARER THINKING

with Spencer Greenberg
the podcast about ideas that matter

Episode 245: Could the placebo effect be bullshit? (with Literal Banana)


January 16, 2025

Is the placebo effect bullshit? Are "open-label" placebos just as effective as "closed-label" placebos? How do placebos differ from dummies? Is the placebo effect just a kind of scientific-sounding "woo"? How does social priming differ from word priming? Why is it important in research to have both placebo and no-treatment groups? What is the Hawthorne effect? What is the John Henry effect? When is it useful to express effect sizes using Cohen's d? If there's not a placebo effect, then what's really going on in cases where it seems like there is one? Is meditation a kind of placebo treatment for mental states? How can researchers believe that people's mental states are important and yet that the placebo effect doesn't exist? What is stress-induced analgesia? Does the nocebo effect (if it exists) provide reason to think that the placebo effect exists? Where do psychosomatic effects fit into this picture? What have animal studies found about the placebo effect?

Literal Banana is literally a banana who became interested in human social science through trying to live among humans. After escaping from a high-tech produce delivery start-up, she now lives among humans and attempts to understand them through their own sciences of themselves. Follow Literal Banana on Twitter at @literalbanana.


JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Literal Banana about whether or not the placebo effect really exists; misattribution, effect sizes, and the replication crisis; and what we can learn about the placebo effect from animal studies. This is Literal Banana's second time appearing as a guest on our podcast. If you didn't catch her first episode but want to hear more of what she has to say, then check out episode 106. Before we get into the conversation, here's a quick note: The first part of the conversation focuses on evidence against the placebo effect; but after that first part, Spencer plays devil's advocate, and they spend some time discussing evidence for the placebo effect. So if you want to hear a case made for both sides of the issue, make sure you stick it out past the first part of the conversation. And now here are Spencer and Literal Banana.

SPENCER: Literal Banana, welcome.

LITERAL BANANA: Thank you.

SPENCER: Is the placebo effect bullshit?

LITERAL BANANA: I really think it is.

SPENCER: Now, that's quite a shocking claim. There are very few things in science that are as widely believed as the placebo effect.

LITERAL BANANA: It was a surprising conclusion for me to come to, precisely for that reason. It's just kind of in the background as this thing that exists. We know that placebo-controlled trials are a thing, and the way that you see if a treatment is effective is you test it against a placebo arm. I think that use of placebo as a blind, as a sort of noise-mimicking device in a randomized controlled trial is very strong and powerful. That's a major innovation. But as for the placebo having some kind of healing power, I do think that is fake, and the existing science doesn't really support it. The parts of science that do seem to support it do so in a very limited way.

SPENCER: So let's clarify that, because you made a really important point there. When you're doing a study and you want to really carefully control it, you can divide people into groups. One group gets the drug, say, an antidepressant; another group gets a placebo, which might be a sugar pill. That's one use of the word placebo, referring to the placebo group, the people that get the placebo treatment; but that's not necessarily the placebo effect. So the placebo effect, do you want to define that for us? What is the placebo effect exactly?

LITERAL BANANA: It's a little bit hard to define. What I take it to mean is kind of a paradoxical thing. It's maybe an inert substance, a substance that doesn't do anything, like sugar or starch, but that still manages to produce an effect. It could also be some kind of practice that's a placebo. It doesn't necessarily have to be a pill or an injection; it could be fake acupuncture or a sham version of a surgery, but something that's not supposed to have any healing power, and yet, within the placebo effect paradigm, somehow it does have healing power. Sometimes it's supposed to be through the power of suggestion. That was the traditional view, that it was really a mind-healing effect, that the mind was convincing itself to get better. That's somewhat challenged by so-called open-label placebos, which are supposed to work just as well even if you know they're a placebo. That's kind of how I would define it. There's a cool distinction — if you want to hear a distinction — in the 50s, Sir John Gaddum was writing, and he wanted to distinguish a placebo from a dummy. He said, "Well, there's this placebo thing that might have healing effects through suggestion or something. Then there's this thing that we use in randomized controlled trials, which is the dummy. This dummy pill is just there to mimic everything that's going on with the trial, except for the actual substance of the drug. Everything's the same. The researchers can't tell the difference. The subjects maybe can't tell the difference." But dummy pills aren't particularly known for pleasing people, and the name placebo means something about pleasing; it's Latin for "I shall please." It's not clear who is supposed to be pleased by it, but he was encouraging a distinction between the placebo effect — as we're talking about it, the part that I think is bullshit — and the usefulness of the dummy, although he was still talking about potentially having psychological effects from a placebo. That's the part I argue is not real. I think placebo-controlled trials are great and should continue, and are one of the best ways to get information about treatments, but I don't have much hope for a powerful placebo effect.

SPENCER: I usually think of the placebo effect as improvements in outcomes that are caused by the belief that you're going to improve, whereas the nocebo effect, the opposite, would be a negative effect on outcomes based on the belief that you're going to get worse. So it's the belief that causes it. What do you think of that definition?

LITERAL BANANA: That's certainly part of it, and different people have proposed different aspects of the placebo as being the important thing. Is it the belief that matters? Is it the ritual that matters, swallowing the pill or something? Is it the doctor-patient relationship that matters, the sort of feeling of being cared for, which maybe goes along with being prescribed pills or injections? I don't think it's really clear, and even in the past couple of years of placebo research, it's still being debated what the allegedly effective part of the placebo effect is, and what kind of experimental design would be able to capture that, to decide if it's the ritual, the interaction, the belief, the suggestion. I think it is pretty difficult to tease out, because the main way you might tease out effects in normal treatments is with a placebo group, but you can't really do that with a placebo.

SPENCER: You'd have to vary all kinds of small details, like whether the doctor paid attention to you and whether you believed it worked, and so on.

LITERAL BANANA: Yeah, it's very subtle, and I would expect, as a cynic about the whole enterprise, that if it was done well, with everything pre-registered, the analysis plans pre-registered, everybody honest, probably the effect is very small, very close to zero, and only on self-report outcomes. But would it be because of the suggestion? I don't know. Would it be because of some kind of doctor-patient relationship? To me, the core of it is role-playing and politeness: the placebo group may report that they're doing a little bit better because they've been given this communication that, "You're given the substance and it's supposed to make you better." So I think it's only polite, playing along, to report that you got somewhat better. It does seem that subjects do that even when it's an open-label placebo. At least in some studies, not all of them, if they say, "Well, here's this open-label placebo, here's why it might work even though it's nothing," then they report, "Oh, yeah, that worked," in some studies, and it doesn't do anything in other studies. So it seems to vary a lot.

SPENCER: If the placebo effect was caused by the belief that you're going to improve, then, interestingly enough, open-label trials where they say, "I'm giving you a placebo," would work if people believe placebos worked. Because if you believe placebos work, then you're good.

LITERAL BANANA: Although I often see the claim, and I can't drill down to every study that's done this, that it doesn't seem to depend on placebo belief or expectation that much, that it might be completely independent of belief or expectation, which, now that I think of it, might be separate things you could measure. You could ask a survey question like, "How much do you believe in the placebo effect on a scale of one to ten?" And maybe a separate question that might be correlated or might not be is, "How much do you expect this to make you better?" I think some studies find that there's an effect of expectation, and some don't, and it's kind of all over the place.

SPENCER: Can you tell us the story just briefly of how you came to believe there is no placebo effect?

LITERAL BANANA: I'm very interested in what you call the replication crisis in the social sciences, and in trying to look underneath for the commonality between all these things that turned out to be fake that I used to believe, that a lot of people used to believe. One underlying factor that's probably too broad, but I think is an interesting stab, is what I call automaticity: the idea that humans are kind of automatons who can be controlled by subtle changes in the environment. Something like social priming, where changing the color that a test is printed on can change how well you do on the test or something like that. I find the idea that there are big psychological effects that are predictable, pretty much the same in everybody, and measurable, kind of questionable, and I would lump it under the automaticity banner. One of the early papers in this was called, I think, "The Unbearable Automaticity of Being," kind of a play on "The Unbearable Lightness of Being." To me, it seems a little bit woo that people are controlled by all these subtle factors in our environment, that if we see a color or if we hear a word that reminds us of being old, we'll walk a lot slower or something like that. I think the placebo effect connects to that; to me, it seems like a sort of, I don't know, woo psychological effect that seems plausible because there's a long history of science behind it, but maybe it is about as real as the priming science.

SPENCER: Social priming, which is different from, say, word priming.

LITERAL BANANA: Exactly, exactly. Yeah.

SPENCER: Where social priming involves things like getting someone to hold a warm coffee cup and then they act more warmly, or getting them to do a puzzle involving words about old age, and then they walk more slowly. That is probably the most dramatic failure of the replication crisis, where virtually none of those studies have replicated. I don't even know if a single one of them has replicated, as far as I'm aware.

LITERAL BANANA: I found a single one.

SPENCER: Oh, you did? You found a single one. What was it?

LITERAL BANANA: I don't remember exactly, but I remember reading the claim that no priming study has been replicated by a different lab, independently replicated by a different lab. So that's the challenge; if the same lab replicates its own effect, do we really trust that? But I did find one. I still don't think it's real, but I heard the claim, and I was like, I wonder if that's true. At least as of, I think, a year and a half ago, it did seem to be true, except for this one study. I think they have pretty poor replication records, other than the same lab repeating its own results.

SPENCER: And what's the one?

LITERAL BANANA: The title is "Thinking About Neither Death Nor Poverty Affects Delay Discounting, but Episodic Foresight Does." It's a very subtle and complex replication of one out of three things that they set out to replicate, and I'm not super familiar with that body of priming research. I think it might have to do with mortality salience, which is part of terror management theory, which has had a terrible time in the replication crisis. But it looks like these guys in 2022 got an independent replication of something.

SPENCER: All right, so something that seems to replicate?

LITERAL BANANA: Yeah, I think it rounds to zero. So I started coming up with different theories, because a major focus of research for me has always been depression treatments: pill-type therapies, drug therapies, psychotherapies like talk therapies, and studies on exercise and stuff like that. The placebo groups in antidepressant trials tend to do very well. They tend to have big pre-post differences; at the beginning, they're doing very badly, and then later on, they're doing much better generally. It's very similar between the placebo group and the treatment group, but they usually find a pretty small difference between the two that is statistically significant. So it's not zero. One theory I had early on is that the waiting list was a particularly bad control, because people who are trying to qualify for a study may exaggerate their symptoms. If you tell them, "Well, you're on the waiting list, maybe we'll get you treatment at some point," the social role that you're kind of asked to play is to keep exaggerating your symptoms, to stay unwell so you qualify for the trial. Whereas if you're given a treatment, it may work or it may not, but that's kind of the signal, the communication, that that's all you're going to get. I was wondering if exaggeration had something to do with the placebo effect, especially versus a waitlist condition. Interestingly, it seemed like it was almost the opposite, because, at least in antidepressant trials (this isn't necessarily true in other medication trials), both drug and placebo effects are larger when the doctors are doing the rating, basically when the researchers are doing the rating. If there's exaggeration, it's probably more the researchers exaggerating the pre-post difference than the individual patients doing that. I think that could vary even between different trials of the same drug, but that was one of my early theories that ended up not working out, and I found something more interesting.

SPENCER: So then, how did that take you to the skepticism you're at now?

LITERAL BANANA: One thing was just wondering where the current scientific consensus was, and doing some research, I got hold of the work of — let me see, let me say their names — Hróbjartsson and Gøtzsche.

SPENCER: I'm really glad you pronounced that, because you just saved me from having to try. So thank you. You took one for the team.

LITERAL BANANA: Hróbjartsson — I'm sorry to anyone from any Nordic country whatsoever — but they were doing placebo skepticism, and it's really rare to find placebo skeptics. I felt I had this kinship with the few people doing any kind of placebo skepticism, and they published some pretty shocking results, basically showing how small the placebo effect is when you compare placebo groups to no-treatment groups. There's a 2004 one and a 2010 one, and I think there's another one at some point, but they all kind of find the same thing. They looked at clinical trials where they have a treatment arm, where they're trying some treatment, whether it's a drug or some other kind of treatment; a placebo arm; and then also a no-treatment arm. So that's a pretty fancy three-arm trial. That's not every trial. Often a trial will just be two arms, or it's just an open-label trial and they have only one arm. But this was gathering all the treatment trials they could find that had three arms, so that they could compare a placebo to nothing. What they found was that the placebo-versus-nothing difference was really small. In terms of pain scale, the most recent meta-analysis of pain treatment trials found the placebo effect is three points on a 100-point scale better than nothing, I believe. One of them found a six-point difference. That's basically like, if your pain is on a one to 10 scale, a 0.3 or 0.6 point difference. Maybe that would matter, maybe that would potentially make you feel better, but it doesn't actually seem to be measurable. It seems like the minimum difference in a lot of studies between "a little bit better" and "not any better" is at least close to one point on a 10-point scale, so 10 points on a 100-point scale. So it seemed like, on pain, it was too small to be much of anything. They found that on objective outcomes, there was just nothing. Anything with a lab test, it was just nothing. I believe one of the meta-analyses was significant, just barely, but in the wrong direction. So placebo actually did worse than no treatment on laboratory measures. I believe that's just noise, just how things work out, because there's not a lot of trials. But that was the first really strong evidence I saw that maybe it's just this really small effect that only has something to do with surveys.

SPENCER: To make sure everyone understands, we're talking about trials where you've got some people getting a placebo treatment, like a sugar pill, and some people being told they're getting no treatment, "Just hang around. Don't do anything different than normal, and we're going to measure your symptoms." So they're measuring symptoms in both of those two groups and comparing them. I want to explain for a moment why that design is so important. Imagine you're running a regular trial where you're giving some people a real treatment, so there's not a placebo. You're giving real treatment, like an antidepressant, and you also have a placebo group that's getting a sugar pill. Imagine the sugar pill group shows big improvements. A lot of people want to jump to the conclusion, say, "Oh, look, the placebo is so powerful. This is great." But the problem is that that thinking actually doesn't work, because there are a bunch of effects that can cause the placebo group to improve that are not the placebo effect. For example, many disorders tend to improve on their own. If someone has a cold, they're very likely going to feel better in a week or two. If they have depression, many people will recover naturally from depression over time. So if you give a bunch of people a placebo treatment, there's a good chance they're going to improve anyway, and so you can't give the placebo effect the credit for that improvement.

LITERAL BANANA: Exactly. And that's the point of the placebo blind: to see how much of this is from the treatment versus how much is just natural improvement over time and other things. People get better over time. It's episodic, as you said. Also, sometimes researchers might be slightly exaggerating the scores at the outset, maybe just to qualify more people for the trial, so the scores will appear to go down over time even if they don't actually go down. There are a lot of things that are not caught in just a pre-post analysis, like before minus after, but that are caught in comparing a treatment to a placebo, or, to a lesser extent but still a little bit, in comparing a placebo to nothing.

SPENCER: It's sort of the same logic. The reason we need a placebo in a study, to compare an intervention against, is to make sure that the intervention is really doing what it says and that the result is not due to other variables. Similarly, if we're trying to measure the effect of the placebo, we have to compare it against something. We can't just compare it against itself. We need a no-treatment condition as a kind of, quote, unquote, placebo for the placebo, if you will.

LITERAL BANANA: That comparison can never truly be blind. We don't really have a way to blind that; the way we blind things is with a placebo. So what's left is, I think, a necessary difference between a placebo and a no-treatment condition, which is that people may be a little more polite when they're given a placebo. They may want to report, "Oh, yeah, I got pills. I guess I got a little better." Not much, still a pretty small amount, but it's not implausible that it would be a little bit.

SPENCER: I think the key thing there is that there are a lot of reasons that people who get a placebo treatment might improve. We talked about natural improvement, but there are other reasons too, like regression to the mean. Maybe you're more likely to enroll in a study when you're doing a little worse than average, but then you regress back to your average and feel better. You can also get cutoff effects: if you only enroll people who score above 10 points on a depression scale, then anyone who's having a particularly bad day will be more likely to be accepted into the study, because they're just above the cutoff, while someone having a particularly good day will be just below it, and then they'll regress to the mean. For a lot of these reasons, someone might improve in the placebo group, and it's not the placebo effect. One more reason I want to mention that I think is really interesting is what's known as the Hawthorne effect. When people know they're being studied, they tend to behave differently. They're tracking all these variables about themselves. They're thinking about what they're doing. Imagine you knew that a bunch of doctors were going to know everything you eat. Are you really going to eat the whole pint of ice cream? Or are you going to think, maybe not? There are some situations where I'm almost certain that something like a placebo effect exists. One of those is with bodywork or reiki, where someone perceives themselves as using chi or channeling energy to change something about a person or to heal someone. If you talk to people after going through those sessions, some people don't feel anything, but some people feel a really intense experience having gone through it. While I don't believe that they're using chi or channeling energy, I do believe the self-reports that people felt something intense, felt like something really remarkable happened during the session, which is a psychological experience. In other words, I believe that intense psychological experience is generated by the body worker doing what they're doing. Maybe it's caused by more than just the belief in the bodywork. Maybe there are a lot of other elements to what the body worker is doing that help create that psychological effect. Another case where I think that something similar to the placebo effect occurs, and perhaps it is truly a placebo effect, depending on how you define it, is with panic attacks. It's known that people will be physiologically elevated for some reason, their heart might be racing, their lungs might be working harder, and then they'll notice sensations. For example, they might notice their heart racing, and they might wonder why it's racing, and that fills them with anxiety, which then makes them think that something is wrong with them. That belief that something's wrong with them makes them anxious, which makes their heart rate increase even more, which makes them convinced that something's even more wrong with them, and it can spiral completely out of control, elevating them to an incredibly high level of anxiety. I think that effect is real, just as I think the body worker effect is real, where bodywork or reiki can have intense psychological effects on people, where they really feel something. But is it a placebo? I don't know. I think it depends exactly on how you define placebo.
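[Editor's note: the regression-to-the-mean and cutoff effects Spencer describes can be demonstrated with a toy simulation. Here is a minimal sketch in Python; every number (severity scale, cutoff, noise level) is hypothetical, chosen only to illustrate the mechanism.]

```python
import random

random.seed(0)
CUTOFF = 10  # hypothetical enrollment cutoff on a symptom scale

pairs = []
for _ in range(10_000):
    true_severity = random.gauss(10, 2)                # stable underlying level
    screening = true_severity + random.gauss(0, 2)     # noisy day-of-enrollment score
    if screening > CUTOFF:                             # enroll only above the cutoff
        followup = true_severity + random.gauss(0, 2)  # later score, no treatment given
        pairs.append((screening, followup))

mean_screen = sum(s for s, _ in pairs) / len(pairs)
mean_follow = sum(f for _, f in pairs) / len(pairs)
print(f"at enrollment: {mean_screen:.2f}, at follow-up: {mean_follow:.2f}")
# Follow-up scores are lower on average even though nobody was treated:
# people enrolled on an unusually bad day drift back toward their own mean.
```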

LITERAL BANANA: I think it's especially interesting with open-label placebos. One of the meta-analyses on open-label placebos, I think from 2023 or maybe 2022, had a section in the discussion where they talked about how the participants were, like, super excited; they described the project as crazy, and they were just really interested in participating. If you get a group like that who's just really gung ho, they might be really keen to tell you that the placebo did great, because they think the study's so cool. Which is interesting, but I don't know if you're really measuring the placebo effect in that case.

SPENCER: Yeah. And so that kind of points to limitations of this design, where you pit a placebo group against a treatment group, like this meta-analysis that you're referencing. There are two limitations that I see. One is that people might do socially desirable responding, where they optimistically respond saying they feel better than they do, and that's not really a placebo effect, because just the way they're reporting, they're not actually feeling better. Maybe they're trying to please the doctors or make the researchers happy, or whatever. The other is that I worry that if you believe you have a treatment, it might change the way you interpret your symptoms; you might be more likely to notice improvements than you would otherwise, and that might differ between the treatment group and the placebo group, creating a kind of bias there.

LITERAL BANANA: Although noticing, I mean, that's at least a good thing, if you notice better things, that's mindfulness or whatever.

SPENCER: Yeah, it could make you happier. You might notice improvements more than you would otherwise. And maybe that's actually a positive effect.

LITERAL BANANA: That would be on the positive side of placebo for me. Yeah, this is sort of a tangent, but have you heard of the John Henry effect? It's the opposite of the Hawthorne effect.

SPENCER: No, what is that?

LITERAL BANANA: I'd never heard of this before. I read it in a paper about three days ago. It's the tendency, if a group knows they're the control group, they might work even harder just to show that they can. I've never heard of that before, but apparently it has a Wikipedia page.

SPENCER: Interesting. So they try to show that they're doing better or show that they're doing worse.

LITERAL BANANA: That they're doing better, I guess. They're "The Little Engine That Could" or something. I don't know, I'd never heard of it before.

SPENCER: You're not going to give me a treatment, but I'm going to prove to you I can get better on my own. Because I can get better on my own kind of thing.

LITERAL BANANA: Yeah, exactly. It didn't make a lot of sense to me, but apparently it's an official thing.

SPENCER: It's got a Wikipedia page.

LITERAL BANANA: Yeah, exactly. It's real.

[promo]

SPENCER: All right. So you read this meta-analysis, and the results are not very convincing that there's a strong placebo effect, but they do seem to show that there's some placebo effect. It's not saying that there's zero.

LITERAL BANANA: Yeah, exactly, in very specific conditions: exactly when it's a self-report instrument, when it's asking people, "Are you doing better?" That's a little fraught. It's not any kind of objective measurement, not a lab test, not some kind of objective test that couldn't be up to opinion. And it's only on continuous measures. Placebo effects don't show up where there's just a binary measure, like, "Did they get better or not?" So the authors kind of generously, I think, say, "Well, maybe this is because continuous scales are more sensitive," but they also say, "Well, maybe it's just that it's easy to bias, that it could be picking up a kind of false positive on a continuous scale that wouldn't show up in a binary scale." So I think, with those limitations, that it's only on self-report measures, and it's only a relatively small effect on continuous measures, there is a statistically significant placebo effect even in these three-arm trials of treatments, but it's still pretty small and fully explained, I think, by what we've been talking about: politeness, role-playing, response bias, and factors like that. There's no extra phenomenon needed to explain it.

SPENCER: So let's talk about effect size here. I looked at the latest version of the meta-analysis, because they keep updating it, and this is the 2010 one, which I think was the last one. For continuous variables, non-binary variables, my understanding is that for patient-reported outcomes, the effect size was about 0.26, and that means about a quarter of a standard deviation improvement in outcomes relative to no treatment. So it's a little hard to interpret, but it's a fairly small effect. Then for observer-reported outcomes, like a doctor reporting, it was about half of that, at 0.13 standard deviations, whereas with the binary variables, there was barely an effect at all. The pooled effect was a relative risk of 0.93. So in other words, the placebo group was about 7% less likely to report the bad outcome, let's say. But if you look at the 95% confidence interval, it's like 0.88 to 0.99, so it goes almost all the way up to one, where one would be no different from the no-treatment group. So yeah, very, very barely an effect; maybe there's a slight one there.
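[Editor's note: a quick worked example of what a relative risk of 0.93 means. The 0.93 and the 0.88-0.99 interval come from the discussion above; the 40% event rate is made up for illustration.]

```python
# Relative risk compares the rate of a bad outcome between two groups.
no_treatment_rate = 0.40   # hypothetical: 40% of the no-treatment group report the bad outcome
rr = 0.93                  # pooled estimate Spencer cites for binary outcomes

placebo_rate = rr * no_treatment_rate
print(f"placebo group rate: {placebo_rate:.1%}")  # 37.2%, versus 40.0% with no treatment

# The 95% confidence interval of 0.88 to 0.99 nearly reaches 1.0,
# and an RR of exactly 1.0 would mean no difference at all.
```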

LITERAL BANANA: One interesting criticism that I saw that came out of that, is that it's a small effect. Everybody would call it a small effect. And if you look at it on numbers on a pain scale, it looks like a kind of nothing, nothingburger. But a lot of the treatments have effects in that range. So it's like, "Yeah, it's really small, but the treatments aren't any good either," at least if you limit especially to the studies that these authors wanted to focus on. But I think that's more of a criticism of the poor efficacy of certain available treatments than it is a bolstering of the placebo effect. But I thought that was an interesting criticism.

SPENCER: I've seen an interesting discussion of this on the Astral Codex Ten blog: a lot of things that we would all say work, when you put them in these terms of standard deviations from the mean, look like pretty small effects. It's kind of baffling, because you're like, "Yeah, but I've taken that drug, and I know it works; it's just obvious. It hits you over the head." Sleeping medicines, for example. I think part of what's going on there is that sometimes it's just hard intuitively to map these Cohen's d values onto how things feel.

LITERAL BANANA: Yeah, this is why I think we should use simple effect sizes.

SPENCER: But how do you do that when you're combining lots of different trials together that are all measuring things differently?

LITERAL BANANA: Maybe it doesn't necessarily make sense to combine all those together, but I definitely think it's easier to understand a simple effect size and decide if it has meaning in your situation, whereas an average of a whole bunch of things is harder to interpret. For the open-label placebo trials, that could be something like how much disgust you report when looking at a picture of something allegedly disgusting, which maybe people don't care that much about, versus pain or nausea or symptoms like that. For subjective things like pain and nausea, you're talking about points on a scale, but that's at least something. For sleeping pills, minutes to sleep onset I think is meaningful. I think standardized effect sizes are often used for hiding things, although I do think they're important for noticing when effects are too big to be believable. It's a framework that I think has value, but I'm generally a fan of simple effect sizes for things that can be compared. And I agree, sometimes when you're trying to mix together a lot of disparate stuff, there's no simple effect size that would work, because it's all on different scales.

SPENCER: So just to clarify that a little bit for our audience, imagine that you're measuring the effect of a treatment on pain. You give people a zero to 10 pain scale: how much pain do you feel? That's a nice way to measure it, because we can say the drug increased or decreased pain by one point on the scale or two points on the scale. It's kind of intuitive. But then suppose you're trying to combine that with another trial where they didn't measure pain at all. They measured something else, like the level of cortisol you experienced due to stress. You have that measured in milligrams per whatever — I have no idea what the units of cortisol are — but it's like, "Well, what the hell? How do you combine those?" If you're trying to say placebos work, you want to combine those into something comparable so that you can talk about them together, pool them. The most common way to do this is what's called Cohen's d, or the standardized mean difference. What you do is take the average number of points of pain in the intervention group, subtract the average number of points in the placebo group, and then divide by the standard deviation, basically how variable the pain rating is. That gives you a number that's unitless. It has no units. When you've done that, you have Cohen's d, and then you can do that for each trial, and they're all unitless, so they can, in theory at least, be combined, although what that combined number means exactly is unclear.
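[Editor's note: for readers who want the arithmetic, here is a minimal sketch of the Cohen's d computation Spencer walks through, using the standard pooled-standard-deviation form. The pain ratings below are made up for illustration.]

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference: (mean_a - mean_b) / pooled standard deviation."""
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    n_a, n_b = len(group_a), len(group_b)
    # Pool the two groups' variances, weighted by their degrees of freedom
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Hypothetical 0-10 pain ratings
no_treatment = [6, 7, 5, 6, 7, 6, 7, 5]
placebo      = [5, 6, 4, 5, 7, 5, 6, 4]
print(round(cohens_d(no_treatment, placebo), 2))  # unitless, so comparable across scales
```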

LITERAL BANANA: It's interesting sometimes to tease those two components apart, the absolute difference between the groups and the standard deviation, because sometimes, when I've seen really enormous effect sizes, it's because they have an implausibly small standard deviation. You look at some instruments, and in this one study with a huge effect, the ratings are really clustered around one value, whereas in a typical study they'll be pretty widely variable. So if the standard deviation is really small, then you get a huge effect size in a standardized sense.
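[Editor's note: a small illustration of this point. The raw difference is held fixed at one point on a 0-10 scale; only the spread changes. All numbers are hypothetical.]

```python
raw_difference = 1.0  # one point on a 0-10 pain scale

for sd in (2.0, 1.0, 0.14):
    print(f"SD = {sd:>4}:  Cohen's d = {raw_difference / sd:.2f}")

# SD =  2.0:  Cohen's d = 0.50  (a conventionally "medium" effect)
# SD =  1.0:  Cohen's d = 1.00  (already unusually large)
# SD = 0.14:  Cohen's d = 7.14  (the implausible range discussed later)
```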

SPENCER: That's very interesting. When I look at trials like this, these big meta-analyses, I tend to get worried that lots of different things are being combined that really shouldn't be combined. One of my biggest concerns is that a lot of studies are just really low quality, and mixing together a bunch of low-quality stuff with high-quality stuff is actually bad. It's less likely to lead you to the truth. You're better off literally just throwing away the bad trials, the ones that are pretty poor. Don't mix them in; just get rid of them and then see what's left over. Often, I like to just look at all the remaining trials, just look at the high-quality ones. Literally, just look at them one by one. Don't even combine them, or if you combine them, combine them and also look at them one by one. I obviously didn't have time to read all these papers or anything like that, but I did go through the meta-analysis. I looked and said, "Okay, simple thing: if a trial had less than a hundred people per group, I'm just going to throw it away." Obviously, I think there are some trials that have less than a hundred per group that could still be valid, but when you're getting down to 20 people per group or 15 people, it's ridiculous; that is just garbage. There's no way you're learning very much from that. I think that's a reasonable heuristic to at least eliminate a lot of the low-quality ones. When I did this and looked at the trials for continuous variables, this is comparing the placebo against no treatment to see if the placebo beats doing literally nothing. There were nine continuous variable studies that had at least a hundred people per group, and only one of them was statistically significant. That blew my mind. One out of nine. I was like, "Oh my God, that's bad."
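[Editor's note: a sketch of the filtering heuristic Spencer describes, dropping trials with fewer than a hundred people per group and counting how many of the rest are statistically significant. The trial records here are hypothetical placeholders, not the actual meta-analysis data.]

```python
# Each record: (participants per group, p-value) -- hypothetical placeholder data
trials = [(150, 0.21), (120, 0.04), (18, 0.01), (25, 0.003), (210, 0.47)]

MIN_N_PER_GROUP = 100  # Spencer's rough cutoff for a trustworthy trial
ALPHA = 0.05

large_trials = [t for t in trials if t[0] >= MIN_N_PER_GROUP]
significant = [t for t in large_trials if t[1] < ALPHA]
print(f"{len(significant)} of {len(large_trials)} large trials significant")
# With the real data, Spencer reports 1 of 9 for continuous outcomes.
```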

LITERAL BANANA: You found better evidence than I did. I didn't do that analysis. That's pretty cool.

SPENCER: If you average those together, you find a very, very small difference in favor of the placebo, but it's very small. This suggests that a lot of what's driving that difference, of the placebo actually working, is coming from the tiny trials, really small trials. Okay, so then I did the same thing for binary outcomes. It turned out, coincidentally, there were also nine trials. Actually, none of them are in common, so it's a completely different set of nine trials, and only two of them are statistically significant out of the nine. Again, that's pretty shockingly bad. When you average them together, you also get a really small effect size. Looking at it that way made me further convinced that, "Oh man, this meta-analysis is pretty brutal on the placebo effect." It doesn't fully eliminate it, but if you believe this meta-analysis (assuming they didn't mess it up, that they really did it properly and are not manipulating us or something, which I don't have a reason to believe they are), it feels like it leads to one of three conclusions. Conclusion one is that there's no placebo effect; it's actually total nonsense. Conclusion two is that it's real, but just very small. I do think that this evidence is consistent with a small placebo effect: you sometimes find it in trials, but often you really need a big study to find it. The third possibility is that it's actually just really heterogeneous, meaning maybe the placebo effect works in specific situations for specific disorders. The reason you don't find it in many of these larger trials is that in a bunch of disorders and settings, it just doesn't actually work, and the ones that are finding it, that one out of nine or two out of nine, those are the settings where it works. That's why you can get a pretty big effect size in some cases, because it works in those settings.

LITERAL BANANA: Yeah, that's a challenge to generalizability, if it does work in just a few settings and maybe a few conditions. But that's another thing. The criticism I was talking about is that the placebo effect is as big as the treatment effect if you only look at things that they think of as amenable to the placebo effect. Maybe it works for some things. It seems, mainly, if you drill down, it works in self-report scenarios, but there might be further details. There might be situations where it could be effective, just not in most tested situations. As for the meta-analysis situation, Andrew Gelman calls it "garbage in, garbage out." If you're just including a bunch of low-quality trials, and you have enough of them, you can get a big effect size, even if you have good trials to average them with. We started from the meta-analysis of the three-arm trials, with a treatment, a placebo, and nothing. Those researchers aren't very motivated to find a placebo effect, so their placebo effect is 0.26, something like that, a small effect, probably driven by small studies and questionable research, but who knows? Not very consistent, and definitely small. But when we move to placebo-focused studies, from the people who are actually studying just placebo versus nothing in various ways, the effect shoots up to well over one, sometimes 1.7 as a meta-analytic effect, and that's just on pain. If you just look at the placebo studies, suddenly it gets enormous. That seems to be this other body of information that we have to contend with: people specifically studying the placebo and coming up with these enormous effects.

SPENCER: And so one theory is that they're just using contexts and outcomes where the placebo actually works and it's heterogeneous. The other is that there's some kind of weird researcher bias.

LITERAL BANANA: Yeah, or they're really effective at evoking it, and the people in the other studies just can't evoke it; something, who knows what, makes their research more sensitive to it. They're better at evoking it. So it's just that the three-arm trials weren't good enough at getting a placebo effect. The other theory is that it's too big to be real. It's driven by studies from one lab, maybe very few labs, who always get these huge effects, and then everybody else gets very small effects. So when you average them together, it's still a pretty big effect.

SPENCER: I think I have a guess. But what do you think is true?

LITERAL BANANA: So the meta-analysis that finds the effect size over one, they're dividing it between clinical samples and healthy samples, and between lab-induced pain, where they stick you in a machine and burn your arm or something, or put a tourniquet on your arm and make you squeeze a squeezy thing, or put your hand in freezing cold water, and also chronic pain samples. Their effect sizes are all well above something like a Cohen's d of one, and I think the highest was 1.73. A lot of the studies they're including are ridiculously enormous, like a Cohen's d over seven for one of them. One of the labs I'm most suspicious of contributed over half of the studies in one of the first placebo-focused meta-analyses, and their effects are absolutely massive. I've talked to one psychology researcher, and he said he'd never seen an effect larger than seven. I think we were on Twitter a couple months ago joking about an effect of five that we thought was hilarious. But these effects are kind of too big to be real. Four points on a 10-point pain scale? That's better than heroin.

SPENCER: Yeah. Like smallpox vaccine kind of effect size.

LITERAL BANANA: Whereas, it seems, when I find a high-quality placebo-focused trial, where they're really being careful to tell you everything they're doing: the randomization, the blind, the position the subjects were sitting in, where they stored their data. Those studies tend to find small, non-significant placebo effects. They're not finding these Cohen's d equals 7.29 or whatever.

SPENCER: So do you basically think it's just too good to be true? There's just no way that's a real effect.

LITERAL BANANA: Yeah, which doesn't mean it's fraud. I think there are a lot of ways that you could be fooling yourself as a lab. If your subjects are really placebo believers, and if you're using the students and employees at your own lab and your own university, who are maybe already invested in the placebo belief, maybe you get huge effects from that. Either people know, if they're in the no-placebo condition, to just tap out really early from the pain trial, or maybe they just hang on really tight. It's not that the big placebo effects have been shown to be fraud; it's just people fooling themselves. That's definitely something that happens in the history of science.

SPENCER: Just to remind the listener, one of the challenges here is that even these small placebo effects that we find, this is not a perfect study design, because it doesn't adjust for bias in the participants or bias in the researchers. For example, if participants just want to please the researchers, they may report feeling better when they don't actually feel better, and we can't correct for that here. So if anything, you might think that even these small findings might, on average, be an overestimate.

LITERAL BANANA: Yeah, it's quite small. I think it's very easy for me to believe that there's an effect of 0.26, and that it's made up mostly of politeness or other kinds of response bias. Sometimes it's called demand characteristics; I think that's the fancy psychology term for it. But it's just people acting differently, and a placebo is a communication. I think people are generally pretty good at communicating. Participants in an experiment may act differently because they're in an experiment. An experiment is also a communication. It's a context for people to be in. So they're figuring out how to act as a good experimental subject, and people have different ways of enacting that. I think the small measured placebo effect could be down to that. It could be worse. It could be totally made up, but I think it's plausible that it's small, real, and pretty much made up of response bias.

SPENCER: So I want to make sure that I play the role of devil's advocate here and push back for a little bit and see if I can make any headway on the argument that there is a placebo effect. I do think that the meta-analysis is fairly convincing that, on average, across all of these things, the effects seem pretty small. But here's my first argument for why there seems to be a placebo effect. I know so many people throughout my life who have taken some random supplement, some of which I think there's pretty much no way the supplement works, like homeopathic remedies, for example, where you can literally do a test of the compound and find there's zero of the compound in it; it's literally a sugar pill.

LITERAL BANANA: You would hope so, because it's like cyanide or something.

SPENCER: Yeah, exactly. Some of them literally are horrible poisons, but thankfully, they dilute them until there's none left. So that's very nice of them. People are completely convinced that these treatments work for them; they absolutely swear by them, and they'll take them for the rest of their life. If that's not placebo, what on earth is going on?

LITERAL BANANA: I have kind of a theory about that. It's somewhat half-baked, but I think it's actually probably healthy, because we don't have a lot of control over whether we feel better, especially with something episodic like depression or chronic pain. You kind of know the next bad time is coming at some point. If you get something that makes you feel like, "Oh, I happened to be taking this when I came out of the last episode, so that's what caused it. It made me get better," I think having that feeling of control is probably really healthy and adaptive, like, "Oh, I can fix it the next time it happens." I can relate to that. I feel like I've been burned a lot, having imagined something worked a lot of different times, so I find it hard to form that belief, but I can imagine that belief happening, and it's probably healthy. I think it's probably good to have hope. But I don't know if I'd call that a placebo effect. I don't know.

SPENCER: Do you think it's a form of wishful thinking or sort of trying to give yourself a sense of control, but it's not that it's actually making you feel better?

LITERAL BANANA: To be clear, I have no problem with it. I think people should do whatever they want, and if they feel like they have a treatment that helps them, then they shouldn't care what any banana thinks about it. They should just do it. But, yeah, I don't think inert things have effects on the body or whatever through the power of belief.

SPENCER: Obviously, one explanation of this, of the fact that so many people take things that don't work and believe they work, is supposedly the placebo effect. But another possibility is a misattribution thing, where so many conditions just tend to get better on their own.

LITERAL BANANA: Yeah, that's kind of what I'm getting at.

SPENCER: Oh, okay, I didn't realize that was what you were saying, because I thought you were saying they really want to believe it works, and then it gives them some psychological comfort.

LITERAL BANANA: It's kind of like, to me, the misattribution would be coming from I was trying this when I happened to come out of this bad episode. So this must be what cured me. So that's the misattribution I'm imagining happening, which I think could be healthy, could make people happier for a short period of time, at least.

SPENCER: I think they call it the "after this, therefore because of this" fallacy, or there's a Latin version of that. It's like, you took the treatment, then you felt better, so therefore you conclude the treatment caused you to feel better.

LITERAL BANANA: Post hoc ergo propter hoc, right?

SPENCER: Exactly. Thank you for the Latin. So you take the treatment, you feel better. You attribute feeling better to the treatment. And, of course, many conditions get better on their own, so if you take a treatment when you're feeling bad, there's a good chance you'll feel better afterward. But even conditions that don't get better on their own may well have a fluctuating course. So you might happen to feel better just by chance the week after you took it, and you attribute that to the treatment, and you think, "This helps me." Then maybe you get a little bit worse, and you're like, "But I don't know now. Is that because the drug didn't work, or am I just having a bad week? Maybe the drug did work, and I'd be having an even worse week without it." And then you kind of just convince yourself it works.

LITERAL BANANA: Yeah, until you get a hold of enough evidence that you feel like it doesn't work anymore, or you find something else that's better, or something like that.

SPENCER: Another argument that I have in favor of placebos working is that I really believe our mindset going into a thing can have dramatic effects on our experience. I'll just mention a couple examples of this. One is I used to practice this meditation, until my partner didn't want me to do it anymore, where I would go out in the freezing cold, like 20-degree weather, with no coat. I would try to separate the experience of extreme cold, which is painful, from suffering, by having the right kind of mindset. In a sort of meditative mindset, I would have brief periods where I was no longer suffering. I could still feel the cold; in fact, in a way, I was probably experiencing the cold even more deeply than normal, but I wasn't suffering. I wasn't labeling it as bad. I was just kind of in that experience. It's kind of remarkable, because you're in this freezing cold, and it sucks, but suddenly, "Oh wait, it doesn't suck anymore, and I'm fine." What is that, other than just a mental maneuver? If someone telling you something works could change the way you're looking at it, maybe that helps. To give another example, many people have the experience of being at the gym, and maybe their muscles are burning, but because they have such a positive attribution of, "I'm doing a good job; that means I'm working hard," they seem to not suffer in the way that they would if they experienced that burning randomly.

LITERAL BANANA: I think that's a real thing. I definitely think people have some amount of control, especially in the short term, over how they perceive a particular pain. Can I make it more intense? Can I make it less intense? I actually, as skeptical as I am, have experienced hypnosis in a crowd, and it was definitely a distinct mental state. I have no skepticism that that mental state exists for some people, and I experienced it very strongly. I don't mean to say there aren't unusual mental states, and those mental states absolutely can change how you feel about things. I don't know if you can extend that to the placebo effect as it's practiced, mostly because placebos, in my experience at least, aren't intense enough to do that. Studies trying to do things with mindset usually turn out like studies trying to do things with placebo: "Well, it had an effect on the self-reported outcome, but not so much on anything objective." It seems hard to generalize — let me put it that way — I don't think the placebo effect is a generalization of the phenomenon we're talking about, of how good it feels to be sore from doing a lot of cardio or lifting a lot of weights, and that being different from just general pain or soreness or tiredness.

SPENCER: I would actually feel really bad if listeners came away thinking, "Oh, my perception of things doesn't affect how I feel about them." I very much think it does, and I think that's a powerful thing we can wield in so far as we learn to understand how the way we look at things makes us feel differently about them. For me, I try to be mindful when I drink my first sip of iced tea in the morning, because I really enjoy iced tea, and I can double or triple the pleasure by just being in a certain mental state of mindfulness and gratitude during that sip of tea. That's like a free doubling or tripling of the pleasure. That's an amazing life hack. I'm not really doing anything other than just looking at things differently.

LITERAL BANANA: Yeah, I barely understand people who are into the "no pain, no gain" type of exercise, because for me, it's just for the physical pleasure of it. That might be a mindset thing, or it might just be that I don't work out to the point of suffering.

SPENCER: But a funny thing in my own case is that I do tons of self-experiments. I generally have one or two self-experiments going on at all times, rolling each one over into a new self-experiment. So I'm experimenting all the time, and the substantial majority of the time, my conclusion is that whatever I tried didn't work. I'd say at least 80% of the time. I started to think maybe I'm immune to the placebo effect. It's kind of nuts, because I'm constantly concluding things do nothing. I was talking to my friend about this, and she's like, "No, no, it's just because you're too skeptical. If you believed in the placebo effect, you'd be getting all these amazing gains." I was like, "Oh, man, am I missing out because I'm too skeptical?" Honestly, it crossed my mind whether it's ethical to do an episode about this, and I think it is, because I generally think that believing the truth is most important, and I want to provide evidence so people can believe as many true things as possible. But if we lived in a world where the placebo effect actually was pretty good and we convinced people it wasn't, that would actually kind of suck, right?

LITERAL BANANA: I agree. Yeah, it matters whether it's true or not. I think there should be more high-quality science to figure out whether it's true, and to throw out the studies that belong in the garbage.

SPENCER: Well, but I'm making an additional point, though, which is that...

LITERAL BANANA: Yeah, it would matter.

SPENCER: Yeah. If the placebo effect is real, and if we convince people it isn't real, that could actually harm them because they could get less benefit from it, right?

LITERAL BANANA: Yeah, if it's a function of belief and to the extent that it's real, yeah.

SPENCER: What do you think the strongest argument in favor of a placebo effect being meaningfully strong is?

LITERAL BANANA: Thinking about having migraine pain, I think it's pretty clear that from moment to moment, you can change it. You can make it less severe for a few seconds, and certainly you can lean into it and make it more severe. But I don't think that lasts very long. I don't think that's really a long-term solution, spending constant energy to reduce the pain by a slight amount. So I think the part of placebo that's the strongest argument is what we've been talking about, just that mental states are real. Mental states are kind of the most important thing that's real. And whether or not mental states have any effect on physical reality or reflect something about physical reality, they're important in and of themselves. Maybe they're the most important things, and it's hard to do science on mental states. I don't think we do good science on mental states. I think we do a lot of surveys where we miss a lot of the experience and the phenomenology of how people feel. And I would like there to be a good science of that. I don't think it's there, not that there's zero.

SPENCER: I'd love to see a study where they do something like ask people to put their hand in cold water. That's painful, obviously, with consent, et cetera. But what they would do is intervene randomly to cause different types of mental states prior to putting the hand in the cold water. You could have some that are more like placebo effects: you give someone a sugar pill and say, "This is going to reduce the pain." You could have others that are just about, "We only want you to focus on the sensation of the cold." Or others that are about getting them psyched up, like, "You're tough, you can do it." And then they could, at the end of the day, see not only how long people stay in the cold, but also how much physical pain they experience, and then ask them, "What would you have to be paid to do it again?" To see which mental states affect our perception of pain.

LITERAL BANANA: There's an interesting body of research that does this with stressing people out before they give them a pain trial, so making them do a public speaking task or making them do a whole bunch of math problems that are just slightly too difficult to do quickly. One of the findings is that people report less pain when they've been stressed out.

SPENCER: I think that's interesting. Yeah, it takes the focus off the pain.

LITERAL BANANA: There's a name for it in my article. It's a non-intuitive name for the area of research: stress-induced analgesia. That's what they call it.

SPENCER: The article that Literal Banana is referencing is her case against the placebo effect, which we'll link to in the show notes. Another thing I think about when making a case for the placebo effect is anxiety. Because while there's direct anxiety, there's also a kind of second-order meta-anxiety that a lot of people have. Imagine you're about to go on stage to do public speaking. On the one hand, you might feel anxious that you're going to screw up. On the other hand, you might worry, "Oh my god, what if I become really anxious on stage?"

LITERAL BANANA: Yeah. Self-maintaining anxiety. I've heard of that. Yeah, anxiety causes anxiety.

SPENCER: Exactly. I think a lot of people who have a lot of anxiety know their performance is affected by it. It's like, "This is a real thing. You're scared you're going to become scared, and that's self-fulfilling." So imagine someone comes up right before you go on stage and gives you a pill, and they're like, "Don't worry, this is clinically proven. This will prevent anxiety; you'll be fine." And then, yeah, it would totally give me a boost of confidence if I believed them and probably reduce the chance I feel anxious and psych myself out. So I could totally believe in that context that would work really well.

LITERAL BANANA: Yeah, possibly. There's this other example of the placebo effect that people use, which I think is interesting. Little kids might hurt themselves and freak out, because they're little kids; it might be one of the first times they've ever hurt themselves. They don't know: if you bonk your head, is that a big deal? If you fall and skin your knee, is that terrible? Has it happened to them before? So the idea is, if you put a band-aid on it or kiss the boo-boo to make it better, they calm down. What they're doing there is seeking information, and they've been given information that it's not a big deal. If they were gushing blood, they would get different information, presumably. I think anxiety might be a form of that, where you're lacking information and seeking information. If you get hold of trustworthy, convincing information that things are fine (someone you trust saying there's no reason to be anxious, everything's going to be great in some way that's believable, maybe "here's this pill that will help you"), I think that's more information-seeking than a placebo effect.

SPENCER: That seems very plausible to me. I also think that part of the purpose of sadness as an emotion, in my view, is to communicate, "I need help, I need care." Part of crying is just saying, "Hey, can you come give me care?"

LITERAL BANANA: Have you seen that Kevin Simler article?

SPENCER: I don't think so.

LITERAL BANANA: Yes. I think it's called "Tears," on Melting Asphalt. It's about crying as a uniquely human behavior that other animals don't have, that is mostly involuntary, and that is basically, literally, a cry for help: "I'm in so much trouble. This is a costly signal. I can't just fake it." He uses the phrase "friendship at a discount," as in, my friendship is on a great deal today if I'm crying and you come to my rescue. I really like that article.

SPENCER: Nice. We'll link that in the show notes.

[promo]

SPENCER: One more thing I think about, if I'm trying to make a case for the placebo effect, is the nocebo effect: flipping it around and imagining how you'd feel if you were convinced you'd just been poisoned. Let's say it's fake, it's not actually poison, but you're totally convinced you just swallowed a really dangerous poison. I think there's a very real possibility that many people in that case would start to feel sick, or start to notice some sensation in their body and think, "What is that?" Then maybe they start to feel anxious, and then they notice the anxiety and think, "What is that?"

LITERAL BANANA: Yeah, I think that happens a lot of the time with cannabis. People are having very intense sensations and not knowing how to interpret them, and that sometimes gets channeled into anxiety, or fear of having a heart attack. So definitely, and that's with an actual substance. It's a little harder to imagine with a completely inert substance, but that scenario is dealt with in fiction enough that it's believable to me that you could, as you were saying about attribution, notice things that you wouldn't have noticed: "Am I slightly nauseous? Is my head starting to hurt? Is my breathing normal?" Things that are normally unconscious that you start taking inventory of.

SPENCER: Exactly. I think the sensations in our bodies are a lot weirder than most people realize, because very few people spend the time to pay attention to them. Unless you do meditation of certain types, or yoga of certain types, or whatever, you probably haven't spent five minutes just analyzing, "What exactly does it feel like in my body?" When you do that through meditation, you realize, "Wow, it's pretty wacky." Different people experience it differently, but for me, there are all kinds of shifting sensations, and when I pay really close attention, I'll notice weird feelings in some parts of my body. So if I thought I'd been poisoned and I wasn't used to paying attention to my body, and I'm like, "Oh God, do I feel okay?" and then I noticed one of those weird sensations, I'd be like, "What is that?" You start feeling afraid, and then the fear kicks off legitimate physiological changes that you can experience. When you have a lot of fear, you might feel tension in your stomach, or you might start to feel nauseous from the fear.

LITERAL BANANA: Yeah, and one of the criticisms of one of my fellow placebo skeptics was about conflating the placebo effect with psychosomatic effects. I think it would be completely wrong to say there's no such thing as psychosomatic effects. Why would we even have emotions if they weren't going to change our bodies to better deal with the situation the emotion is for? On an evolutionary view, if emotions were inert, that wouldn't make a lot of sense to me. I don't know how better to explain it. But that doesn't mean that an inert pill or injection is the same as a situation that would give rise to a strong emotion.

SPENCER: Yeah, I think psychosomatic effects are so interesting. My view is that there are different types of them, but usually people don't think about them in a very carefully delineated way, so we end up lumping things together. I think a lot of people actually find them mysterious, like psychosomatic illness: "Are you just confused about your illness? What is happening?" But the kind of psychosomatic effects you're talking about are ones caused by emotions. I've actually experienced that. I once went through a very traumatic period of my life where I started having weird physical symptoms, and I really had no idea what was going on. I was convinced that there was something physically very wrong with me. I went to a bunch of doctors. Finally, I went to a doctor who said, "You know what? I think you're just experiencing a lot of anxiety." I was like, "No way, no way. I'm feeling physically sick, like I'm ill." Then I went to Burning Man the next week, and I felt completely cured at Burning Man. I was like, "Why did that happen?" And I started thinking, "Maybe this doctor is right." It turned out that this traumatic event had put me in such a state of elevated anxiety all the time that, I think, it was literally all those anxiety hormones, all those anxiety chemicals coursing through my body, creating physical symptoms. You can feel sick to your stomach; I was getting tingling in my fingers, which I had never experienced. It was literally just the effects of strong emotions. So that's one kind of psychosomatic effect that can be real.

LITERAL BANANA: That's not something I'm a skeptic of, to be clear.

SPENCER: Right, totally. But also, I think another example of a psychosomatic effect I've seen is where it seems like what happened is they had some kind of injury, and then they started using their body differently to avoid reinjuring it or causing pain. Then the injury heals, but the way they're using the body doesn't change, and that strange way of using the body starts causing all kinds of weird effects, like other pains or stiffness.

LITERAL BANANA: Maybe in connection with the Alexander Technique?

SPENCER: Right.

LITERAL BANANA: I think I've heard of that too: having an injury, adapting to it, the adaptation no longer being necessary, but you don't have any signal to get rid of it and go back to normal.

SPENCER: Yeah. So someone I know had really bad wrist pain when typing. My guess is this was due to a genuine injury, probably repetitive stress injury. But they had all of these things they would do, like wear wrist braces all the time and be really delicate. It went on for years, and then at some point they were just really fed up with being in pain all the time, and they were just like, "Screw it. I'm just gonna pretend my wrists are not injured. I'm gonna push through the pain," and almost totally cured their injury. I think that's a case where it's just like, at some point, something about the way you're babying it can perpetuate feelings of pain and stiffness.

LITERAL BANANA: One kind of semi-woo that I will defend: sometimes in yoga, you're supposed to visualize light coming out of your fingertips. That doesn't mean there's actually light coming out of your fingertips; it just helps you get into the right motion or position. It's a framework, a metaphor for how to be aware of your body, how to think about what's there, how to sense what's there. I don't really have a problem with that. The problem comes when people go from treating it as a useful tool for thinking about your body position or feeling your body to treating it as reality, as though there's actually light coming out of your fingertips. I have mixed feelings about that stuff, but something like that, which is made up but obviously made up and useful for something, I don't have a problem with. Taking it seriously as metaphysics, I think, is where it gets silly.

SPENCER: I may have mentioned this on the podcast before, but I'm an extremely amateur mixed martial artist; I just do it for fun. There's this funny thing that happens where, as you get better and better, you start being able to channel energy through your body more effectively to throw harder punches and things like that. You start to get almost a feeling of that energy, a proprioceptive feeling of the energy coming through the floor, up your leg, and out through your arm. I can completely see why someone would give that a name, like chi or something, and then say, "Oh, yes, you must channel the chi."

LITERAL BANANA: Or your chakras or whatever.

SPENCER: Totally. You need to pull the energy up through the ground and out and send the chi out through your arm. That sounds like total gobbledygook, but it's actually just referring to the feeling of what it's like to throw a really good punch.

LITERAL BANANA: If it makes you more effective, I think it's very useful. You can use it and have it be useful without changing your metaphysical beliefs about chi or whatever. I'm fine with things like that as tools; I don't feel any need to go "well, actually" at them. But I feel like part of the reason these things are so hard to talk about is that we don't have a good science of unusual mental states. The science of unusual mental states tends to want to go to, "What does it look like under an fMRI? What if we put a psychic on an EEG?" What I want is just descriptions: someone looking through a lot of different interviews and trying to figure out the essence of something, or some common themes, which is more like phenomenology or ethnomethodology research. It sounds more wishy-washy, but I think it's realer. I think it's losing less of the information.

SPENCER: You wrote this piece that got a lot of attention, and obviously a lot of people believe in the placebo effect; almost everyone does, in fact. I did a Twitter poll to prepare for this podcast. It's very scientific [laughs]. I was just curious: to what extent do people really believe in it? Is it as ubiquitous as I think? The way I designed the poll, I asked, "If a doctor successfully convinces a chronic pain patient that a pill, which is secretly a placebo sugar pill, will greatly reduce their pain, how much, on average, do you think the belief in the pill will cause the patient's experience of the pain to be reduced over the next month?" Only 6% of people said they didn't think there would be less pain, in other words, that it wouldn't work at all. The most common answer, chosen by 41% of people, was that they thought there'd be slightly less pain. So basically, almost everyone believes in at least a little bit of a placebo effect, but the most common answer is that it's a small effect. Given that almost everyone believes in it, I'm wondering: how did people react to the article you put out?

LITERAL BANANA: Good question. Some people just said, "That's a really good article. I still believe in the placebo effect." Some people changed their minds. I don't know how many people actually read all of my novella's worth of links. I should point out that one of the scientists on one of the fMRI papers I mentioned doesn't agree with me that the fMRI evidence is subject to response bias. The neurological pain signature is what you get if you put a lot of people in an fMRI machine and cause them pain, whether burning pain, freezing pain, or pressure pain: it's the part of the brain that lights up. Whatever is perceiving nociceptive pain shows up as this neurological pain signature. And apparently it's not activated by placebos. So there's apparently no effect of placebos on the basic perception of pain as reflected in the brain. One of the criticisms I had of the EEG stuff is that this measure, the late positive potential, specifically between one and six seconds, seems to differ between placebo and no treatment when people perceive disgust or distress from gross images. We're not allowed to see what images they're showing people, but it's snakes and spiders and stuff like that.

SPENCER: Just to clarify the study design here: they're giving some people no treatment and some people a placebo treatment, and they're comparing the fMRI, essentially the changes in blood flow in the brain while the subjects are perceiving disgusting images, looking for differences.

LITERAL BANANA: Yeah, where is it increased? Where is it decreased?

SPENCER: Yeah, right.

LITERAL BANANA: So there's no signal on the neurological pain signature, the nociceptive perception. There is an EEG signature that shows a placebo effect on this measure of disgust or distress, which, from other studies, seems to be completely under voluntary control. It seems like people can just decide to change it, kind of like you can decide to talk or decide to move your arm. Then there's this other brain measure called the stimulus intensity independent pain signature, the SIIPS, which seems to me (I'm obviously not a brain scientist; I'm a banana) to sit at a higher level than nociceptive pain, than the NPS, the neurological pain signature. It's the part of the pain response that's not related to how bad the stimulus is, not related to nociceptive pain. To me, that seems higher-level, like cognitive or affective, the kind of thing that maybe you could just decide to perceive differently, maybe to be polite after having been given a placebo. So I'm not sure that kind of brain imaging proves it's not response bias; I think it may be completely consistent with response bias. But one of the authors of that paper thought I was misinterpreting it, and that it actually is evidence that the effect is independent of response bias. So that's one response I got.

SPENCER: But you still held your ground on that. You think they're wrong.

LITERAL BANANA: Yeah, I think so. I don't know much about that particular brain network, but from relatively small studies, it seems to be something at a higher level, maybe more voluntary, than the neurological pain signature, certainly. So I think it's still consistent with my very extreme theory that it's all just response bias. But I wanted to put that out there.

SPENCER: Any other critiques you received that were noteworthy?

LITERAL BANANA: The process of finding all the critiques was arguing about it on Twitter for a month. I haven't been paying as much attention to it since, but at the time I was paying a lot of attention to posts insisting that the placebo effect is real: "What about this?" A lot of people would come back with really good arguments: "What about animal studies? What about brain imaging studies? What about the endogenous opioid system?" That's how I figured out I needed to look into those areas, to see what's going on in those parts of the science. I feel like I took the critiques that came at me and folded them into the article. I haven't had to update much since then, or I haven't tried to.

SPENCER: That's an interesting point about the animal studies, because what's really nice there is that the results can't be response bias, right? The animals don't know about placebos or anything like that.

LITERAL BANANA: Yeah, it would be a great demonstration.

SPENCER: And so what has been found in animal studies?

LITERAL BANANA: Obviously, you can't ask an animal what kind of pain it's in. They can't report on a survey, right? Unless they're Mister Ed, tapping a hoof.

SPENCER: They also can't really believe that a treatment is going to work as far as we know.

LITERAL BANANA: Exactly. They can't receive suggestions. So how do you do it? You do it with conditioning. I was very happy to find this great meta-analysis and replication attempt from a master's student whose last name is Swanton, who tried to run one of these conditioning studies on rats while evaluating all the other research that's been done on rats. One of the big problems is: how do you measure their pain? I did find one modern, open-science paper, a multi-center study evaluating one measure of pain, which was how much the rats burrow. Apparently, if rats are in pain, they burrow less, and you can measure it in grams of displaced matter, the little shavings they're housed in. They did find an effect for that pain measure, but they mentioned, interestingly, that they had a really hard time maintaining the blind, because the chemical they used to induce pain, injected into the little rat paws, was yellow and kind of viscous, so they could just tell which were the experimental rats and which weren't. I think only two of the labs said they could maintain their blind, so that's one problem. A lot of the other pain measures have a lot less evidence behind them; it's not clear that a rat or mouse doing some behavior is necessarily in less pain. The other thing is that they seem to have a difficult time replicating the conditioning effect. Most of the conditioning studies use morphine: they expose the rat to a painful stimulus after administering morphine, the rat gets used to that, and then later, when it's not on morphine, they expose it to the same stimulus with a saline injection. Allegedly, the rats sometimes do less of the pain behaviors, like hind-paw withdrawal, supposedly.

SPENCER: So the concept is that the rat comes to associate the stimulus with morphine. They get the stimulus, get morphine; get the stimulus, get morphine. Then, for the placebo round, they get the stimulus but no morphine. The rat's brain somehow predicts that it's going to get morphine, so effectively the saline acts like a placebo.

LITERAL BANANA: That's the theory, at least. Apparently it doesn't always work out that way. Especially in the early trials, the rats would just experience withdrawal from the morphine, so their pain would get worse, and the results would go in the opposite direction. There's an alternative way of doing it, which is conditioning the rats to a particular housing unit with highly visible graphics, potentially with a smell or flavor (they might put vanilla in the water), and with sounds (they might play their favorite music or whatever). The rats learn to associate this particular housing unit with low pain; maybe the pain apparatus, a hot plate or something, is set on low, so they get used to, "Okay, in this area, I'm not in pain." Then on experiment day, they turn up the heat or whatever, and the rats are supposed to show fewer pain behaviors, because they're used to experiencing low pain in that environment.

SPENCER: So they kind of expect low pain in that environment. I always feel bad for these animals. It's like: you'd better do some really good research so that we actually learn something, because otherwise you're just torturing animals. You'd better get something valuable out of this.

LITERAL BANANA: But apparently it's really hard to replicate. This master's student that I was following (her thesis was my favorite novel) ran three trials, adding more and more to the setup each time to try to establish a conditioned placebo effect, and could not replicate it, even though she had one of the largest numbers of animals of any study in her meta-analysis. She also rated all of the previous studies on the placebo effect in rats. Of the studies she rated as higher quality, there were only a couple, and neither one got a significant placebo effect. She did get a meta-analytic effect that was larger than zero and statistically significant, but it seemed to be driven by the smaller, poorer-quality studies; the good studies, the ones that did adequate blinding and reported everything that was going on, were much less likely to get any effect. So to me, the upshot seems to be that it's not clear there's a replicable placebo effect in animals. The other thing is, I'm not sure that's what we mean by the placebo effect, because I like to describe the conditioning thing as gaslighting. You can do it with people too.

SPENCER: So you're saying they figured out how to gaslight animals?

LITERAL BANANA: Yeah, except that it doesn't really work that well.
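[Editor's note: A toy illustration of the meta-analytic pattern Literal Banana describes above, where a pooled effect is positive and significant overall but vanishes when only the higher-quality, better-blinded studies are kept. The effect sizes and standard errors below are invented, and this is a generic fixed-effect inverse-variance pooling, not a reconstruction of Swanton's actual analysis.]

```python
# Invented studies: (effect size d, standard error, high-quality flag).
import math

studies = [
    (0.9, 0.40, False), (0.7, 0.35, False), (1.1, 0.45, False),
    (0.8, 0.50, False), (0.6, 0.30, False),      # small, poorly blinded
    (0.05, 0.15, True), (0.10, 0.18, True),      # larger, well blinded
]

def pooled(subset):
    """Fixed-effect inverse-variance pooled estimate and its standard error."""
    weights = [1.0 / se ** 2 for _, se, _ in subset]
    est = sum(w * d for w, (d, _, _) in zip(weights, subset)) / sum(weights)
    return est, math.sqrt(1.0 / sum(weights))

for label, subset in [("all studies", studies),
                      ("high-quality only", [s for s in studies if s[2]])]:
    est, se = pooled(subset)
    lo, hi = est - 1.96 * se, est + 1.96 * se
    print(f"{label:>18}: d = {est:+.2f}  (95% CI {lo:+.2f} to {hi:+.2f})")
```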

SPENCER: Yeah, now I'm fascinated. If that worked in animals, it would suggest a new procedure for inducing placebo effects in humans, one that might actually work differently, or better or worse, than other placebos.

LITERAL BANANA: Here's the way they do it in humans. Say you're in a conditioning procedure where you're getting electric shocks, and the green light is your placebo: if the green light is on, you're supposed to experience less pain. What they do is train you, telling you the shock will always be objectively the same, but when the green light's on, they actually give you a lower-level shock, and when the green light's off, they give you a bad shock. They ask you to rate your pain throughout, and of course you rate the green-light trials lower, because the shock really is lower. After you've been trained, maybe over a few sessions, maybe even over different days, they give you a test. In the test, they change it up: the shock is the same whether the green light is on or not. People usually make a mistake early on. If the green light's on, they report that their pain is lower, until they very quickly figure out that something's different, that the light no longer has any meaning. I'm not sure that's what we mean when we say there's a placebo effect, that people can be tricked for a short period of time into guessing wrong. That's what conditioning looks like to me.

SPENCER: That's really interesting. It's almost like, in the first trials of the real test, when they're trying to indicate how much pain they're in, partly they're reading the pain off their own internal sensations, and partly they're thinking, "Oh, the green light's on; it's probably not that bad." They've literally learned to use the light as a predictive signal, in addition to reading pain off their body. But then eventually they go, "Wait a minute, those signals aren't matching," and they stop.
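[Editor's note: A toy model of the dynamic Spencer just described: the rating blends the felt shock with a cue-based expectation, and the expectation gets revised as predictions stop matching reality. The blend weight and learning rate are invented for illustration. In this sketch, the first test trials with the green light get rated lower than identical no-light trials, and the gap closes within a few trials.]

```python
def run_trials(trials):
    """Rate each shock as a blend of felt intensity and cue-based expectation."""
    expectation = {"green": None, "off": None}
    ratings = []
    for cue, shock in trials:
        exp = expectation[cue]
        # Rating = mostly felt intensity, partly what the cue predicts.
        rating = shock if exp is None else 0.7 * shock + 0.3 * exp
        ratings.append((cue, shock, round(rating, 2)))
        # Simple delta-rule update toward what was actually felt.
        expectation[cue] = shock if exp is None else exp + 0.5 * (shock - exp)
    return ratings

training = [("green", 3), ("off", 7)] * 3   # green light predicts weaker shock
test = [("green", 7), ("off", 7)] * 3       # test phase: shocks are identical
for cue, shock, rating in run_trials(training + test):
    print(f"cue={cue:5}  shock={shock}  rated={rating}")
```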

LITERAL BANANA: Exactly. It kind of reminds me of this famous, shockingly replicated, sort of notorious food study, where they had people eat soup out of bowls that, unbeknownst to them, would automatically refill themselves from a pot, and people would eat more soup if their bowl never emptied. I think that's again partially because of politeness: you don't necessarily want to leave a full bowl. But they're using cues outside their body as well as their internal feelings of satiety and fullness. So that's something it reminds me of.

SPENCER: No, that's a good point, because they're thinking, "Well, how much have I eaten? Let me look at the bowl. Huh, I haven't eaten that much. Okay, I guess I should eat more so I'm not hungry later." That's really funny. I can just imagine playfully giving someone an infinite soup bowl, just to see.

LITERAL BANANA: And they just keep eating. They did find higher levels of nausea in the infinite-soup-bowl people, so it at least had some effects.

SPENCER: Literal Banana, this has been a fascinating conversation. To wrap up, I can just give my final takeaways, and then I'll give you the last word to share your final thoughts. Does that sound good?

LITERAL BANANA: Please.

SPENCER: My thinking on all of this is that there might be a small placebo effect, mainly on subjective outcomes where we're self-rating things on continuous scales, like how much pain we feel on a scale from zero to ten. It could vary for different outcomes and contexts, but I think it's probably really, really small on average. I wouldn't be shocked if it was literally zero, though I would be a bit surprised. I also think, and I want to emphasize this again, that the way you perceive things and your mindset matter tremendously. They can make the difference between enjoying food and really enjoying food, or finding something painful and not finding it painful at all, or at least not suffering from the pain. I think our noticing of internal experiences, the way we process them, and the context we give them really changes our overall experience of things. But that's different from saying, "When you're given a sugar pill, you're going to feel better." Okay, so those are my overall thoughts.

LITERAL BANANA: Yeah, I pretty much agree with that; I don't really disagree. I do think the small effect is more about politeness than about mindfulness of body states. What I'm mostly concerned with is the studies that get absolutely massive placebo effects, with everyone treating them as if they're a normal part of the scientific record. A small effect on self-report measures is pretty much what I think there is, and I think it has more to do with role-playing, demand effects, and polite communication than with actual healing or actually feeling better. But it's very difficult to drill down on that. It would be interesting to study, to try to figure out, and people have come up with creative ways of doing that, but they're almost too creative. It's difficult to figure out how to distinguish those explanations, and I don't think the evidence offered so far to distinguish them is very strong.

SPENCER: If you had to make a bet: is there literally zero placebo effect, and it's all just social desirability responding and things like that, or is there some small effect?

LITERAL BANANA: I would say zero healing effect, and the rest is social desirability or demand-characteristics responding.

SPENCER: See, I think that's where we differ. I think there's probably some effect on the way you perceive things; it makes the experience more pleasant. You literally experience it more pleasantly because you think the thing will work. As in the example I gave before, I would especially expect to see something like that with anxiety, and maybe pain as well, whereas for some other things, I wouldn't expect the effect to exist. Literal Banana, thank you so much for coming on. Thanks for this fascinating discussion.

LITERAL BANANA: Thank you for having me. Thank you so much.

[outro]

JOSH: A listener asks, "What evidence or other information or argument would update you towards moral realism being true?"

SPENCER: That's a very interesting and tricky question. One thing that comes to mind is if we had a strong evolutionary argument for why humans would evolve not just to have some kind of moral sense, but to have an objective moral sense that matches some true morality that would be independent of all species and all cultures. That would, I think, strengthen the argument in favor of moral realism. Whereas if our moral sense just evolved because of contingencies of our particular environment, then that seems to weaken it. Another kind of argument that could potentially be persuasive, although I think it's very unlikely to be possible, is if someone could truly make a logical a priori argument for what's moral and what's immoral that doesn't rely on any premises we don't know to be true. Again, I don't think that's possible to do; but if it were, that would be potentially quite persuasive.

JOSH: Even though we don't know what may cause pleasure or pain to other organisms, is there some universal truth to the claim that suffering is bad?

SPENCER: Well, certainly you can have direct introspective access, notice your own suffering, and say, "That suffering is bad to me." Then you can generalize and say, "Well, there's nothing special about me, though. What is morality about, if not something like that? If my suffering is bad to me, I can make a reasonable inference that each person's, each agent's, suffering is bad to them, so why not just say suffering is bad universally?" There's a lot of appeal to that argument, but I don't think it quite holds; there's a little bit of a leap there. When we go from "it's bad to me," and "I can infer it's bad to each individual," to "therefore it's just bad, full stop," what is that gaining? What is the actual claim being made, if the claim is not "it's bad to each individual"? So I think the logic doesn't quite work, although a lot of people want it to. That being said, I think most people do have an intrinsic value of reducing suffering, especially for themselves and especially for their loved ones. And a great many people value decreasing suffering for the world broadly; they don't just care about their own suffering, they would be happy to have suffering go down for everyone. And so I think that's a really good argument.
