August 11, 2022
How can we form good habits more effectively? What roles do reward and punishment play in the habit formation process? And what roles should they play? How should we structure our daily schedules around new habits to maximize the likelihood that they'll stick? If our goal is to do 100 push-ups a day, it's often easier to start with 10 and increase the difficulty over time; but at what level of difficulty should we start, and how quickly should we approach the target difficulty? How does willpower connect (or not) with habit formation? Why should we care about animal consciousness? When it comes to estimating how much good specific interventions will do, are bad estimates better than no estimates at all?
Dr. Jim Davies is a professor of cognitive science at Carleton University. He is the author of Imagination: The Science of Your Mind's Greatest Power; Riveted: The Science of Why Jokes Make Us Laugh, Movies Make Us Cry, and Religion Makes Us Feel One with the Universe; and Being the Person Your Dog Thinks You Are: The Science of a Better You. He co-hosts (with Dr. Kim Hellemans) the award-winning podcast Minding the Brain. Learn more about him at jimdavies.org or follow him on Twitter or Facebook.
JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Jim Davies about the psychology of habits, animal welfare and animal consciousness, and comparing charity evaluators.
SPENCER: Jim, welcome.
JIM: Hi there.
SPENCER: Great to have you on.
JIM: I like your podcast, I'm happy to be a part of it.
SPENCER: One topic that we can all relate to is the challenge of forming new habits. There's so many things that we want to do, like eat healthy food, exercise regularly, and so on. And yet, many people struggle to do this, so let's start by talking about how people form habits more effectively.
JIM: Sure, sure. And I'm glad we're talking about habits because if we're talking about self-improvement in general, I think people actually underestimate how important it is to curate your habits. Sometimes people just think that if they focus on it enough, they can change or something like that. But when your mind is otherwise occupied, you will rely on your habits. So, it's really an important part of being the kind of person you want to be to try to get your habits in place. And doing that is helped by understanding how habits work. Habits get triggered in five different ways — and I made a cute little acronym, HABIT, for that [laughs] — the first is the ‘H', humans you're around. That's the company you keep. Sometimes if people are doing too many drugs or whatever, they'll be advised to not hang out with their drug buddies, right? Because the habit can be triggered by that kind of thing. So when you're trying to get rid of a bad habit or put in a new one, it's important that you're very aware of what these triggers are. So yeah, ‘H' is humans you're around. ‘A' is activity you are engaged in, so if you have one bite of food, that triggers the habit of eating the next bite of food (for example). ‘B' is for bearings, like where you are. So being in a certain location, like getting into bed, triggers sleep, that kind of thing. ‘I' is internal state, so being hungry might trigger eating, right? And ‘T' is time of day, like people will go to the bathroom, take a shower, have coffee or do many things triggered by a certain time of day. And all habits, things that you'll do on autopilot, are triggered by one of these five things.
SPENCER: Right, so stepping back for a moment. The way that I think about this is that a habit, formally, is something that is cued automatically. In other words, you're going about your day, there's some kind of stimulus, and then we have an automatic response, which is the habit. And so, I think what you're categorizing here are the different sorts of stimuli. Is that correct?
JIM: Yeah, that's right. Habits are the things that you will do if you don't override them, like your conscious deliberation or something. And so, even if your mind is thinking about something completely different, if one of the triggers for habit shows up, your body and mind will just engage in that habit without even being able to decide to do it, for example.
SPENCER: One thing I find kind of confusing about this question of habits is that it seems to lump together two things that are sort of considered separate. One is these automatic behaviors that we do thoughtlessly. For example, if every morning as soon as you wake up you immediately walk to the bathroom, you might develop a habit of going to the bathroom first thing when you wake up, right? That's technically a habit. But then there's this other thing, which you might call a routine, which is like, “Oh, every evening, I go to the gym five days a week.” And the reason I say it's not necessarily technically a habit is that it may not really be cued subconsciously. It might always involve some kind of active participation, where you have to think about the fact that you're going to the gym and put your gym clothes on and so on. So, it may not be technically a habit, but usually people put it in the broad category of habits.
JIM: I would say that it's also a habit. It's a more complex one. A habit doesn't have to be unconscious; it's just that it can be unconscious. So, something more complex like going to the gym or going to work is actually a longer series of habits. [laughs] So, they're chained habits. If you go and work out with your buddy — say three times a week at night — after a while, that's a habit. One way you'll know that it's a habit is that you don't actually decide to do it. When you make decisions, you're weighing options: “Should I do it? Or should I not do it? Or what should I do tonight?” If you find yourself not thinking that at all, and instead just finding yourself getting your clothes, putting them in a bag, and going out, whatever, then I would say that a big complex routine is actually a chain of a bunch of habits.
SPENCER: That's the way I look at it. I tend to think of it as a habit that is subconsciously triggered. And then there's this other thing which you may call routine or default, which is the thing that you will do unless there's something else that intercedes, like every Sunday morning my natural thing to do is walk into the kitchen and make myself a cup of tea. But it's not so much because it is triggered subconsciously, and I do it without thinking about it. But it's because that's sort of what I want when I wake up. And unless something interferes with that, that's what I'm going to do naturally. But anyway, I think it's a little bit of a semantic question of how we want to divvy it up.
JIM: I don't break it up that way, and I could talk to you about why. But ultimately, I don't think it matters much. Because when we're talking about changing and curating our habits, you would do the same actions for both of those things [chuckles]. The way we interface with those things is the same. It's just a different way of looking at it.
SPENCER: Got it, cool. All right. So now we have this categorization of triggers for habits, H-A-B-I-T, so what can we do with that?
JIM: Well, one thing is, people always want to know how to get rid of bad habits. And one of the interesting things that psychology has found is that you can't really get rid of bad habits. They're kind of always lurking and ready to be triggered. What researchers have found is that it's better to replace them. So what you do is: you have a bad habit, you identify the trigger, and then you create a new habit that responds to the same trigger and competes with the old habit. The example I like to use is the donut. Let's say that you have a donut every day at two o'clock (this is something I used to do). And you might think to yourself, “Why am I having a donut at two o'clock?” And it might be that you're triggered by the time of day. In which case, what you might do is say, “Okay, if it's two o'clock, I'm going to take a walk.” So you have to use your conscious thought and willpower to not engage in the old habit, to engage in the new habit, and to try to do the same thing every time. Not “I'm going to do something different every day at two o'clock,” but “I'm going to do this specific thing.” And then over time, that new habit becomes stronger than the old habit. That's what we would call breaking a habit, and it's the most effective way to break an old one.
SPENCER: It sounds like what happens is, over time, we start to associate some trigger with a response that we think is harmful, like eating a donut at 2pm or doing drugs whenever certain friends are around or whatever. And then you're saying basically, what you want to do is essentially create a new response to the trigger: A implies C instead of A implies B, so that when that situation occurs, you're doing something else instead.
JIM: Exactly. And it can be more complex, too. With the donut thing, people eat donuts for different reasons: maybe they're hungry, maybe they're bored, or maybe they have low energy. So a more effective way to combat the bad habit is to replace it with something that tries to satisfy the same need. If you're hungry, then maybe eat something healthy at two o'clock; that's your new habit. Or if you're tired, maybe take a brisk walk at two o'clock, or have a cup of black coffee, something more helpful. So you make a game out of it, right? [laughs] Think about, “What are my bad habits? What are they triggered by? What am I actually trying to get out of them?” And then try to replace each one with a better habit that is more in line with your values.
SPENCER: Sometimes people talk about rewards as being intricately linked with a habit. What's your view on that?
JIM: They're important for creating habits. But once the habit is established, people are pretty immune to reward and punishment. They will just engage in the habit. The reward-system part of the brain is separate from the habit system, and while the reward system can overcome the habit system, the habit system doesn't need the reward system to do what it does. That's why people will sometimes keep eating long after they've stopped enjoying it, right? [laughs] Sometimes people will eat when they're no longer hungry and not even enjoying it anymore; they just can't stop. They're sort of compulsively eating or something like that. Driving home is another good example. Say one day, instead of driving home from work, you need to go to the pharmacy and pick something up. The rewarded action would be going to the pharmacy and picking something up. But if your mind is thinking about something else, your body will find itself driving home, in spite of being punished for doing so, because now you have to make another trip to the pharmacy. So there's not a lot of reward processing in the moment when a habit is triggered. But of course, reward is very important for habits forming in the first place, which we can talk about.
SPENCER: Yeah, it seems to me the rewards help habits form faster, like it's very easy to form a habit of eating candy every day, because the candy is rewarding. It's much harder to create a habit of eating cardboard [laughs] every day or something like that.
JIM: Yeah, exactly.
SPENCER: But I think that's not strictly necessary. Because if I think about martial arts, and I do extremely amateur martial arts, and when I practice a move over and over again, it's not clear to me that there's a reward. But eventually it kind of becomes ingrained where I can get a trigger response just by repeating it in the face of a certain trigger over and over again. It starts to become automated. It's like my brain is like, “Oh, okay, well, that's what you want to do when this happens.” Then, eventually the brain will just start doing it.
JIM: Yeah, that's right. Habits can be formed by reward or by just pure repetition. For certain things that don't have any reward, or any obvious reward anyway — and people have done this experimentally — you can create a habit in somebody just by making them do it over and over again. So that's interesting, but this is why, when journalists ask me, “How long does it take to make a habit?”, the answer is that it depends. If it's double-sugar, double-caffeine coffee, it's really easy to make a habit [laughs]. And if it's doing burpees, it's much harder. So, rewards can make some habits really easy to form. And that's why they can sneak up on you. You're just like, “Oh, I'm gonna have a donut today.” “Oh, that was good. Maybe I'll have one tomorrow.” And then next thing you know, you've got to have it. That kind of thing doesn't happen with things that are unpleasant [laughs]. So that's where you've got to use your willpower in the early stages to make sure the repetition happens enough that it actually does become a habit, and you don't need to think about it anymore.
SPENCER: Yeah, sometimes people want to put numbers on things like, “Oh, how many days does it take to form a habit?” I tend to think more in terms of repetitions than days. In martial arts, in one session, I can practice a habit many, many, many times. And yes, it's probably true that there's some value to breaking that up: practicing 1000 times in a row is probably not as good as practicing 250 times on four different days. Still, the more repetitions, the better. So, I'm curious if you have thoughts on this idea of purposely doing reps as a way to train habits faster.
JIM: Well, when you're talking about martial arts, or playing piano or something, these are very specific kinds of things, where it's actually a sequence of moves that have to happen one after another in the same order. You can think of it as one big habit, or you can think of it as a multitude of habits that are chained together. But the reason I think about it that way is this: let's say you're practicing a martial arts move for what to do if someone sticks a knife toward your belly. The trigger should be somebody trying to stab you with a knife. That's what you want the trigger to be. Most of what you're practicing, though, is which move follows the last move; the initial trigger is something that gets practiced much more infrequently. Sure, you have to learn how to do the move, so that you don't have to think, “Well, what's next, what's next, what's next?” But it's also important that the habit gets triggered when someone actually comes at you with a knife. If someone's coming at you with a knife in a bar, and you've only practiced in your training gym, then the habit is less likely to be triggered in that other situation. So for the initial trigger for something like martial arts, it's good to practice responding to the correct trigger in as wide a variety of situations as you can. You see what I'm saying?
SPENCER: Yeah, that makes a lot of sense. It's a really good point. I think about this with training blocks in boxing. If all you ever do is practice the motion of blocking, you're not gonna be very good at blocking. You have to actually practice with someone swinging a punch at you — hopefully not full force, because you're gonna get brain damage. But someone swings a punch at you without too much force, and then you do the block. Then they do it again, and again, and again; and there has to be variation. If they try to punch you exactly the same way every time, that's also not gonna work, because the trigger is gonna be too narrow. You want the trigger to be broad enough that it encompasses what you're realistically going to see in a fight, so that when a punch is thrown, you immediately go to block without having to think about it.
JIM: Yeah, I've got a funny story about that. In police training, they were training policemen to disarm people with guns, right? They drilled it over and over and over again. But the way the repetitions worked was that your training partner would hold up a fake gun, you would take it out of their hands (that was the success), and then, to practice again, you'd hand the gun back. And a cop did this. Then, in the real world, somebody pulled a gun on him; he took the gun out of the criminal's hand and then immediately handed it back to him.
SPENCER: Oh, no. That's awful. [laughs]
JIM: Talk about the power of habit ignoring reward, right? [laughs]
SPENCER: I think a problem with a lot of martial arts, actually, is that the way they're practiced is not necessarily going to work in a realistic setting, because the triggers are not realistic. Now of course, if you're just doing it for fitness or sport or whatever, that's totally fine. But insofar as you're doing it for self-defense, there has to be an element of realism, so that the triggers reflect the real triggers you're gonna have in real life. Otherwise, you're just not gonna have the right triggers installed.
JIM: Yeah. And I find that different teachers in different martial arts do this better than others. Sometimes it's very ritualized and you're barefoot. I've heard of women's self-defense training programs, though, where they actually have you carry grocery bags, and someone tries to grope you while you're carrying them, and you have to practice dropping the bags. The situation can be different in terms of where you are, but also what you're doing and who you're with, and all that kind of stuff. If you're really interested in self-defense, it is good to try to broaden your practice with different forms of the trigger. Now, if you talk to really good martial artists about the very few times in real life when they've had to use martial arts, the way they talk about it really reveals that it's habitual. They barely realize what they did until it's all over. And that's the situation you need to be in. That's why somebody who's not trained at fighting is really at a big disadvantage: their cognitive system isn't fast enough to think about everything they would have to do, while the trained fighter, meanwhile, is operating on habit with really effective stuff.
SPENCER: So let's talk about ‘implementation intentions', sometimes called ‘TAPs' or ‘trigger-action plans'. Can you tell us, what are they? And then how can we use them to help form new habits or correct bad ones?
JIM: So earlier, we were talking about replacing bad habits with new ones that have the same trigger. And you do that with implementation intentions. What that means is that you very specifically say to yourself what you're going to do in response to that trigger. You can say it to yourself, you can write it down, you can tell your friends, whatever, but there has to be a conscious, very explicit, very clear intention. “Every morning — Monday, Wednesday, Friday — I'm going to go to the gym and do half an hour of working out.” That is one level of specificity. You can also say what kind of working out it's going to be, so it can vary a little bit. But what you want to avoid are extremely vague [laughs] intentions. People screw this up with their New Year's resolutions. They'll say, “I'm going to exercise more,” or “I'm going to eat less,” or something, rather than something specific that is tied to a trigger. “If I have dessert, the next morning I will run.” That's much more doable, and it's clear whether you failed to do it. With “I'm going to exercise more,” how would you ever know for sure, at the end of a week, say, whether you'd actually exercised enough to satisfy the goal that you'd made for yourself, right? Implementation intentions, ideally, are trying either to create new habits or to replace old ones, where you have a trigger that you're aware of, and a specific action that you're going to do in the presence of that trigger. Even with that as a conscious thing, you'll forget the first couple of times, or you won't have the willpower to carry through with it once in a while. But the idea is that an explicit idea of what you need to do gives you a leg up, so that when those triggers happen, you actually do engage in the activity. And over time, if it works out, it'll be a habit, and then you can start working on something else in your life.
SPENCER: So what tends to go wrong when people try to form new habits?
JIM: The biggest problem, I think, is vagueness. People are not really specific about what they're going to do differently. It's not enough to want to be better at this or that; it's got to be a more specific thing. ‘Exercising' is vague, ‘doing sit-ups' is less vague, ‘doing 30 sit-ups' is less vague still, and ‘doing 30 sit-ups every day at 9' even less so. The more specific you can get, the easier it is to do it, and when you're done, you don't have to worry about it. I think that's the biggest problem. Also, people sometimes have goals that are way too hard. If you can break up a major life change into smaller chunks that are achievable over time, and you slowly build up toward it, then that makes you feel better about yourself. It makes you not want to abandon the entire enterprise when you don't live up to your own standards.
SPENCER: Yeah, I think it can be really useful to start with a habit that's so easy that it's very hard to fail at it. Let's say your goal is to do 100 push-ups a day or something (which is quite difficult), but right now you're doing 0 push-ups. Then maybe the first habit you should form is doing 5 push-ups every morning when you wake up, or something like that, right? If you set the goal at 100 push-ups right away, that's extremely easy to fail at. But 5 push-ups — assuming you can do any push-ups at all, and that's not too many for you — should be achievable, because it's only going to take you a minute or so, and you should be able to do it every day. And then once you've achieved that, you can say, “Okay, now I'm gonna try to do 10 a day,” or whatever.
JIM: Yeah, that's exactly right. Even better, I would say, is if you start really small, like, “I'm gonna do 1 push-up a day. And every Monday I'm going to add 1 push-up.” That's even more specific. Week 2, you do 2 push-ups a day. And in 10 weeks, you're doing 10 a day. But if you're in it for the long haul, if you're actually trying to change yourself, not just this month, but to develop a different way of living, then you have all the time in the world. [laughs] You know what I mean? And it's so easy to do 1 push-up a day. You feel like an idiot saying, “Oh, I don't feel like doing my push-up.” [laughs] You can also talk yourself out of doing 100 push-ups more easily than you can talk yourself out of doing one. And you can build it up really slowly and feel good about it the whole time.
SPENCER: People might think, “What's the point of doing one push-up?” But for building the habit, the habit itself doesn't care whether it's 1 push-up or 100. Once you have the habit of doing one consistently, you always have a time every day when you're doing push-ups, and now it's just about extending it. You already have the habit formed; you just need to tack on to the end of it, which I feel is vastly easier than building a habit from scratch where you have nothing there.
JIM: I think that's a great way to put it, and I never really thought of it that way. But what's really smart is that when you're doing just one push-up a day, you're not doing it because one push-up a day is going to improve your health in any way. If you're doing it in service of a habit, that actually will help your health down the line, so I think that's a great way to think about it.
SPENCER: Every morning, I have a sort of habit container, by which I mean that I have a sequence of habits I do every morning for my health. The first one is drinking a tall glass of water, and then I have a whole bunch of them after that. The idea is that it's very easy for me to add new things to it because I already have that habit container every morning; I know I'm going to do the sequence. Now it's just a question of what is in that sequence, and I can change that over time. I think something that's really useful is that if you have a certain moment in your day where you do your healthy habits, then you can experiment with putting different stuff in that slot.
JIM: Yeah, I think that's really smart. And also, I should say that it's good to do it in the morning, because for most people, the morning is the most routine part of their day. What I mean by that is, you might do something different on different evenings. And during the workday, depending on your job, you might not be able to do self-improvement; you might be too busy (if you're a server or something). The rest of the day is much more chaotic. The morning is a great time to develop habits because most mornings are almost exactly the same. It's very rare that you have something strange going on in the morning. Being on a trip is a good example of an exception. I write every morning. I get up, make myself a Vietnamese coffee, and start writing for hours. And I don't have to think about it. It's a total habit. But when I go on vacation, I have to decide to do it. It's the craziest thing, but it makes so much sense: I have the time trigger, but I don't have my house, I don't have my kitchen the way I'm used to it, and all of those things contribute to the habit actually working. So when I'm on vacation, I find myself having to decide to write, and I might find part of my mind trying to talk me out of doing it. You might have experienced this on a trip, or your listeners might have. When you go on vacation, it's not as easy to engage in the exercise that you did so effortlessly at home. And if you spend too much time away from home, by the time you get back, the habit might need to be reinforced again.
SPENCER: Yeah, so what I do for my own morning habit container (because I don't travel that much) is just say, “Okay, I'm off the hook when I travel.” So, I've just very tightly associated it with my home. But for people who travel a lot, that could actually screw up their habit. And to that point, if you think about creating a strong habit, missing days can be really bad. You're trying to create an association: if A occurs, I do B. But if A occurs a bunch of times and you don't do B, every time that happens, you're kind of weakening the strength of that connection. So in general, I think it's useful to try to create as many reps as you can, as many practice sessions of “A leads to B” as you can, and then not to miss: when A occurs, you try to do B every single time, so you don't weaken the strength of that connection. I think that's useful advice.
JIM: Right. And every day is fine for things you want to do every day. But if you look at the scientific literature, there are so many things they say you should do every day that you'd do nothing but those things (writing in a journal every day, exercising every day, meditating, and all the rest). [laughs] One way around that, though, is to pick specific days of the week or the month: “I'm going to do aerobic exercise on Tuesday and Thursday, and I'm going to do strength exercise on the other days,” or something like that. Or if you're just too busy and you want to meditate, you're like, “I'll meditate every Sunday night.” What we don't want is just saying, “I'm gonna meditate more,” or something like that. Once you've got specific days, you can set up your environment to help you. You can put it in your calendar, or you can ask your smart speaker, “Every Sunday night at six, remind me to meditate,” and it will tell you when it's time to meditate. The other thing that I think is really important (since we're talking about building habits) is that, to the extent that you can externalize this stuff, that's even better. Take donations, for example. The best way to be a good person is donating to the right charities. So, if you've set up an automatic payment to the charity with your bank, then you don't even need to make it a habit; you've externalized the entire thing. And you can focus on other stuff.
SPENCER: Right. Savings is another good example. You can set it up so your money from your income goes directly into savings. And you don't have to have a habit because it just happens without you thinking about it.
JIM: It's similar to habits in that what you want is to be able to think about whatever you want. And when your life is just moving along in the default way, it's helping you toward your values and your goals, whether that's through an automated computer program or building habits, or getting your friends to nag you about something. It's all about getting the default right.
SPENCER: How does willpower come into play here? I think a lot of people try to get themselves to do what they want themselves to do by exerting willpower. I'm of the attitude that while sometimes it's necessary, it's often a mistake. I'm curious to hear your thoughts on that.
JIM: I think people rely way too much on willpower. Your willpower is a precious resource, and trying to spend it on hopeless things is not advisable. Certainly, if you're trying to do something that you don't like doing, you have to have some willpower. I injured some muscles when I was playing squash, and I'm supposed to do these stretches and core-strengthening exercises often, and I hate doing them. I hate doing them. So, building a habit out of them is very challenging for me. What I've done is this: in the summer, my wife walks the dog for an hour in the morning. And I said to myself, “Okay, three days a week, these three days, I'm going to do my stretches while she's walking the dog.” If she comes back early, then I get to stop my stretches early, and I'm really happy. But it does take willpower. Even then, she'll go for a walk, and the habit part is that it pops into my mind, “Oh, it's time for exercise.” But then I'm still like, “Oh, my God, I don't want to do it. Can I maybe not do it?” And I have to really force myself to do it. So that's where willpower comes in. If you're doing something that's just hard and unpleasant for you, you have to use willpower to get yourself to do it. Hopefully, if you do it regularly enough, and with the same triggers, it becomes easier over time because habit starts to take over. Relying on willpower alone doesn't work, though, because your mind will always get distracted. If you're going to use willpower, take the donut again. You say, “I'm not gonna eat a donut at 2 o'clock anymore.” So 2 o'clock comes around and you're like, “Alright, I'm not gonna have a donut,” and you use your willpower. “I'm gonna resist the donut.” You're proud of yourself: “I resisted the donut.” And then five minutes later, the trigger is still basically there. “Oh, should I get a donut? No, I'm going to resist.” You resist, and then 15 minutes later, you've resisted 12 times, and you're so proud of yourself.
You feel like you deserve a donut. And if you're not thinking, you might just go for a walk with somebody and end up buying a donut, and then regret it later, because your mind can't be focused on everything that you need to do to make your life better. That's another reason why it's important to get your habits in line.
SPENCER: That example you gave of your own exercise, it sounds like you have very specific exercises you need to do to rehabilitate the injury. But I think with a lot of the kinds of habits we want to form, we actually have more flexibility. Let's say your goal is just to be healthy by exercising. That doesn't bind you to any particular type of exercise so much. And you can really start to think, “Which kind of exercise doesn't require as much exertion of my willpower?” If you're the sort of person who loves to be outdoors, maybe you want to do an outdoor activity that you actually just enjoy. Or if you like team sports and that's motivating for you, maybe you sign up for a team sport. You can find ways to help form a habit by finding an activity that achieves your goals, but with relatively less exertion of willpower.
JIM: With exercise, I think it's very clear. For me, I just can't stand exercise. I was very lucky to find that I enjoy playing squash and to me, it's just a game and I can get my heart rate up and be sweating, and just be having a ball. Unfortunately, after COVID, people thought it was a bad idea to put two people in a box breathing heavily, so I haven't played squash in two weeks. But that's exactly right. What are some habits that people might want?
SPENCER: Exercise is one example. Something like healthy eating: there are also a lot of different ways to eat healthy, right? You do have some flexibility there to try to choose the forms of healthy food that you actually enjoy, rather than ones you have to choke down. I do think it's pretty common that, whatever we're doing, there are usually multiple ways to do it. Now, maybe at work there are certain habits that you just have to do, where you don't have any flexibility and you just kind of hate it.
JIM: Or trying to reach out to friends more, or something like that. That's another place where you can be creative about what's the lowest-friction way to get in touch with this friend, or what activity you can do with this friend that is the most fun, and not make it hard on yourself. The more unpleasant the thing you're trying to make a habit of, the harder it's going to be, the longer it's going to take, and the more willpower it's going to require.
[promo]
SPENCER: Let's talk about animal welfare and animal consciousness. To just set this topic up for us, Jim, why should we care about this?
JIM: I think that people kind of care about animal welfare. They want to think that animals are being treated well, but unfortunately, the way most of them achieve that is by being deliberately blind to the nature [laughs] of animals and their welfare. Most people believe that most animal species that we have — that are larger than bugs, I guess — have the capacity to have a good or a bad life in a way that's morally meaningful. So yeah, it's a really big problem. If you think about all of the animals that there are, and all the ways that they're interacting with each other and with us, and how much they might be having good or bad lives, it's very possible that it dwarfs human welfare in terms of just raw numbers. We could talk about how we might decide that, but it's very possible that animal welfare is the most important moral issue that there is.
SPENCER: Yeah, I think one thing that's interesting to consider is just the massive scale of animals that are raised for food. For example, my understanding is that 9 billion chickens are raised for food each year in the US, and that's just a staggering number. That's more chickens just in the US for food than there are people on the whole planet. That kind of gives you a sense of scale, and that's just one animal. If you think about fish and other animals, the numbers are vast. Even if you say to yourself, "Well, maybe I don't care about animals that much," and you assign only a little bit of value to the lives of animals (like 1/20 or 1/100 of the amount that you assign to humans), you still get to pretty staggering numbers about how big a problem this is.

JIM: I think that's a great way to do it. You might say, "Okay, well, maybe it's more important that a human not suffer than that a chicken suffer the same amount." Take a broken leg: a human gets a broken leg, they suffer; a chicken gets a broken leg, it suffers. So how many chicken broken legs would be as morally bad as a human breaking their leg? That's something each person might think through for themselves. But that number has to be really vast to make a difference in how we act, considering how many chickens a person eats in a lifetime. When I was in my 20s, sometimes I would eat an entire chicken for dinner [laughs]. I would just cook a chicken and eat the whole thing. Even if animals don't suffer as much as humans (which might be true), that doesn't mean we're off the hook, because depending on how much less they suffer, the numbers are really, really large. It's even worse when we get to insects, but chickens are very relatable.
SPENCER: Right. One way to think about this is that there are two different axes. There's the relatability axis: how much empathy do we feel for that animal, and how much do we care about it? At the top of relatability would be something like a chimp, where people feel that chimps are very much like humans; they're really smart, and we can relate to them a lot. Or dogs, because a lot of people have personal relationships with dogs and really feel that dogs are sentient agents (in a lot of ways!), and they're cute, and all these kinds of things. And then down at the bottom, you have maybe cockroaches, or even further down, bacteria or something like that. The other axis is how confident we can be that the thing is sentient. That is — or maybe a better way to put it — how confident can we be that this thing can suffer, or that it can have goals, or that it can have any kind of moral status that we'd assign weight to? Do you want to talk a little bit about this idea of moral weight and how you think about assigning it?
JIM: The word you used, sentient, is actually the word they use in animal welfare to refer to a being's ability to have conscious positive and negative states: suffering, joy, pleasant and unpleasant states. So we have the question of which species are capable of sentience, that is, which creatures can have consciousness, or suffering, or satisfaction with their lives. As you said, most people are very confident that other healthy adult humans are conscious. But we start to get disagreement even when you get to newborn infants, or dogs, and then all the way down. At some point, everybody starts to be uncertain about where their intuitions lie. But people's intuitions are based on ridiculous things. There have been studies of when people [laughs] attribute consciousness to something. Wegner did some great work on this, looking at agency versus feelings. There's a bit of a tradeoff: the more agentive something is, the more active it is in doing big things, in some sense the less we think it can feel. So people who are great movers and shakers of the world, for good or bad, like Mother Teresa or Hitler, people think they actually suffer less.
SPENCER: That's counterintuitive to me, because a rock has no agency, and I think people don't think rocks can suffer. In general, it seems like, for things with no agency, a lot of the time people think they can't suffer or have very little suffering. I'm just a little confused about that example.
JIM: Yeah, it's not a completely linear relationship or anything. But if you've got a being that, in general, we think is conscious, then as it gets more agentive, that can sometimes reduce the amount of empathy we feel and how much we think it can have feelings. It's related to something called the myth of pure evil. There's this idea that there are bad people who do things to victims. The extreme stereotype of this is that the person doing the harming is extremely agentive; the victim did nothing, can do nothing, and does nothing but suffer. The feelings of the perpetrator are irrelevant or almost nonexistent; they just don't even cross people's minds. It's an interesting place where the extremes of these two factors diverge a little bit. So we have this myth of pure evil, where we've got these feeling victims and active perpetrators.
SPENCER: Suppose we want to try to be scientific in how we think about this, and we want to make some decisions around how much we should care about a chicken versus a cow versus a cricket. This is obviously a really difficult philosophical problem, but how can we begin to think about it scientifically?
JIM: There's a lot of disagreement out there. People will sometimes estimate the level of sentience in a species by using some other criterion. It's pretty much accepted that we can't directly know what any other being is feeling; a subjective state is inferred from objective states that we can measure. Sometimes people use brain size, or cortex size, or behavioral flexibility, or encephalization quotient (which is the relationship of brain size to body size). All these different measures give you subtly different results. There's a good correlation among them, but they give different results about whether fish are conscious, or how conscious they are. That is what I might call the Muted Animal Theory, where we think that humans are, as far as we know, capable of 100% sentience, and other animals are sentient to a lesser extent. If we go back to the broken leg example: if you break the leg of a human, let's say it counts for 100, and if you break the leg of a cow, it counts a little less because they're not as conscious. Now, not everybody agrees with this. Some people think that non-humans are not conscious at all: they're not sentient, they're not conscious of anything, and there's nothing going on upstairs.
SPENCER: Just to clarify: when you say non-sentient here, you mean something like there's nothing it is like to be them? The way that, when we look at a red apple, we internally experience this feeling or quality of redness, they would not have that; there would be no internal experience. Is that right?
JIM: Yeah, they think they're merely automata that go through the motions without any conscious experience. This is, I would say, a minority view among experts; I'd guess about 3% of people in the field think this. For example, Daniel Dennett believes that consciousness is a result of cultural memes that are transmitted by language. Dennett is a little slippery, and I don't want to pin him to anything in particular, but on the face of it, this means that any being that doesn't learn a language and a culture will never be conscious. That's a bit of an extreme view. But people who think that language, or some detailed self-concept, is important to consciousness might deny that any non-human animals are conscious.
SPENCER: Right, I've heard some people argue that consciousness has to do with having a model of yourself in your model of reality. What do you think of that view?
JIM: I was going to say that there's no good evidence for it. That's true, but there's not a lot of good evidence about a lot of stuff in this field. [laughs] There's just great disagreement among people's intuitions about this. There are people who say that every time you're conscious, you're conscious of yourself being in that state. And there are other people saying, "That's totally not true. That's not how I experience it at all." I lean more towards the latter. I know about flow states; I know what it's like to be completely engaged in a movie and forget yourself completely. To say that you're unconscious during the movie is certainly not true, but you're definitely not aware that you're watching a movie; you're not aware that you are experiencing those characters and those emotions. You're just feeling the emotions and involved in the story. It does seem to me that, on the face of it anyway, we have lots of conscious experiences that do not involve conscious awareness of ourselves. But there are some very smart people out there who disagree with that.
SPENCER: Right. Unfortunately, we're in the territory of philosophy where very little is certain.
JIM: Yeah. Going the other direction, there are people who say that animals suffer exactly the same amount as people, and that there's no reason to think the suffering of a hawk or a dog would be any different from that of a human being. I guess it's a minority view, but I think a lot more people hold it than the unconscious animal theory; I call it the 'Same Suffering Theory.' Then there's a small percentage of people who think that animals might suffer more. Now, this is a weird thing to wrap your head around. I call it the 'Tinkerbell Theory.' [laughs] In Disney's Peter Pan movie, Tinkerbell is kind of a nasty character. When she gets angry, she's so consumed by the anger that she has no inhibition. And when she's happy, she's just happy. The creators said (I read) that Tinkerbell is too small to contain any subtlety of emotion, and that's why I call this the Tinkerbell Theory. The idea is that if there's a snail or something, and the snail is suffering, it can do nothing but suffer. When you suffer, say from stubbing your toe, you can put it in context. A great example is childbirth: many women experience enormous pain during childbirth, but that pain is not as bad as the same amount of pain from getting bitten by a crocodile, because it's part of something meaningful. Having a baby is this beautiful thing that you might have wanted for years, and you can attenuate the suffering from that pain because of the narrative you put around it. A snail can't do that. So some people have suggested (we don't know, of course) that a snail's suffering might be cosmic, bigger than anything a human could even imagine. I really hope that's not true [laughs], because we're kind of screwed if it is. But those are the theories as I lay them out.
So there's the 'Unconscious Animal Theory,' the 'Muted Animal Theory' (which I think is the majority view, by a small margin), the 'Same Suffering Theory' (that animals suffer just like humans), and then the 'Tinkerbell Theory' (that they suffer more).
SPENCER: So you think most people believe that animals suffer less than humans, but nonzero amounts?
JIM: Most experts think that, yeah. And the reason I'm talking about this is that there's so much uncertainty here that I think a rational person (particularly a rational non-expert), to respect the uncertainty in the field, should hold a flexible belief that is roughly informed by the percentages of scholars who believe these different things.
SPENCER: So you basically take the different scholars and their views, and you kind of do a weighted average, based on how common the views are?
JIM: Yeah, yeah, to the extent that you can. Try not to just act as though one of them is true. Take the muted animal theory, which I've estimated around 60-ish percent of scholars in the field believe. If you're making a decision (particularly a big decision, like if you're in charge of a country and can change the laws about how animals are treated), rather than just picking the top theory and acting as if it's 100% true, you make a policy that takes into account the spread of beliefs, weighting each theory by the probability that it's true, according to its credence.
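The credence-weighted approach Jim describes can be sketched in a few lines of Python. To be clear, every number below is an illustrative assumption; only the four theory names come from this conversation, and these are not his actual survey figures:

```python
# Credence-weighted moral weight of a chicken relative to a human.
# All credences and weights are made up for illustration.
theories = {
    # theory name: (assumed expert credence, assumed chicken weight)
    "unconscious_animal": (0.03, 0.0),   # animals feel nothing
    "muted_animal":       (0.62, 0.05),  # animals feel less than humans
    "same_suffering":     (0.30, 1.0),   # animals feel the same amount
    "tinkerbell":         (0.05, 2.0),   # animals feel more

}

# Expected weight = sum over theories of P(theory) * weight-under-theory.
expected_weight = sum(p * w for p, w in theories.values())
print(expected_weight)  # ~0.431 with these illustrative numbers
```

Note that an arithmetic expectation like this gets pulled up by high-weight theories even at low credence; the point estimates Jim reports later (like 0.002 for chickens) are medians, which can come out far smaller.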
SPENCER: I think it's an interesting approach, because it's saying, "Look, we have this incredibly difficult philosophical problem we don't know how to resolve, so we're going to replace it with a much easier problem, which is surveying the experts on what they believe, and combining those survey results." Now, I definitely don't think this works in all scenarios. It wouldn't work if, let's say, the whole field were biased towards one answer, not for good reasons but just because of fad or something like that. Or maybe it's such a difficult philosophical problem that it's just way beyond human ability, and which views experts land on is kind of random [laughs] and doesn't have much to do with the true answer. But as a practical solution, I like it, because it's probably better than what most people could do if they don't invest much time thinking about it anyway.
JIM: I think you're right, especially the first part. When we have this survey, we kind of assume independence, but of course there isn't true independence. If there was one really influential figure... let's take Peter Singer, for example. Peter Singer is a philosopher who has been incredibly influential on animal welfare. If not for him and his influence, these numbers might be very different. A lot of the people we might survey know about Peter Singer; they're all living in a world that knows about Peter Singer. To that extent, their opinions aren't independently generated, so that's a problem. But on the other point, that it could be just random: I think you still would want to use this method, because if the experts are picking random things, that is important information. It means there are no data or good arguments out there to turn people one way or another, and that means you should probably be extremely uncertain about the issue as well.
SPENCER: One interesting thing you can think about for a field is that if the people in a field all agree with each other, it doesn't mean they're right. But if they all disagree with each other, then you know they can't all be right, so you know that a lot of them must be wrong. For example, we can apply this to religion. A lot of people disagree with each other about what the right religion is. We know they can't all be correct because their views are mutually incompatible (at least some of their views are), and this means that at least a certain percentage of people have to be wrong, insofar as they hold mutually incompatible views. Now, I tend to think this philosophical problem with animals is so hard that I'm not sure how much I trust the experts on it. But I think trying to do a weighted average of experts is probably much better than just thinking about it for 30 minutes and coming to some view, although I don't know how [laughs] trustworthy that is.
JIM: In my enterprise, the thing that ties together the habit formation and the animal welfare (for any listeners who might be confused about it) is that I'm very interested in self-improvement. And part of self-improvement, in my mind, is not just being healthier and happier, but also being a better person. So every thoughtful person who wants to be better needs, I think, to wrestle with these ideas of animal welfare. You have to either do something or not do it. You can't just throw up your hands and say, "It's unknown, it's too difficult to know anything." Because what do you do with that? Do you not try to help animals? Do you try to help animals a lot? You have to make some decisions about how you're going to live your life, as though something were true. So I'm just trying to figure out a rational way to deal with the uncertainty.
SPENCER: One thing I'll just mention about this approach is that, as complex as this problem is, we're still really operating under only one moral theory, which basically says, "We need to consider how much these beings can suffer, or how much they can experience positive mental states." It doesn't take into account things like, "Well, maybe it's wrong to end the life of an animal, even if you do it in a pain-free way." So I'm just curious: do you have thoughts on other approaches besides the sort of utilitarian one?
JIM: Yeah. But I don't really consider them very seriously. Because most people are utilitarian when it comes to animals. [laughs]
SPENCER: Is that true?
JIM: Yeah, yeah, it's true. If you talk to deontologists or virtue ethicists or whatever, when it comes to animal welfare policy or thinking about animals in general, they tend to have a more utilitarian approach to it. Everybody can think in a utilitarian way. When you're talking about whether a person stranded on an island is allowed to eat fish, they tend not to think about the fish's right to have a life or something like that. You can have a deontological approach to animal ethics, but I don't think it works very well at all, because you basically can't live without killing, unless you're a Jain or something.
SPENCER: I'm kind of confused about that, though. Imagine a person on a deserted island, and the only way they can survive is by killing animals. Suppose they don't have a way to kill them painlessly; the only way to fish is to create some kind of line, and the fish are going to suffocate until they're hacked to pieces or whatever. In a straight-up utilitarian calculus, if you assign those fish as much value as you do yourself, you might just decide that the pain you're going to cause by eating thousands of these fish actually outweighs the benefit of preserving yourself, right? I think, in that case, a lot of people will use a non-utilitarian way of thinking and say, "Well, it's okay to eat something for your own survival," even if they thought it was not ethical to do in everyday life, where they could easily avoid it. They'd be willing to do it for survival purposes, as a special case.
JIM: Yeah, but this is the problem with deontology: the more complex the rules get, the more it looks like utilitarianism. When you try to justify why it's not okay for me to get Kentucky Fried Chicken at normal times, but it is okay for me to kill fish with a spear if I'm starving, how do you justify that without something that resembles utilitarianism? I feel like the animal welfare stuff skews heavily utilitarian, and that's why I don't talk about rights a whole lot. Because animal rights as such is more of a public face [laughs] for the matter than something that people actually really believe, I feel.
SPENCER: Do you think groups like PETA, adopt a kind of utilitarian way of thinking about animal suffering, where they are really trying to take actions that minimize suffering and maximize well-being for animals, as opposed to a more rights-based viewpoint?
JIM: I think it's hard to figure out what PETA actually wants, because PETA is a huge organization with a million different arms. Part of their mission is for PETA to have a lot of money (all charities are kind of like that), so what they say and how they advertise to try to get money is one thing, and what they do with the money is another. These can interact, because they know that what they try to do with the money is going to affect donations and that kind of thing. So I think they sometimes talk about rights because laypersons respond to rights. When I say that people are usually utilitarian when it comes to animals, I'm mostly talking about scholars.
SPENCER: Okay, because I was going to say, it's not my experience with laypeople that they mostly talk in utilitarian terms.
JIM: Laypersons are just wildly inconsistent about what they think about animals; they just haven't thought about it too much.
SPENCER: Okay, let's assume for the sake of argument that we use a utilitarian framework, and let's take your approach of weighting and combining expert opinion. What does that get you in terms of the value of different animals' lives?
JIM: I talked about the different opinions about animal consciousness and sentience, and who thinks what. But then there's also disagreement about how much animals matter, which was kind of surprising to me. I'm a pretty hardcore hedonic act total utilitarian [laughs], so for me, it just seems very obvious that if an animal suffers at 2% of the human capacity, then you just do the math, and the moral value falls out of that. But some people don't believe that. They think that even if, let's say, a chimp can suffer a broken leg as much as a human does, we should still treat humans better. So we've got these two things going on: how conscious is the animal, and how much does it matter morally. For some people, those don't line up. For me, they line up perfectly: the same amount of utility in a mouse or a human is of equal moral value. Some people just flat out disagree with that. So, respecting that disagreement, you've got these two different disagreements, and then you try to figure out how you should behave morally. If you respect the uncertainty of the field, you take both of those into account, and that's what I tried to do. When I was analyzing all this stuff, I started to guess at what percentage of people in the field believe what, and came up with ranges of how much different animals matter (chickens, cows, pigs, crickets, that kind of thing). That's where I went with it.
SPENCER: So give us a ranked list. What are some of these, going from most valuable by this method down to least valuable?
JIM: I looked at elephants, just because I wanted to pick something bigger than a human with a bigger brain [laughs]. Elephants came out quite valuable, and it mostly follows what I think people would expect, in terms of point estimates. What I mean by a point estimate is: if you're just looking for a single number to use, you take the average. But which average you use matters too; the geometric and arithmetic means come up with different numbers. The main conclusion of this whole thing, though, is that the uncertainty is really, really vast: the range between feeling nothing at all and feeling as much as a human explains something like 90% of the variance, which basically means that we really don't know. Now, with point estimates, you can come up with numbers. For chickens, for example, our analysis ended up with 0.002. What that means is that you multiply whatever the human suffering would be by 0.002, and that's how much suffering the chicken would experience.
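To see why the choice of average matters when estimates span orders of magnitude, here's a minimal sketch with made-up numbers (these are not the survey's actual estimates):

```python
# Hypothetical expert estimates of chicken sentience relative to a human,
# spanning four orders of magnitude (illustrative only).
estimates = [0.0001, 0.001, 0.01, 0.1, 1.0]

# Arithmetic mean: dominated by the largest estimate.
arithmetic_mean = sum(estimates) / len(estimates)

# Geometric mean: tracks the middle of the spread, but is undefined if
# any estimate is exactly zero (as under the unconscious-animal theory).
product = 1.0
for x in estimates:
    product *= x
geometric_mean = product ** (1 / len(estimates))

print(arithmetic_mean)  # ~0.222
print(geometric_mean)   # ~0.01
```

With the same inputs, the two averages differ by a factor of more than twenty, which is one reason point estimates in this area should be read loosely.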
SPENCER: And you're using a median there of the experts?
JIM: Yeah, that would be the median.
SPENCER: Got it. So if you're going to prick a human with a pin (let's assume that pricking a human and pricking a chicken with a pin are similar actions), then if we call the amount it matters for the human one, the amount it matters for the chicken would be 0.002 by this calculus.
JIM: Right. So, pricking a human with a pin would be just as bad as pricking some large number of chickens with a pin, if we consider those equivalent.
SPENCER: So, by that calculus, pricking a human with a pin for, let's say, 10 seconds would be like pricking 500 chickens with a pin for that same amount of time.
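Spencer's 500-chicken figure is just the reciprocal of the 0.002 point estimate quoted above. As a one-line check:

```python
# Convert a per-animal moral weight into "how many animals equal one human."
chicken_weight = 0.002  # chicken suffering relative to a human (median point estimate)

# One human pin-prick is morally equivalent to this many chicken pin-pricks:
chickens_per_human = 1.0 / chicken_weight
print(chickens_per_human)  # 500.0
```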
JIM: Yeah, that's where I ended up on how we should weigh these, because one thing I'm interested in is comparing how we can improve human and animal welfare, and which one we should pick. I wanted to put it all in the same currency, so to speak, so that we could look at whether we can do more good helping animals or humans, for example.
SPENCER: How does that compare to elephants?
JIM: With the elephant, we might say the analysis would come out somewhat absurd if it said that elephant lives are more valuable than human lives. But the median, anyway, comes out to be a little less than humans. Elephants come out closer to humans than any other species.
SPENCER: So what is it? It looks like it's 0.2 here for elephants. So you'd need to hurt five elephants to have the equivalent of one human, basically?
JIM: Right.
SPENCER: Got it. Are there any that really surprised you here? What stood out?
JIM: No, I don't think I was particularly surprised by any of it. I guess I was unaware of the degree of uncertainty [laughs]. The thing that was most surprising to me was just how uncertain the whole thing is, and very, very smart people disagreeing wildly on these matters.
SPENCER: How does this change your own behavior? What do you do with this information?
JIM: What you can do is look at the actions you take and make calculations about how much help or harm you're doing, if you know how much these animals matter (or rather, you don't know [laughs], but you have some kind of reasoned way to come up with an estimate of how much they're suffering). In the effective altruism movement, we have numbers that tell us how much help we're giving for every amount of money spent. But up until my analysis (as far as I know), nobody had figured out a way to effectively compare between the three main areas of charity, which for effective altruists are human health, animal welfare, and climate change. There are other ways of helping the world, like politics and stuff, but those are even more uncertain than [laughs] anything else, so that's what they stick to.
SPENCER: Just to flag there: I'm a little surprised that you mentioned climate change. That was the third one you said, right?
JIM: Yeah.
SPENCER: Okay, because usually I hear the third one being more about existential risk broadly, like threats to the survival of the species or catastrophic risks.
JIM: Ah, yeah. So that is a big concern, but nobody has any numbers for it. Nobody has any way to calculate how much good you're going to do for a dollar spent to try to prevent existential risk.
SPENCER: I see. So, the ones you focused on comparing were health, animal welfare and climate change.
JIM: Yeah. You're right, though, that existential risk is another huge pillar of effective altruism. I'm comparing charity evaluators specifically, where we can put dollar amounts on how much good we're doing. For human health, which is usually fighting malaria, they would say that you can save a year of human life for about $78. For helping animals, you can save 350 animals by donating $100. And for the fight against climate change, these numbers are extremely recent and very uncertain, but what some economists have recently done is try to figure out the human life cost of putting carbon into the atmosphere over the next 100 years. How many more people are going to die? How many years of life are we going to lose across all humans, taking into account everything they can, from adaptation to changes in technology and health? Once we have those estimates, we can look at the effectiveness of different charities in reducing the carbon put into the atmosphere, and then we can say, "Okay, when we donate a certain amount of money, how many years of human life are we saving over the next 100 years?" That's really recent stuff. So when you have all these numbers for animals (like a chicken being worth 0.002 of the suffering of a human), you can say, "Okay, for saving 350 chickens, what is the morally equivalent number of years of human life saved?" And when you do that, you can look to see whether your dollar does more good with the most effective animal charities, the most effective human health charities, or the most effective climate change charities.
SPENCER: That's really interesting. You're trying to bring them all into a single unit, despite them being so fundamentally different. One quick question about the number you threw out for the Against Malaria Foundation. Was that $78 per year of life saved? Is that what you're saying?
JIM: Yes.
SPENCER: Because usually people hear a number more like multiple thousands. But I guess that's for an entire life.
JIM: Oh, it's $5,000 for a life. And if you look at the lifespan, it works out to about $78 a year.
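The implied lifespan is just the ratio of the two figures; a quick back-of-the-envelope check, using the numbers as quoted:

```python
cost_per_life = 5000  # dollars per life saved (AMF figure as quoted)
cost_per_year = 78    # dollars per year of life saved

implied_years = cost_per_life / cost_per_year
print(round(implied_years, 1))  # 64.1 years of life per life saved
```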
SPENCER: Okay, got it. That makes sense. Cool. Okay, what happens when you do this analysis to try to compare across these different areas?
JIM: Yeah. I think the most important thing here is: if you believe that being good is mostly a function of your charitable donations (which I do believe), then what you want is to pick the most effective charity from the domain that is most effective. And what I found is that human health is the most effective one. The least efficient is fighting climate change, where Cornell has these estimates (very uncertain), but the point estimate is that you save a year of human life for $5,108 by cleaning up the atmosphere. If you want to help the world by improving the welfare of animals, you can save the equivalent of a year of human life for about $1,300. So it's almost four times cheaper to do good by helping animals than by cleaning the atmosphere. But none of that compares to $78, right? It's over 10 times cheaper to help the world by preventing malaria than it is even by helping chickens and other livestock. That's what the analysis came up with. So the numbers for climate change are $5,108 (that's the Clean Air Task Force, one of the most effective climate change charities), The Humane League saves the equivalent of a year of human life for about $1,306, and the Against Malaria Foundation and others like it save a year of human life for about $78.
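Putting the three point estimates in one place (the dollar figures are the ones quoted in the conversation; the ratios are simple arithmetic on them):

```python
# Cost to save the equivalent of one year of human life, per the
# point estimates discussed (all heavily uncertain).
cost_per_equivalent_year = {
    "Against Malaria Foundation": 78,
    "The Humane League": 1306,
    "Clean Air Task Force": 5108,
}

baseline = cost_per_equivalent_year["Against Malaria Foundation"]
for charity, cost in sorted(cost_per_equivalent_year.items(), key=lambda kv: kv[1]):
    print(f"{charity}: ${cost} per year ({cost / baseline:.1f}x the malaria cost)")
```

Given the uncertainty Jim emphasizes, the ranking, rather than the exact ratios, is the robust takeaway here.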
SPENCER: Yeah, it's really cool that you try to compare them and obviously it's a huge project.
[promo]
SPENCER: One thing I wonder about is, at what point do we say there's so much uncertainty that it's not worth comparing? [laughs] At what point do we look at the confidence intervals and say, "Well, really, we just don't know enough"? What's your thought on that? Is your attitude that even if there's massive uncertainty, we're still better off making an expected value estimate? Sure, that estimate will change a lot. Sure, that estimate will have orders of magnitude of uncertainty. But is going with the mean still a winning strategy?
JIM: Absolutely, that's what I think. Here's the thing: if you say, "Oh, we just don't know," you still have to do something. And what's going to happen is, if you throw up your hands, avoid numbers, avoid reasoning about it at all, and avoid any evidence on the grounds that it's too uncertain, you're going to fall back on your default beliefs about the system, whether you're conscious of them or not, and act accordingly. So, no matter what you do, you are acting as though something were true. By ignoring data under this kind of uncertainty, you're merely going with your gut feeling and hoping that that's better. And I think that's always bad. So, I have this somewhat unpopular view that a bad number is better than no number [laughs].
SPENCER: Yeah, it's an interesting question. When is it better to have no number? And I expect the answer might partly be: when a number misleads you into being overconfident or something like that, right? Say we get these point estimates suggesting it's much better to save a human life using something like the Against Malaria Foundation; maybe it's 10 times cheaper. Ten times sounds like a really big number. But then if you were to zoom out and look at the confidence intervals or uncertainty intervals around that, maybe we're only 51% confident that it's actually better than saving animals or something like that. Then quoting it as a 10-to-1 ratio might give us the wrong level of confidence. Does that make sense?
JIM: Yeah. I think that these numbers need to be taken with the understanding of uncertainty. But I still challenge you. If you were to throw up your hands and say, “It's too uncertain.” Where do you donate? If it's so uncertain, then how can you choose animal welfare over human health rationally?
SPENCER: Maybe you can't. [laughs]
JIM: Right. That's the kind of thing people say: "Oh, we can't estimate how many chickens are being saved by the Humane League, so I'm just going to donate to my local animal shelter." Well, come on now. No matter how bad these numbers are, they're not so bad that your animal shelter is going to be a better use of the money if you care about animals in general. But that's the kind of thing that can happen; people have these gut feelings about what makes sense and what's the right thing to do. You're just going to fall back on those unexamined, uninformed gut feelings if you abandon something because you think it's too uncertain. You can push back on me; not everybody agrees, but I don't see a rational way forward if you abandon the numbers.
SPENCER: Yeah. Saying that we don't know how to compare X and Y doesn't mean we can't infer anything. I think there are a lot of cases where you can say, "Okay, these particular charities are just not going to be very effective," and you can be quite confident in that. But then you can get into the realm of ideas that all seem very good, and yet struggle to compare them to each other. You just say, "Ah, this thing seems really good. This other thing seems really good." We may not be able to say with any confidence exactly which ones are better, but we know at least they're not in the bulk of things that are not very effective. That's a bit more how I look at it. Let me use a metaphor here that maybe explains how I think about this a little better. I'm a mathematician; I love putting numbers on things. I love calculations. But I also know that not everything we claim as a probability is really a probability. Imagine someone who's just very poorly calibrated. Whenever you ask them whether something will happen, they throw out a number between 0 and 1, and they call it a probability, but it's actually just completely made-up bullshit. We could do an expected value calculation based on that person's probabilities. But what are we really doing there? We're just putting random numbers in. That's one extreme, where the probabilities are just made-up numbers between 0 and 1 with no reflection of reality. At the other extreme, we can do something like flip a coin 10,000 times, notice that half the time it lands on heads, and then for the next flip say, "Ah, there's a 50% chance it's gonna land on heads," and that's really a probability in a real sense. I think what happens in real life is that we tend to be somewhere in the middle. In a lot of real-life situations, we're not all the way at the extreme where the numbers are completely made up.
But we're also not anywhere close to the realm of flipping a coin, where we really know the numbers. And to me, there's some point at which numbers get so uncertain that the expected value calculation is no longer meaningful, because the numbers we're plugging in are not real. They're not real probabilities; they're not real utilities. However, if you get far enough toward the other side, where they actually are meaningful, real numbers, then it starts to make sense. So, I guess that's my view. I'm curious to hear your reaction.
JIM: My question is, what's the alternative? If you ask someone to estimate a probability, that is not a random number. They're not picking a random number between 0 and 1. They're accessing their feeling of conviction about it, which does have information in it, right?
SPENCER: Yeah, in real life, it does have information. I was just trying to use a hypothetical example of someone who knows absolutely nothing about a domain. You could ask them for numbers between 0 and 1, but those numbers would essentially be random if they had no knowledge or information about the domain they're predicting. So that's one extreme.
JIM: Okay. In that extreme example, then asking somebody for the numbers is the wrong way to get the number, and you need a way that is more informed.
SPENCER: I'm saying that trying to do an expected value calculation based on that person's completely made-up numbers doesn't make sense, because they're not real probabilities. My point is that there are these two endpoints: on one end, expected value calculations clearly make sense and are clearly the right thing to do. On the other end, they make no sense, because the things you're plugging in don't make any sense; they're not real probabilities, they're not real utilities. And then the question is, how far do you have to move from one end toward the other before you can start using expected value calculations as a tool?
JIM: Well, I think if their numbers are truly random, of course, they're going to give random results because of the garbage in, garbage out kind of thing. But I think that actual random numbers are much rarer than you think. And, if you've got a highly uncertain number, that is very different from a random number. And if you're using the best number you can get, even if it's highly uncertain, I just wonder how else you should reason about it, if not by using that number.
SPENCER: Yeah, so let me give an alternative; it's a good point. If we were to follow the expected value calculation, even knowing that it's incredibly uncertain, that would actually suggest that we should put all our resources into that one solution, in this case something like the Against Malaria Foundation. It would suggest: don't give any money to animals, don't give any money to fight climate change, until the marginal impact per dollar goes down enough, due to capacity constraints, that the expected value no longer comes out ahead. Whereas I would argue I'd rather have more of a portfolio approach, spreading across these different causes. I'm actually totally cool with the idea of, "We did this complicated expected value calculation, and we think this charity is better than the others. Therefore, we should put more money into it."
JIM: Okay, that's what I was gonna say. Wouldn't you use my numbers to determine the spread?
SPENCER: Maybe I'm misunderstanding you. But what's the justification for putting any money into other things, if doing so would just lower the expected impact, by this way of looking at things?
JIM: Yeah, well, I wouldn't recommend it. If you want a portfolio approach, and you have, let's say, $10,000 to donate, how do you choose the spread, if not by using calculations akin to mine?
SPENCER: I think what I would want to do in practice (the ideal scenario, I think) is to have both a logical argument based on what you believe is true about the world and a numerical calculation, and to compare them to each other. If I didn't have a way of explaining, in a way that made sense to me, why the Against Malaria Foundation is so effective relative to these other cause areas, I wouldn't blindly trust the output of this sort of spreadsheet approach. There's so much uncertainty going into many of the numbers used to calculate that final output that I have pretty low confidence in the final result. So, I can bolster it by using multiple methods. What I'm really advocating for is two things: one, use multiple methods when we're very unsure; don't just do it one way, but look at it from multiple angles. Two, instead of putting everything into the number that happens to come out best in a spreadsheet, spread it out in a portfolio, which is not what expected value maximization would tell you. Expected value maximization says that if one thing comes out even slightly better, even 0.01% better in expected value, you put all resources into it until you start hitting capacity limits.
JIM: What I like about what you're saying is that you could actually test it in simulation. You could set up a charitable investment portfolio and put in the numbers, but also the uncertainty about the numbers, and then do like a Monte Carlo simulation, run it 10,000 times and see if you're right. Maybe you are right. Maybe the uncertainty does encourage a portfolio approach and you could probably show it. I don't think it's going to come out that way, but it might.
SPENCER: I think expected value maximization is a framework that says you should never have a portfolio approach, right? It says that if you have a set of options, you should always put everything into the one with the highest expected value. I think that's just implied by the theory, and I think that makes sense with small gambles. If you're playing poker against someone else and you're making small bets, let's say you have a large amount of money and each hand you're betting a small amount, then yes, expected value maximization is what you do, and you're gonna win more over the long term than if you don't maximize your expected value. But when you're talking about allocating all the funds, then suddenly other things like 'risk of ruin' come into account: in the poker example, the probability that you actually run out of money and can't keep playing.
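Spencer's risk-of-ruin point can be illustrated with a quick simulation. None of this is from the episode; the 60% win rate, the stake fractions, and the round counts are invented for illustration. Betting the entire bankroll on a positive-expected-value gamble almost guarantees eventual ruin, while betting a fixed fraction never goes bust:

```python
import random

random.seed(1)

def simulate(fraction, rounds=100, trials=2000):
    """Repeatedly bet `fraction` of the bankroll on a 60%-win,
    double-or-nothing gamble (positive expected value each round).
    Returns (fraction of trials ruined, average final bankroll)."""
    ruined = 0
    finals = []
    for _ in range(trials):
        bankroll = 100.0
        for _ in range(rounds):
            stake = bankroll * fraction
            if random.random() < 0.6:
                bankroll += stake   # win: stake is doubled
            else:
                bankroll -= stake   # loss: stake is gone
            if bankroll < 1e-9:     # busted; can't keep playing
                break
        if bankroll < 1e-9:
            ruined += 1
        finals.append(bankroll)
    return ruined / trials, sum(finals) / trials

print(simulate(1.0))   # bet everything each round: essentially always ruined
print(simulate(0.2))   # fixed 20% fraction: never ruined, steady growth
```

With these assumed numbers, the all-in strategy maximizes single-round expected value but, repeated over 100 rounds, nearly always ends in ruin; the 20% stake (which happens to be the Kelly fraction for this particular gamble) compounds instead. That is the sense in which repeated, whole-bankroll allocation behaves differently from a single small bet.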
JIM: But do you agree with me that you could actually test this in simulation?
SPENCER: What would you be testing for? What would be the outcome that you're trying to maximize?
JIM: Well, let's just say that you were trying to maximize welfare. And there just might be a way to use all the uncertainty for all the numbers. And it just might turn out that spreading your charitable investment across multiple things happens to empirically end up with a better outcome after, say, 10 years, right?
SPENCER: Yeah, if we had really good clean measurements after the fact of how well something worked, then what you could do is you could try these different approaches. Then you could say, “Well, which approach actually led to the most well-being?” If all you care about is well-being, let's say we're working on that framework, then you could actually do that, and you could compare approaches.
JIM: No, I'm not even talking about empirical data; I'm talking about just running a simulation in a spreadsheet. Because all of these uncertain numbers have ranges, right? They have distributions, and you can sample from those distributions. For the animal charity (some of your listeners might be surprised to hear this), sometimes trying to increase animal welfare actually reduces it. If you put a cow in an ad, it sometimes reduces people's desire to eat beef, and they eat more chicken, which does more harm. So the range for how much good the Humane League is doing actually dips slightly into the negative. The mean isn't negative, but part of the confidence interval is. So every so often, when you sample from it, the donation would actually make the problem worse. What I'm thinking is that, even just using the numbers and the spreads, you could maybe run a simulation and see if this portfolio approach is better.
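A minimal sketch of the kind of spreadsheet simulation Jim describes. The means loosely track the per-dollar figures quoted earlier in the episode, but the distribution shapes, spreads, and the example allocation are invented for illustration:

```python
import random

random.seed(0)
N = 10_000          # Monte Carlo samples
budget = 10_000     # dollars to donate

# Hypothetical effectiveness distributions, in human-year equivalents saved
# per $1,000 donated. Means loosely follow the episode's point estimates
# ($78, ~$1,306, ~$5,108 per human-year); the spreads are made up.
def sample_malaria():
    return random.gauss(12.8, 3.0)

def sample_animals():
    # Wide uncertainty whose range crosses zero, mirroring Jim's point
    # that animal-welfare campaigns can occasionally backfire.
    return random.gauss(0.77, 0.6)

def sample_climate():
    return random.gauss(0.20, 0.15)

def welfare(allocation):
    """Human-year equivalents produced by one sampled state of the world."""
    malaria, animals, climate = allocation
    return (malaria * sample_malaria()
            + animals * sample_animals()
            + climate * sample_climate()) / 1000

all_in = (budget, 0, 0)                                   # everything to malaria
portfolio = (0.6 * budget, 0.25 * budget, 0.15 * budget)  # an arbitrary spread

mean_all_in = sum(welfare(all_in) for _ in range(N)) / N
mean_portfolio = sum(welfare(portfolio) for _ in range(N)) / N
print(f"all-in: {mean_all_in:.1f}, portfolio: {mean_portfolio:.1f}")
```

Because total welfare here is a simple weighted sum, whichever option has the highest mean effectiveness wins in expectation no matter how wide the uncertainty bands are, which is exactly why Jim predicts the simulation won't favor the portfolio. Spencer's counterargument, taken up next, is that the sampled distributions themselves may be the wrong ones.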
SPENCER: There is a great tool for doing things like that called Guesstimate (which I recommend people check out), where you can basically put in your assumptions about the probability distribution of different variables, and how they relate to each other, like, "Oh, this variable is the sum of those other two," and so on. And you can actually make Monte Carlo estimates of distributions. But what I'm talking about is a little bit different. Within the framework that you're using, and with the assumptions that you're using, your expected value maximization approach will maximize expected value. What I am saying is that there is a sort of model uncertainty to the whole problem. In other words, there's a whole bunch of assumptions baked into these estimates. With those assumptions, you can model the parts that you know you don't know. For example, you could say, "Well, we don't know if this value is 1 or 10, so we're going to assume a uniform distribution between 1 and 10, or an exponential distribution from 1 to 10." You can do that, and you can then do Monte Carlo simulation. But here's the problem: what if it's not really a uniform distribution from 1 to 10? What if that's not the way it is in reality? What if there's a chance that the answer is 0.1 or 100? So, there's uncertainty that comes about due to the fact that you're making assumptions about the uncertainty. And you can take it even further and say, "Well, maybe there are completely unknown unknowns that you don't even know about, that totally escape the model." Maybe there's some reason why that charity doesn't work at all that you've never even thought about, and that would completely change the calculations if you knew about it. When I think about models (and models are obviously really useful and really powerful), there's uncertainty within the model; that's uncertainty you can include to get confidence intervals.
Then there's model uncertainty, uncertainty about the model itself. And it's very, very hard to know how to include that kind of uncertainty. Anyway, I start to get quite nervous when I start to think, "Hmm, maybe the model uncertainty, the uncertainty in the model itself, is on the order of, or larger than, the confidence intervals within the model." And the confidence intervals within the model are already kind of huge. I think that's where I'm coming from. I don't know if that's clear.
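Spencer's distinction between within-model uncertainty and model uncertainty can be sketched numerically. Here the "known unknown" is a parameter we assume is uniform on [1, 10]; swapping in a different, fatter-tailed assumption with the same median (both distributions are invented for illustration) changes the expected value substantially, and no amount of sampling within one model can reveal that:

```python
import math
import random

random.seed(2)
N = 100_000

def expected_value(sampler):
    """Monte Carlo estimate of the mean of a sampled quantity."""
    return sum(sampler() for _ in range(N)) / N

# Within-model uncertainty: we assume the parameter is uniform on [1, 10].
def uniform_model():
    return random.uniform(1, 10)

# Model uncertainty: maybe that assumption is wrong, and values near 0.1 or
# above 100 are possible, say a lognormal centered on the same median (5.5).
def lognormal_model():
    return random.lognormvariate(math.log(5.5), 1.5)

ev_uniform = expected_value(uniform_model)
ev_lognormal = expected_value(lognormal_model)
print(f"uniform assumption: {ev_uniform:.2f}, fat-tailed assumption: {ev_lognormal:.2f}")
```

The uniform assumption gives an expected value near 5.5; the fat-tailed alternative, despite sharing the same median, comes out around three times higher because rare large values dominate the mean. That gap is uncertainty about which distribution to sample from in the first place, the kind that confidence intervals computed inside one model never capture.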
JIM: I think all these worries are real. And I don't want to give the impression that I'm saying the way I did it is the perfect way. But you said yourself that the expected value approach will maximize expected value. I guess I'm still not clear on how you're justifying the portfolio approach. Why would you think that would give a better result?
SPENCER: Maybe the way to explain it is to imagine something simpler, like investments. Imagine that you have an approach to making estimates about investments to decide where to put your money. You go through 100 investments, and for each of them, you try to calculate how much return you think you're gonna get. Now, if you do a perfect job and there are no biases in that (you're able to really consider every variable you need to consider and come up with a reasonable probability distribution on each of those variables), and you then do Monte Carlo estimates, you'll find that one of those investments has the highest expected value of all. You would truly maximize your expected return if you put all your money in that one investment. However, I think most people's intuition is that there's something wrong with that approach: put all your money into that one investment, and you're probably not going to do very well over the long term. Now, why is that? Well, it's because there's a huge amount of uncertainty. But I also think, more subtly, there can be weird biases that occur. For example, maybe there's a certain mistake in your way of thinking about how to invest, and if that mistake occurs, you might systematically overestimate the likelihood of certain types of investments doing well. Maybe there are certain unknown unknowns that you're not even considering, and if you took them into account, you'd realize that what you think is the best investment is not, because its expected value is actually a lot lower, and so on. So, I think in real life, because we can't fit all these things into our models of really complicated scenarios, it should push us toward taking more of a portfolio approach.
It also gives us the ability to iterate more, because in real life, if you're investing in just one thing, you don't have the same learnings. I think if you spread it out across your 15 best investments, and then you're like, “Ah, that's funny, that one I thought was gonna be the highest expected value, but actually failed miserably, what's going on there?” You kind of iterate on that. So hopefully, that explains a little bit where I'm coming from.
JIM: Yeah, I guess it does. Part of me wants to say that investing for your future is a different thing than charity, but I'm not confident about that. The other thing is that this portfolio approach should be part of the model that determines how to invest. It shouldn't be an extra, non-numerical decision; it should be put in there so that it's part of the numbers. When we talk about diversifying your portfolio, does that mean you invest in your cousin's biscuit business to the same degree that you invest in Amazon? These decisions, the spread of the portfolio and which things you even include in it, I think you'd want those determined by numbers too. It sounds to me like you're advocating a more complex model that gives you more subtlety, one you'd expect to come out with a diversification recommendation, rather than saying you should just ignore the model and diversify sometimes.
SPENCER: Yeah, so what I would prefer, in both the investment case and in this case, is to treat the numerical model as useful, but then also back it up with other ways of looking at the problem. You could say, "My numerical model says that this company is a really great investment. But when I break it down into the key hypotheses, it just doesn't make sense to me as an investment on an intuitive level. Oh, that's interesting." Basically, when you get to these really, really difficult problems, it helps to: a) look at them multiple ways simultaneously to bolster the strength of our confidence; and b) have a sense that, "Hey, I'm probably going to be wrong a lot, and therefore I want to bake into my process the assumption that I'm probably making a lot of mistakes." One thing I'll flag as misleading about my comparison to investments is that, with investments, you really do have to worry about things like 'risk of ruin,' because in real life (for your own personal savings) you don't just care about the expected value; you want to make sure you have enough money to have a house and eat and all that. So, you can make the argument (and I think it's reasonable) that if you're doing charitable giving, you shouldn't be risk averse. What should matter is the amount of impact you have, so I don't want to mislead people there.
JIM: Yeah. So if you're at a casino, and you're well off, and you say, "Well, I'm just gonna gamble 100 bucks," then you would put the whole thing on the highest expected value bet (I assume you don't enjoy gambling for its own sake). I feel like that's a little more like charitable giving, where you're like, "Hey, I've got this certain amount of money, I'm gonna give it, and since I'm not getting any return, I'm not worried about it. I'm assuming I'm losing this money. The only return I'm getting is good feelings or whatever." So maybe that would be relevant. Good thoughts.
SPENCER: Jim, before we wrap up, do you want to mention your podcasts or anything like that?
JIM: Yeah. So, if you're interested in these issues, my book came out in 2021, “Being the Person Your Dog Thinks You Are: The Science of a Better You.” And I'm the co-host of “Minding the Brain,” a podcast about cognitive and neural sciences that I co-host with Dr. Kim Hellemans.
SPENCER: Awesome. Jim, thank you so much for coming on.
JIM: You're so welcome. It was fun.
[outro]
JOSH: Are critical thinking and rationality techniques beneficial for everyone or only for certain kinds of people?
SPENCER: Well, it's interesting, because I'm a really big believer in trying as much as possible to just have true beliefs. In my own mind, I am bothered when I feel like I'm engaging in self-deception. So, I really feel motivated to stamp it out — not to say that I always succeed, but I really feel motivated to not deceive myself. A lot of people don't feel this way. A lot of people would be happy to believe something false if it made them happier. So there's a philosophical question: do you want to believe a comforting lie, or would you want to believe the truth in that case? That's the first question. And you can imagine a situation where someone is in, let's say, a really difficult cult. They're in this cult, maybe they were born into it and never really questioned it, and you taught them skills of rationality. Now, certainly in the short term, that can make their life much, much worse. They might end up questioning the cult leader and being punished in the cult, maybe being kicked out of the cult, maybe never being able to talk to their friends and family again and having to start over in society with no skills and knowing no one — that's really awful. On the other hand, maybe in the long term it's actually good for them to leave the cult and build a new life outside of it. But it's a tough call, right? I'm not saying that a person who's born into a cult is definitely going to be better off if they lose all their friends and family. It's a tough call. So, I'm definitely not going to say that literally everyone in the world will benefit from critical thinking and rationality. But I will say that the skills of rationality have a huge advantage, which is that the world is a certain way, and the skills of rationality allow you to see the world more clearly as it actually is.
The more clearly you see the world, the better you can optimize for your goals, because your goals are things you have to achieve in the world. As you understand the world better, you can use rational skills to make better plans and map out what you want to do and how to achieve it. Now, that's not the only way to achieve your goals in the world. There are lots of people who are not very rational, and they try to achieve their goals through other means, and they sometimes succeed. I'm just saying that, on the margin, rationality skills can really help you achieve difficult goals in the world, because they help you understand how things really operate.