CLEARER THINKING

with Spencer Greenberg
the podcast about ideas that matter

Episode 123: Ambition and expected value at extremes (with Habiba Islam)

September 22, 2022

Are ambition and altruism compatible? How ambitious should we be if we want to do as much good in the world as possible? How should we handle expected values when the probabilities become very small and/or the values of the outcomes become very large? What's a reasonable probability of success for most entrepreneurs to aim for? Are there non-consequentialist justifications for longtermism?

Habiba Islam is an advisor at 80,000 Hours where she talks to people one-on-one, helping them to pursue high impact careers. She previously served as the Senior Administrator for the Future of Humanity Institute and the Global Priorities Institute at Oxford. Before that she qualified as a barrister and worked in management consulting at PwC specialising in operations for public and third sector clients. Follow her on Twitter at @FreshMangoLassi or learn more about her work at 80,000 Hours at 80000hours.org.

JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Habiba Islam about using Expected Value calculations in career decisions, ambition, and justice.

SPENCER: Habiba, welcome.

HABIBA: Hi there. Thanks for having me on the show.

SPENCER: Great to have you on. So I wanted to get started with the question of how ambitious should we be? Do you want to set up this topic for us?

HABIBA: Yeah, I think lots of people have many different kinds of career goals, speaking particularly about what you might want to do with your career. People have loads of different priorities: maybe they want to earn a certain amount of money, maybe they have family commitments and want a job that fits with those kinds of personal commitments. And then some people also want to do good with their career, and maybe use that as one of the main levers they have to make an impact on the world. And I think ambition, being ambitious with your career, can be relevant to many of these different things. It might be helpful for some of them, and maybe not helpful for others. I'm particularly interested in this question of how ambitious you should be if you're aiming to do good in the world. Ambition is often talked about in the realm of personal satisfaction, wanting to earn a certain amount or reach a certain status or something like that. But yeah, I'm interested in how ambitious you should be if you're wanting to do good.

SPENCER: Okay, let's start there. And maybe we'll come back to how ambitious you should be, even if that’s not your main priority in life. But let's start there. So okay, so you're a do-gooder, you want to improve the world as much as possible. Should you aim to be incredibly ambitious or should you have more modest goals for yourself?

HABIBA: Yeah, so I think (spoiler) the answer is maybe you should just be really ambitious. One simple way of getting there is to think that, if you're wanting to achieve goal X, there are a lot of reasons why you might want to shoot pretty high and try to achieve the best possible thing you could go for. So it's possible that, when you're thinking about which career might let you do good, instead of settling for something where you'd have a more certain but smaller amount of impact (like working in a smaller-scale area), maybe you actually want to take some gambles, take some bets on things that would go amazingly well if they worked out, but maybe have a smaller chance of panning out. So maybe it's worth taking some more of these risks with the things that you're going to do.

SPENCER: Well, it seems to me that if we could build a time machine, we could solve all sorts of problems. I'm joking, but there's clearly some kind of trade-off here, right? The more ambitious my goals are, the higher probability I'm gonna fail, all the way to the time machine. And then the less ambitious, the higher chance I succeed, but then I don't have as much impact if I do succeed. So how do you think about navigating that trade-off?

HABIBA: Yeah, I guess one concept you can use here is Expected Value, which is probability times the value of the outcome, as a way of getting to a number. So if something is vanishingly improbable, then the number might come out pretty small when you times it through. Does that make sense?
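The expected-value arithmetic Habiba describes is just probability times payoff, summed over possible outcomes. A minimal sketch in Python (all numbers here are invented purely for illustration):

```python
# Expected value: sum of probability * value over the possible outcomes.
def expected_value(outcomes):
    """outcomes: iterable of (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

# A safe option: certain, modest impact (units are arbitrary "impact points").
safe = expected_value([(1.0, 10)])                    # 10.0
# A long shot: 1% chance of a huge payoff, 99% chance of nothing.
moonshot = expected_value([(0.01, 5000), (0.99, 0)])  # ~50
# On pure EV grounds the moonshot wins, despite a 99% failure rate.
```

This is the sense in which naive EV reasoning can favor very ambitious, low-probability bets.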

SPENCER: Sure. But if we consider the fact that all of humanity might be wiped out, and a time machine could let us go back in time and maybe prevent humanity being wiped out, you might get a pretty big multiplier from all the future people that will ever exist. I'm just being a little silly. But I actually think it's pretty hard to work through these things. What is the probability you could succeed at something crazy? What if there's only a one-in-a-million chance you could build a time machine, but the Expected Value says you should still try? Or is that just too nuts?

HABIBA: If it really was only a one-in-a-million chance that we could build a time machine, I want a million people to try, maybe. I don't know if I want to go completely that far down that road, because it's possible that that is actually just incredibly valuable. My guess is that it's considerably less likely than that, even. (Yeah, I guess what would I think here?) There is this bigger question, when you're trying to make estimates of outcomes, of how we deal with extremely, extremely small probabilities of something panning out, paired with extremely, extremely large values of how good (or indeed how bad) the thing could be. Actually, this is quite a technical philosophical question, and my understanding is that there's not really a very satisfactory way of doing this. There's a kind of common thought experiment called Pascal's mugging (which maybe some of your listeners have heard about), where a mugger says to you, “Give me your wallet right now. Otherwise, I will create this extraordinary amount of suffering in the universe.” And they use this extremely big number: a quadrillion to the power of a quadrillion years of suffering will be created with their magical powers unless you hand over your wallet. If you're in this situation being mugged, you could be like, “Nah, I don't really believe you.” And then they could just come back with the rejoinder, “Okay, what about if I put that number to the power of a quadrillion again?” making this ever bigger number of how bad the consequences could be. And at some point, even if you assign a really vanishingly low probability to this actually being a plausible threat, if you just times out the probability by the badness of the outcome, then maybe there comes a point where this straightforward calculation makes it seem like maybe I should just hand over my wallet right now.
But then, I guess when you're faced with this thought experiment, it feels like something's gone a bit wrong here. Some random person could just come in, say this script to you, and it doesn't really seem like the rational thing to do is to hand over your wallet in this situation, but...

SPENCER: “Hey, Habiba, I have an offer for you.”

HABIBA: Yeah, no one has tried this on me before. Yeah, this is a good example of practical rationality. You can just say it to me, and I sort of feel like, “I'm not gonna hand over my wallet to you, Spencer.” But is that irrational? Is that the wrong thing to do? I'm not sure.

SPENCER: No, it's actually an interesting thought experiment. Because if someone says that they're going to cause a quadrillion to the quadrillionth power years of suffering equal to torture or something like this, in order to have the Expected Value on that not be really, really, really terrible, you'd have to assign a probability lower than one in a quadrillion raised to the quadrillionth power, right? Can you really ever assign a probability that small? And so that's where we start really having issues. And yet common sense says, “Well, this is totally ridiculous. Clearly, this person is full of shit. You clearly shouldn't give them your wallet. If you're doing Expected Value calculations, this person's going to take you for a ride.”

HABIBA: Yeah. And it's kind of unfortunate, because Expected Value calculations are basically the best thing that we've come up with as humans for how to deal with uncertain outcomes. It just seems like a good rule of rationality for working out what's the right thing to do when you're dealing with uncertainties. But it runs into this problem when you get to the very extreme cases. And my understanding is that there's no real consensus; there are various approaches that people have tried to take. Some people just bite the bullet and say, “Yeah, we should just use Expected Value calculations to the end.” Some people try and formulate other theories that might avoid this. Maybe one practical concession you can make is, “Okay, I'm just going to use Expected Value in the vast majority of cases. But I'm going to set, almost arbitrarily, a bit of a floor, so that when things go below a certain very minuscule probability, I'm actually just not going to use this EV calculation approach anymore.” It's not clear how well-justified that is. But it seems like a pretty practically reasonable thing to do while we're still thinking about how best to approach this.
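The probability "floor" Habiba describes can be sketched in a few lines. The cutoff value below is arbitrary, chosen purely for illustration, as is the mugger's scenario:

```python
# The "floor" concession: use EV as normal, but refuse to trade on branches
# whose probability falls below an (admittedly arbitrary) cutoff.
PROBABILITY_FLOOR = 1e-9

def floored_expected_value(outcomes, floor=PROBABILITY_FLOOR):
    """Like plain EV, but outcome branches below the floor are ignored."""
    return sum(p * v for p, v in outcomes if p >= floor)

# Pascal's mugger: a tiny chance of an astronomically bad outcome.
mugging = [(1e-30, -1e60)]
naive_ev = sum(p * v for p, v in mugging)     # ~ -1e30: "hand over the wallet"
floored_ev = floored_expected_value(mugging)  # 0: below the floor, ignore it
```

As she notes, the floor is hard to justify in principle, but it stops arbitrarily inflated threats from dominating the calculation.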

SPENCER: Yeah, it's sort of a safety measure. And I've heard some interesting approaches to trying to solve this problem. One approach is to say, “Well, maybe as the claimed amount of suffering goes up, the probability of it being true goes down at least as quickly.” So if they say, “I'm going to cause a quadrillion years of torture,” maybe that's actually much more probable than if they say, “I'm going to cause a quadrillion to the quadrillionth power.” And so it kind of cancels out. But then it feels like, well, how do you know it cancels out? How do you know the probability goes down that quickly? And then you're still in an iffy situation.

HABIBA: Yeah, I also do wonder how likely that actually is, because if there's someone out there who has powers to be able to make a quadrillion years of torture, I'm actually just not really sure how much harder it is to do that than to cause a quadrillion to the quadrillionth years of torture. Maybe they just have access to some abilities that are vastly more powerful than anything I have; if they literally have access to omnipotence and are able to do anything, it just really doesn't make a difference how big the scale is. But I'm not sure; I guess it's not really clear how reasonable these kinds of assumptions are when we're really that far in the extreme.

SPENCER: Yeah, another thing that I think about in these kinds of cases is that, when we throw around probabilities, what we mean by a probability can vary a lot. For example, let's say I roll a six-sided die a bunch of times. I can say there's a one-in-six probability that it lands on a one. And that feels like a real probability. It's based on the frequentist account of what percentage of the time it will land that way, and we understand the properties of dice. Whereas if you ask, what's the probability that this person who claims they're gonna cause a quadrillion years of torture is correct, that probability assignment feels like it's sort of just made-up bullshit. And that's not to say it's not a useful thing to do. I actually think it's incredibly useful to practice coming up with probabilities in everyday life. It's an incredibly useful tool and it hones the mind and creates all these good effects. It forces you to calibrate your beliefs and so on. But there's also a certain sense in which the further something gets outside of experience, the more made up that probability is. And then it's like, can you really just plug that into an Expected Value calculation and treat it as though it's a probability when, in fact, it's just a number between zero and one that your mind spit out?

HABIBA: Yeah. Interesting. I guess I haven't thought that much about how the interpretation of probability might affect things. I think the thing that you're describing is the second version. We had the frequentist version of thinking about what it means for a die to have a one-in-six chance of landing on a particular number. And then we have this other sense in which the probability is actually just attaching to my subjective credence in this being true. My understanding is that that matches more to a Bayesian understanding of what probability actually is, and it enables us to use the language of probability when we're talking about things like the chance that Biden will get reelected, which are slightly different types of problems from what's gonna happen with a die roll. And yeah, this kind of subjective sense of what's my credence that this is actually going to happen, or is actually true, is a useful concept. It definitely allows us to use the concept of probability in more areas, where I think the frequentist account just falls down because it can't quite explain what probability means in those cases. But yeah, it's possible that maybe that leads us to run into problems with EV calculations.

SPENCER: Yeah. And so for those who are not too familiar, the frequentist account of probability says that, to assign a probability to something, you really have to be talking about a repeated event. Once you're talking about a repeated event, like a bunch of die rolls, then you can talk about the probabilities. Now you could say, “Well, in this repeated event, this is gonna happen one in six times or whatever.” Whereas the Bayesian says, “No, you can assign a probability to any belief and then, when you get evidence, you can update that probability.” And in terms of everyday life, I think the Bayesian account is much more useful because the frequentist account is just too limited. I want to be able to talk about things like the probability that Biden will win the election, that seems like a really useful thing to be able to talk about. And you can also train yourself to become more calibrated, or you can practice predictions like that. You could do it on Metaculus, you can use our Calibrate Your Judgment tool on Clearer Thinking to practice these kinds of things. You can definitely get better. And so that seems really meaningful. But then you go from the very, very meaningful and concrete dice roll to the somewhat fuzzier but still meaningful, “Will Biden win the election?” all the way to, “Is this theory of philosophy true?” or “Is this person who's going to do quadrillion utils of damage, are they telling the truth?” And it just gets murkier and murkier, what a probability really means in that case.

HABIBA: Yeah, this is making me think a little bit about what David Hume, the Scottish Enlightenment philosopher, thought about how you're supposed to respond to miracles. I'm going out on a limb trying to remember my Hume, but I think he makes the case that, if you see something that looks like a miracle, you should have way more credence in the possibility that you've seen an optical illusion, or that you've misremembered, or that you didn't quite get what was going on, rather than putting a bunch more credence in the idea that an actual miracle breaking all the laws of physics has just happened in front of you. So I don't know whether Bayesian reasoning actually gives you a way out in some of these Pascal's mugging cases. I'm not sure, but it does feel like it gives you a counterbalance: which explanation is more likely, and how should I update based on someone just being able to bump up the number that they're claiming?

SPENCER: I think in the miracle case, Bayesian thinking does help. Let's say, for example, that someone you know claims to be psychic, and you're sitting in a cafe with them, and they're like, “Look, I'm going to prove it to you. You believe in empiricism, right? So I'm gonna prove to you that I'm psychic. In five seconds, a woman wearing a purple shirt and a man wearing a red shirt are gonna walk by with three Scottie dogs on leashes.” And then you're like, “Okay,” and then five seconds later, this happens. Now, if you think of it from a Bayesian point of view, the probability of that event is just so ridiculously unlikely, right? Wow, that actually is really, really impressive evidence that they were able to do this. However, there's an alternative hypothesis, which is that they staged this, or that they, for some reason, know that at a certain time of day those people always walk by; but more likely it was staged. So if you consider the hypothesis that this was staged, then after you witness that event, that hypothesis goes up much, much higher in probability, because the probability of seeing this evidence, if it was staged, is very high. On the other hand, it is evidence for them having psychic powers, too. So essentially, what's happened is that most of the hypotheses have been eliminated. And now you're just left with the psychic power one and the “they're full of shit and they staged this” one. And now you have to evaluate between those two. And of course, you might have priors on those: if you don't believe in psychic powers, you might assign that a really low prior, whereas you might think staging things is uncommon, but maybe not as unlikely as psychic powers. Or if you believe in psychic powers, you may favor that hypothesis. So I think in the miracle case, it actually is a helpful frame, although I don't know if it gets us out of the Pascal's mugging case.
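The Bayesian update Spencer walks through for the cafe "prediction" can be made concrete. All priors and likelihoods below are invented purely for illustration; the point is the shape of the update, not the specific numbers:

```python
# Bayes' rule over competing hypotheses for the cafe "prediction".
priors = {"psychic": 1e-9, "staged": 1e-4}
priors["coincidence"] = 1 - sum(priors.values())

# P(we saw exactly the predicted scene | hypothesis) -- illustrative guesses:
likelihoods = {"psychic": 0.9, "staged": 0.9, "coincidence": 1e-12}

# Posterior = prior * likelihood, renormalized over all hypotheses.
evidence = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}

# "Coincidence" is all but eliminated by the observation; what remains is a
# contest between "staged" and "psychic", settled mostly by the priors.
```

With these numbers, "staged" dominates the posterior, matching Spencer's point that the observation mainly eliminates the chance hypothesis and leaves the priors to do the work.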

HABIBA: Yeah, I guess that was where my own line of thought fell apart. It definitely helps you: surely in the situation where you're being “Pascal's mugged,” you should have higher credence that someone is just bluffing, that it's way more likely that they don't, in fact, have these powers and are just playing a trick or something like that. But does it actually get the probability that they might be telling the truth down sufficiently far to really bring down the EV of giving them the wallet? Possibly not. It probably doesn't actually help with the fundamental problem.

SPENCER: It's funny because, on the one hand, I really think Expected Value is perhaps the most important theory in decision-making ever invented. It's just such an incredibly important concept. And yet, I actually think it's an extremely flawed theory. Pascal's mugging is one problem, but I think there are actually other problems with the theory as well. Any nonzero probability assigned to an infinite outcome completely wrecks the theory. It doesn't mean you think that an infinite utility is likely; but if you assign any nonzero probability to it, the whole theory goes to shit. So that's the second problem. There are more problems as well. But those are two of the ones that philosophically show us, “Hmm, there's something weird going on here.”

HABIBA: I’m completely with you that there are some problems with it. It does not apply to every possible case that we can come up with. I think the thing that we can fall back on is, it just does seem like a practical and useful thing to be using in the vast majority of cases that we’re going to be bumping into in our day-to-day lives. But I am really interested to see how this field develops. It really wouldn’t surprise me if in 50, 100 years' time, people are carrying on thinking about these kinds of things, that there will be a new theory or a new idea in this space that we just haven’t come across, which maybe changes the field a bit.

SPENCER: That would be super cool. Something I find a little frustrating is that sometimes people take Expected Value as self-evidently correct, whereas I think that’s not true. I think you still have to justify why you’re using Expected Values. And it's interesting to think about, when there’s low stakes situations (like you’re betting a small amount of money relative to your total savings, things like that), you can actually put forward a very strong theory of Expected Value of why you should maximize Expected Value. You can actually prove mathematically that you will get a really good long-term outcome if you maximize Expected Value. But then going from there to these weird cases where the probabilities are made up or these cases where the things at stake are very large (like where you're betting your entire savings, for example), then I think that's where a lot of the problems come in. But I think, for low-stakes everyday things where the probabilities are not completely made up, I think we’re on really strong footing.

HABIBA: Yeah, I think it doesn't even have to be only very low-stakes things. I think it's still a reasonable thing to use as a guide when you're making some significant commitments with your life, like thinking about which career choice is going to pan out best for you, or which city to live in, or things like that. It probably is still a very reasonable thing to be using even for things that are pretty high-stakes for your own personal experience. And unless someone can provide a better decision-making criterion for those cases, I would be inclined to think that Expected Value still works for them.

SPENCER: I think I agree if the person is in a multi-shot game. In other words, they’re not betting their entire life on this one bet. Let’s say, someone is still gonna get to do a bunch of career transitions and if this doesn’t work out, they’re not gonna have their whole life ruined. And furthermore, if they’re not doing this sort of time machine type thinking, they're not like, “Well, there's a one in a million chance I'm gonna succeed, but the Expected Value is so high, I'm gonna go for it.” If the probabilities are within reason (let's say at least 1%) and you have enough bets you can make, this is not your only shot, then I think I agree. But outside of that, I don't know if I feel comfortable recommending maximizing Expected Value. What do you think?

HABIBA: I'm just wondering, what do you think someone should do in that case instead? If we can make a concrete example — someone's wondering whether they should just move to San Francisco and start this new job with a different community and this maybe opens up a bunch more opportunities, and it will change the course of their life massively, versus they could stay in London and be doing this other thing. And maybe they want to think about the chance of the rest of the course of their life going in different directions based on this big decision. I don't know if this feels like a good kind of thought experiment to you, but I'm wondering, in this case, is this the kind of thing that you think that they shouldn't be using Expected Value calculations to even frame this discussion or frame this decision, and should they be using something else?

SPENCER: No, I think it's probably fine to use Expected Value because I expect in that case, they're probably like, “Well, I think there's at least a 30% chance this will go really well. And if it doesn't go really well, I can go do something else after. I can move back.” I think that's fine. But if someone's like, “Hey, I think I have a one-in-a-million chance of building a time machine. I am almost certain I'm gonna fail. I think it's only a one-in-a-million chance. But I've done the Expected Value calculation and it says I should build a time machine.” I actually think that if people were hardcore "Expected Value-ists," (if that's a phrase), we would have a bunch of people doing things like building time machines, and I'm not sure I could advocate for that. I'm not saying they're definitely wrong, but I'm not comfortable saying they should do that.

HABIBA: Yeah, I think one thing that feels more plausible here is this: it seems like it is rational to do things that require collective effort. There are what seem like very reasonable things that fall into this category, like voting or going on a protest or signing a petition. Or think of the many different entrepreneurs throughout the world trying to start up the next big unicorn, where it's very unlikely that any one of them is going to succeed. If you think that this kind of endeavor of trying to create progress is a good thing, then it's kind of good for some people in the world to be trying it, even if individually it's not that likely to pan out. And in these cases, I feel like I switch from thinking about the Expected Value for an individual to thinking about what, collectively, seems like the right thing to do. Society needs everyone to be voting. Society only needs a bunch of people to go on protests, even if the chance that any particular person turning up in the streets changes a policy is maybe kind of low. And maybe these are "common sense-y" cases where, if you look at the Expected Value, the chances look pretty low, but actually people are quite comfortable considering those things as reasonable things to do.

SPENCER: Well, certainly the entrepreneur case, I think, is really an interesting one, because I think it's actually more like the ‘Will Biden win the presidency?’ case than the die roll case, or the, I'm going to call it, "quadrillion years of suffering" case. So if you think about entrepreneurship, it's kind of in the middle, because we have a lot of statistics on entrepreneurs. It looks like (the number's not perfect) something like 10% of startups succeed. We can do some back-of-the-envelope math about what the Expected Value is. And for someone who seems well-suited to it, has the right skills, etc., it seems like it's pretty high Expected Value. So I'm totally comfortable with people doing Expected Value calculations in that range, because I don't put that in the ‘crazy range’ where the probabilities are completely made up and really, really tiny.
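The back-of-the-envelope math Spencer alludes to might look like the following. The 10% figure is the rough base rate mentioned above; every other number is invented for illustration:

```python
# Back-of-the-envelope EV for founding a startup (illustrative figures only).
p_success = 0.10           # rough base rate for startup success cited above
value_if_success = 50.0    # payoff in arbitrary "impact units"
value_if_failure = 1.0     # skills and experience still accrue on failure

ev_startup = p_success * value_if_success + (1 - p_success) * value_if_failure
ev_safe_job = 3.0          # a steadier alternative, for comparison

# ev_startup ~ 5.9 > 3.0: the gamble looks good in expectation, as long as
# the probability isn't made up and failure isn't ruinous.
```

The comparison only makes sense because the 10% base rate is grounded in data, which is exactly why Spencer puts entrepreneurship in the "reasonable EV" range rather than the "crazy" one.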

HABIBA: I think the thing I was trying to get to was a bit more about the community and coordination thing. But I'll come back to that in just a second. Would it seem different if I put entrepreneurship aside and said instead something like aiming to do pure maths research with your life? Maybe you could be John Nash and come up with something epoch-shaking, or maybe you won't, but there's a bunch of people in the world who are aiming to do academic research in pure maths, and maybe the chances for each of them don't look so good.

SPENCER: I think what you're getting at is that, if you consider your own life, if you're trying to maximize the good you do, and let's say you take a one-in-a-thousand shot at something that would be really, really important, so the Expected Value is really good, you might say, “Well, it's only a one-in-a-thousand shot, that seems really bad.” But if there's a thousand people all taking that one-in-a-thousand shot, quite likely one of you is going to achieve it. And so, as a collective, the thousand of you are actually going to cause a tremendous amount of good, even on a per-person basis. So looking at the collective, it's much better than looking at the individual.

HABIBA: Yes. So the point I'm making about the collective is partially a psychological one. I can understand why people might feel like they don't want to make this low-probability bet on their own career (becoming the next John Nash is very unlikely), but with a bunch of people doing it, collectively, psychologically, maybe you get a bit more comfort from the fact that "I was part of this endeavor, and the endeavor was worthwhile." So I think it's partially a psychological thing. And the other thing I was trying to say was that doing some of these things actually just seems sensible. Maybe I'm giving them as counterexamples: it just seems kind of fine for people to go and become maths researchers or go and become entrepreneurs, and I'm wondering how that fits with your model, because some of these things on their own actually just don't seem very likely to pan out in the most exceptional way, like trying to become president or something. People do these kinds of things all the time, and they seem fine to try for.

SPENCER: Yeah, and I think it is fine to try for. But what I would add as a caveat is that it should be one of these things where a lot of other paths open up to you. Like, "Oh, I tried to become president, but I'll probably fail, and then I'll end up doing something else in politics," or whatever, right?

HABIBA: Yes, that makes total sense. So I think, from your personal perspective, if you're trying to do something very risky, it's a really good idea to make sure that you have fallback options. I guess I talk to people a lot about wanting to pursue high-impact careers. And I think if they're wanting to do this thing, where they're shooting for something that's quite unlikely, I want to make sure that they build up skills that they could do other things with if it all fails, and maybe they could also have a Plan B, a Plan C, a Plan Z, for things that they could switch into, and they don't ruin their life in the process. That definitely seems like a good thing to have a safety net. But I think if you have those safety nets in place, then maybe it actually makes sense to try and go for some of these more moonshot type things. I'm interested to circle back to this question of, psychologically, does it feel more like a reasonable endeavor to get behind if you're part of this community, you are trying to do something together, and you think that, "Well, it's not gonna work out for all of us, but it works for some of us.” I feel that makes it more palatable and makes it more (maybe) noble even, if you're trying to aim for good.

SPENCER: I totally agree. And I would say, especially if the community is supporting these people, having their back, not just like, "Oh, you tried the one-in-a-thousand moonshot and failed so you're screwed." Like, "Oh, no, okay, well, that was cool that you tried that. The fact that you failed, that's fine, we expected you to fail. But now you can do other — you can try your next one-in-a-thousand shot." I think that what begins to worry me is that there are very few people that can really do something that they think has a 99.9% chance of failing and legitimately try in a real sense. Does that make sense? Like where, "Oh, I'm almost certainly going to fail at this exact thing I'm doing and yet, I’m going to still try really, really, really hard and not give up easily."

HABIBA: Yeah, that sounds right to me. That sounds like — if that's a salient fact to you, when you're trying to do something, I can't really imagine how you could manage to get yourself really motivated and work really hard.

[promo]

HABIBA: Maybe two things come to mind here. Number one, doing things where you're not putting yourself in a category of 99.9% chance of failure; maybe it's more like a 60% chance this isn't gonna work out or something, so it's a little bit more palatable. And then there's another thought (actually this is not mine; it's Julia Galef's in “The Scout Mindset”): she talks about being able to separate being well-calibrated about how likely your thing is to succeed from having the psychological resilience and motivation to work really hard at it. And it is possible to separate those things. She gives examples of some famous entrepreneurs, like Elon Musk and Jeff Bezos, giving low probabilities that their projects would actually succeed, but still being able to throw a bunch of effort behind them and actually managing to make them work.

SPENCER: Yeah, and I recommend listening to the episode of this podcast with Julia, if you're interested in that; she goes through those examples and they're really fascinating. Circling back on this a little bit: from my point of view, I often suggest to people that if they want to be ambitious, a good range to aim for is something like a 10 to 50% chance of success. That's healthy, 'you're a risk taker' ambition. I'm wondering what you think about that. I recommend people don't go for things they think have a 1% chance of success. And for the interesting stuff, you're not gonna have a 90% chance of success; that's just unrealistic. The chance of failure is going to be higher than that.

HABIBA: Yeah, that's an interesting rule-of-thumb range. I don't think I actually have one cached in my mind, but that seems pretty reasonable to me. I guess one thing that makes a difference is what it is that you're measuring the chance of, and I think you can very easily think about your career, or the thing that you're aiming for, in many different ways. Maybe you have some proximate goals of "I get tenure," or "I get to set up this organization and it doesn't fold after two years," or something. And then maybe you have some stretch goals beyond those, where success looks more like, "I publish a paper that changes the field," or "My organization becomes a billion-dollar company." I guess I don't have a view on where the 10 to 50% chance needs to sit in relation to those different outcomes. Maybe you want to do something where there's at least an outcome you'd be reasonably psychologically happy with in that kind of range, but also some chance that it goes amazingly well, which could be smaller than that 10 to 50%.

SPENCER: Yeah, excellent point. And I think that's exactly right. If you're a startup founder, maybe there's a 10% chance of success if you have no advantages (you're just typical), maybe a 40% chance if you're really good, and 50% if you're really, really exceptional. But maybe there's also a 1% chance that you build something absolutely massive, and that's cool; that's actually where a big chunk of the value comes from. But there are a lot of other outcomes that are still really good, that are not in the "you built a $10 billion company" category, right?

HABIBA: Yeah. And say you're comparing two things that look the same at the really good end, but one of them has a safer spread of other, more mediocre outcomes, with a decent chance of those happening, whereas the second one either goes extremely well or, most of the time, is just not very good at all. Then you might care a bit more about the spread of outcomes here, and you might prefer the first one.

SPENCER: Right, and going back to the math example, this is why I think it'd generally be much better advice to say, "Oh, yeah, you love math, and you're good at it. Go work on being a math professor and work on really important problems," rather than "Go spend time on your own and devote your life to solving this unsolved math problem that a hundred of the best mathematicians have failed to solve." Because in the first case, there are a lot of graceful fallback plans: you become a math professor, or you leave math and later go work on interesting math-related AI or whatever. Whereas in the second case, where you're working away by yourself on an unsolved problem, it doesn't fail gracefully, because if you fail, you don't have any way to prove to someone that you've actually learned a whole bunch of stuff and are really skilled.

HABIBA: That's definitely a really worthwhile consideration, the idea of "What happens next if this doesn't work? Can you fail and land on your feet? And do you have things on your CV that are externally legible to other people, to make sure that the rest of your career, or the rest of your life in general, is going to go pretty well?" That said, I want to push back a little. Broadly, I'm in agreement with you that you shouldn't try to get yourself to do something you can't stay motivated for because the chance of success is so low. But when we talked about this thing where, if it goes extremely well, there's a ton of value: I think if you are comfortable doing some of these Expected Value calculations, it is possible that that very high-value but less likely outcome actually dominates the calculation. And so it is worth paying a bunch of attention to that. It means that, in the spread of different things that could happen, much of the value sits in the scenario where you happen to be particularly good at this thing and it turns out much better. So maybe you should at least open yourself up to that possibility and give something a shot. Because if it does go well, it goes incredibly well; and if you don't even try, you've completely closed the door on that being a possibility.
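To make the kind of Expected Value comparison Habiba describes concrete, here is a minimal sketch with made-up illustrative numbers (none of the probabilities or payoffs below come from the conversation):

```python
# Expected value = sum of probability * value over all possible outcomes.
def expected_value(outcomes):
    """outcomes is a list of (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

# A "safe" path: almost certainly delivers a modest amount of impact.
safe = [(0.9, 10), (0.1, 0)]

# An "ambitious" path: usually delivers nothing, but 1% of the time
# delivers an outsized outcome that dominates the whole calculation.
ambitious = [(0.01, 5000), (0.99, 0)]

print(expected_value(safe))       # 9.0
print(expected_value(ambitious))  # 50.0
```

On these toy numbers, the ambitious path is worth over five times as much in expectation even though it fails 99% of the time; that is the sense in which the rare, very good outcome can dominate the calculation.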

SPENCER: I agree with you actually. And what I would say is that a useful thing can happen when your inside view and your outside view don't agree. For those who are not familiar: the outside view is basically saying, "If I view my case or my situation as just one example among many, what is the probability of different things happening?" Like, if I think of my startup as one startup among many, maybe it has a 10% chance of success, because that's the broad base rate. Versus the inside view, which says, "Knowing the details of my case, knowing the particular product I'm trying to build, knowing my exact situation, maybe I have a different estimate of the probabilities; maybe I think it's a lot higher." And so one thing that can happen is, the outside view says, "You're probably going to fail, almost certainly going to fail," but the inside view says, "Wow, I think I'm going to succeed at this." And that's actually an interesting way to motivate yourself, because the inside view can drive a lot of the confidence and motivation, even while, stepping back, you're thinking, "The outside view says I'm probably going to fail, and I have to acknowledge that."

HABIBA: Yeah, and I'm really interested in this 'how to relate to the motivation' piece as well. We can be talking about ambition in many different senses, but if you're wanting to open yourself up to the chance of something going really well, where that goal is about helping others or trying to have a bunch of impact, it just seems really good for people to find ways to motivate themselves to go for some of these things, if they are in fact the right thing to do. Maybe some people just really do have this drive to succeed, and their inside view of how plausible, how great this idea is really drives them on. That seems great as a way to motivate people. I'm possibly not as good a fit for the entrepreneurial type of mindset, with that strong belief in my own product and that verve, that huge amount of energy to believe in it and make it work. But I think I could be more motivated by having an ambitious goal in mind for where I might want to end up. Or maybe even just by knowing that the world is full of people who have this kind of drive to make their things succeed, and that the thing I'm working on feels very worthwhile in comparison to a lot of the things other people are working on. That might give me a bunch of drive to work on my thing even harder.

SPENCER: Yeah, it's funny because we as humans are so far from rational agents that we can get into these conversations of not just "What's the rational response to this?" but "What's the way to trick one's own monkey brain?" [both laugh] One of my favorite tricks that I use for myself (and I think it's a pretty valid one) is to step back and view your path as a longer one and say, "Okay, this particular really ambitious thing I'm trying for the next three years might fail, sure. But then I learn from that, and I make a new plan, and I try that plan, and sure, that one might fail too. But then I learn and I make a new plan after that, and so on." If I'm willing to run this as a longer-term strategy of trying ambitious things, failing, learning, and trying another ambitious thing, then it seems reasonable to think that the whole path has a pretty good chance of succeeding at something of value, even if it's not originally what I set out to do. So: stepping back and viewing your path as this longer thing.

HABIBA: Yeah, you've basically given yourself this repeated game thing, which then helps you with making the bets. That's a really interesting way of approaching things.
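The "repeated game" framing lends itself to a quick back-of-the-envelope calculation. A sketch, assuming (hypothetically) independent attempts that each have a 10% chance of success; real attempts aren't independent, since you learn between them:

```python
# Chance that at least one of n independent attempts succeeds,
# given each attempt succeeds with probability p.
def at_least_one_success(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# With an assumed 10% chance per ambitious attempt:
for n in (1, 3, 5, 10):
    print(f"{n:2d} attempts -> {at_least_one_success(0.10, n):.3f}")
# prints 0.100, 0.271, 0.410, 0.651
```

A string of individual long shots adds up to a decent overall chance that something works out. And since later attempts benefit from the learning Spencer mentions, the independence assumption, if anything, understates the case.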

SPENCER: So how do you think about ambition when it comes to your own career path?

HABIBA: Hmm. Good question. [laughs] When I was a kid, I was definitely much more classically ambitious: "Yeah, I want to be Prime Minister one day," or something like that. And I do feel that maybe as I've gotten older, I've gotten more complacent and relaxed and kind of happier without pushing myself as hard, which I think is kind of bad; I think I should push myself a bit more. There are things I maybe struggle a little bit with here. Ambition feels like a very instrumentally useful thing, and mostly it tracks people wanting to be ambitious for self-interested aims, for getting more status and wealth, things like that. And I have a little bit of an aversion to that. But if the things I'm trying to do in my life or my career really are worthwhile, like trying to help others a lot, then it is kind of incumbent on me to try a bit harder and do it better. So I like the framings that bring the impact into the ambition, that get that end goal in there a little bit more. I find the motivation of trying to be really virtuous, or really generous, or really giving through my career to be more motivating, or even a duty framing of "I've got to work really hard on this thing that's important," rather than focusing so much on the instrumental side of things, ambition for ambition's sake.

SPENCER: You know, bringing this idea of ambition together with the idea of Expected Value we were talking about: the advice to be ambitious in your career if you want to help the world is implicitly a claim that being on the higher end of ambition tends to have higher Expected Value. That's what's being implied there. So I'm wondering, do you agree with it phrased that way? And if so, what are your thoughts on that?

HABIBA: Yeah, I think the link between the two is something like: if you're wanting to aim for doing the best that you can, then plausibly, the things that seem like the very best are these moonshot things, things like becoming president, or becoming a billionaire, or something like that. And those just tend to be the ones with a lower probability of panning out. And then you can do some examination of, "Does it actually make sense to go for these lower-probability things that could still pan out, and if they panned out, would be really successful?" And it seems that that case goes through if you're willing to use some of these Expected Value calculations, I think. Does that make sense?

SPENCER: Yeah, and I also wonder whether there's an implicit assumption here that people tend to underestimate their abilities. Essentially, by not choosing an ambitious goal, they're not letting themselves live up to their true potential. And so whereas if they chose an ambitious goal, they might fail, they also might succeed. But if they choose a low-ambition goal, they're probably not gonna go above that. In other words, people are essentially capping their ambition. I think you said something earlier that's sort of along those lines, but I'm curious if you want to elaborate on that.

HABIBA: Yeah, this idea of at least opening up the possibility of things. I think there is an asymmetry between being under-confident and over-confident. Both are bad in some way; you can err on either side. Which side is worse to err on? Erring on the side of under-confidence does seem worse, because you're not even allowing yourself the opportunity to try and hit some of these highs. Whereas with erring on the side of over-confidence, if you at least set up your life so that you're not going to ruin things if you fail (you have backup options, you have things you can switch into afterwards, you look after your mental health, and so on), then you're hopefully opening up the possibility of things going really well without too much downside.

SPENCER: Yeah, basically do ambitious things, but then make sure that you have a backup plan. [both laugh]

HABIBA: Yeah, and some of that is maybe about coordinating with a community of people, like you said before: the community facilitating that, and rewarding people who go for things that might not pan out but who at least gave it a go. Rewarding people for the trying, rather than turning on them just because it didn't actually work out in the end.

SPENCER: Someone asked on Facebook the question of what you should give to in the effective altruism community, given that there's now a lot of money in the effective altruism community. It seems more and more people have done really well in crypto and things like that. And Eliezer Yudkowsky chimed in saying, "fund weird small things" or something like that (I'm paraphrasing). I also wrote a similar response along those lines, which is basically: I would love to see lots of small grants for really crazy ideas. These are things that, for a large funder, are not worth the time to investigate, because to give out $20,000 grants, they'd have to give out so many of them that the amount of research time would be too much. But an individual donor can say, "Hey, that idea sounds really cool and actually could have really high impact." We don't have the evidence yet to know, but maybe $20,000 to experiment and see how it goes could actually be quite a worthwhile Expected Value bet. So I'm curious what you think about that.

HABIBA: Yeah, that's interesting. It feels like a really nice, healthy way for a community to operate, where some of these smaller, nascent ideas can get a bit of nourishment and a chance to be tried out, to see if they might turn into something bigger. In particular, you might think about this in terms of "How can you beat the market? How can you beat the big funders out there that have so much time and so many resources to focus on this kind of thing?" Well, you might want to focus on areas where you potentially have a bit more private information than they do. If you happen to know a person who is trying something interesting, you might have more context on that particular project, or on the specific needs they have right now (their particular timelines), in a way that a bigger funder just doesn't, because they are much more at arm's length. And that might be one other thing going on here, about why funding these kooky small things seems quite useful.

SPENCER: Yeah, absolutely agree. Okay. So before we move on to the next topic, we talked about this idea of, ambition makes sense when you're trying to do impact. Do you want to say any final words on that?

HABIBA: Yeah. Ultimately, particularly if you're part of this effective altruism community and you're trying to do good there, there have been a bunch of people who've been extraordinarily successful so far. And maybe that should update us toward thinking that taking some of these big bets is very worth doing. I think it's really good to encourage people who are trying to do good with their career to be really ambitious, and possibly we could move a bit further in that direction, especially if you're someone who considers yourself part of this effective altruism movement. So, when considering the options for your career specifically that, if they panned out, could go extremely well, I would encourage people to pay quite a lot of attention to the upside scenarios when they're thinking about their career.

SPENCER: I would just add that an interesting psychological thing happens when you let yourself be really ambitious, where you're like, "Well, let's say instead of just doing X, I try to do the best X that's ever been done, or at least the best in some way, or for some community, or whatever." Suddenly, your mind starts thinking, "Okay, if I had to do the best version, how might I do that?" And it sort of opens this door. I guess what I'm getting at is: if you at least allow yourself to explore the possibility that you could do the best thing ever in some particular domain, then maybe you'll actually have ideas for how to do it. Whereas if you never let yourself explore that idea, maybe you'll miss out on some really good opportunity that you just never thought of.

HABIBA: Yeah, definitely agree.

[promo]

SPENCER: All right, so switching topics now: this idea of longtermism, where we should think about the long-term future, because there's so much potential good in the long-term future if things go well for humanity, and so much potential loss if things go badly for humanity. This is an idea that has been going around the effective altruism community. And I think you have a really interesting take on it that differs from a lot of people's, in that a lot of the time a focus on the long-term future is justified on more utilitarian grounds, but you argue that there are other ways to justify it and other ways to think about it. So do you want to lead us into that discussion?

HABIBA: Yeah, I think it is the case that a lot of people who are interested in longtermism or effective altruism do tend to be consequentialists or utilitarians. Focusing on longtermism falls out quite naturally if you take a utilitarian approach, or at least it follows quite well if you take a certain kind of moral view. But that said, I take quite a lot of comfort from the fact that there may be other good reasons, on a more pluralist view or within different moral frameworks, to think that working to safeguard the future of humanity, or to reduce existential risks, is really important, even if you take a different view. I personally wouldn't say I'm signed up to any one particular view myself, but I'm very sympathetic to non-consequentialist reasoning, and even when I think in a consequentialist framework, I care a lot about other, non-utility goods, things like equality and justice. So I'm pretty interested in there being more discussion of some of these other arguments that might lead to similar conclusions, and how robust those arguments are.

SPENCER: I'm glad you say that, because there's this funny thing that happens a lot in the effective altruism community, where a lot of people just assume that everyone else is utilitarian. But the irony is, whenever I actually pin someone down, they're like, "Well, I don't really know, maybe there are things that matter and I actually care about this other thing." It's just funny how everyone assumes other people are utilitarian but I'm not sure how justified that is. So I'm glad that you're openly stating that you care about these other things through the lens of these kind of other values. How do you look at longtermism?

HABIBA: There are a few different arguments that people have made in different places, and I wanted to touch on a few of them. Some of these are targeted at different things; some are targeted very specifically at existential risk reduction rather than at longtermism as such. One first move that people often make is this: even if you're a consequentialist who cares about more than welfare (if you care about beauty or knowledge or something like that), whatever it is that you care about, there's more of it in the future. The same argument about the sheer vast stakes of the future goes through even if you're not strictly a utilitarian. So this definitely works if you have a pluralist view of what a good world even looks like, and I think it's worth mentioning that the classic longtermist argument doesn't rely on you caring only about welfare. But I think this doesn't completely speak to something that I'm really interested in, which is: what happens if you care a lot about justice, and you have instincts toward thinking that a just society is a better society? This argument doesn't completely work for that, because I don't think that, if you have this justice-focused instinct, you really say, "Ah, I want there to be more justice in the world by there being more world, and then that's more justice." It doesn't quite work. The things you want, if you have these justice-focused intuitions, tend to be less consequentialist, so they don't really fit into quite the same framework. And so there are some other arguments that I think speak more to that.

SPENCER: So then how do you think about justice when you are taking a long-term perspective?

HABIBA: I want to say that I don't know what the final answer is here, but I'm going to suggest some ways that some of these arguments might come into play. So one argument that some people make is that future people are a completely unrepresented group. Their lives completely depend on the actions that we take, we who are fortunate enough to be alive right now. And yet these people in the future have no say over what we do; in fact, we barely consider them when we make a bunch of decisions. And so you might, in fact, see this group as the next step in a long and hopefully proud tradition of trying to consider the rights of various disadvantaged or underrepresented groups, like women's rights or LGBT rights. I think this argument resonates with some people. It very much applies if one of the things you're worried about is that the future might go really badly and might involve a lot of suffering. Then it very straightforwardly feels like those people's lives are really on the line, and it's really incumbent on us to do something about it. Climate change is maybe a really strong example of this that might speak to people. If we carry on polluting the world, then there are people in the future who are going to be living in famine-struck regions, or flooded areas, or have way more disease, or much lower quality of life, or they'll be losing their homes, all kinds of terrible outcomes, and it's very selfish of us not to consider that when we're making decisions now.

SPENCER: Yeah, it's an interesting view. If you think about disenfranchised people, it's hard to think of someone more disenfranchised than someone who literally doesn't exist yet. They literally can't take any actions that affect the world. And yet the decisions we make today could have really negative consequences for them, so we're sort of acting on their behalf. And that creates some responsibility.

HABIBA: Yeah. I think there's an added kink to this argument around, well, what happens if they don't get to exist at all? So if the thing that we're worrying about is not that the future goes really badly, but that we go extinct, then all of these trillions of people in the future never get to live. Now, I think that people have different instincts on this. I have some very consequentialist friends who really strongly feel this total utilitarian instinct that to be alive is better for someone than not to exist at all. And for them, it just feels really important to pay attention to making sure that these future people get a chance to live at all. I think that definitely resonates with some people. It doesn't resonate with me as much, because I don't really take this view, at least not as strongly, that to exist is better for that person than not to exist, which is a philosophically complicated point that comes up when you look at population ethics, and people take different views on it. But if you do take the view that to exist at all is better for that person than not to exist, then the argument I talked about, disenfranchised future generations, applies just as strongly to extinction events as it does to these terrible dystopias.

SPENCER: And why do you think it's not better for someone to exist? Is it because if they don't exist, there's no one to do the comparison to?

HABIBA: Basically, yeah. It just feels philosophically incoherent to me to say that this person who didn't get a chance to exist is worse off than if they had existed, because they're not worse off; there is no "them," kind of thing.

SPENCER: Although they are better off if they exist, right? At least if they have a good life. There's nobody to be worse off, but there's someone to be better off.

HABIBA: Well, so then, I would also say it the other way around, if they exist, they're not better off than if they didn’t exist at all. I think maybe I would say that they're better off than if they existed and they had lower welfare, but I'm not sure that I really would say that they're better off than if they didn't exist at all. I don't know. I feel like moral instincts are not well-formed to grapple with something like this. I feel like I don't completely buy this premise that the world is better off for them because they get to exist, because the comparison just doesn’t quite work for me.

SPENCER: Maybe this is just a phrasing issue; maybe there are just weird ambiguities going on there. Maybe we could just say the world is better if you have these happy people living in it than if it's just an empty universe forever, with atoms sloshing around.

HABIBA: That bit, I definitely buy. So this is a problem that you run into with population ethics. You may hold two different claims: one, that for a world B to be better than a world A, it's got to be better for someone; and two, that nonexistence doesn't seem to be better or worse than existing, for that person. And if you also want to say that a world with extra happy people is better, these claims can't all be true at the same time. Some people that I know would say, "Okay, fine, nonexistence can be better or worse than existing," and then maybe this intuition goes through of, "Well, okay, then it's really, really bad for all those future potential people if they don't get to exist." Whereas for me, I buy that the world is better if it has more happy people in it, but it doesn't necessarily have to be better for them. It's just better, the world. I don't necessarily buy that they're better off than if they weren't existing at all.

SPENCER: There are these interesting philosophical problems based on the fact that we don't know who's going to exist. Any decision we make could slightly change the world in a way where people have sex one second later, and then that means a different sperm reached the egg, and then different people exist. So it seems that, due to these kinds of chaotic effects, we're constantly, accidentally causing some people not to exist and other people to exist, because you turned left slightly sooner, and that causes this person to get home slightly later and have sex one second later.

HABIBA: Mm-hmm. And you maybe can say, "Well, it was worse for that person who didn't get to exist, but it's better for the one that did." My understanding is that if you do buy this, you do have to sit with all of the consequences for all of these different potential people: the world was much worse for them because they didn't get to exist.

SPENCER: So are there other arguments about why people who care about justice should care about longtermism?

HABIBA: Yeah, there are a bunch of arguments; maybe I'll just touch on them briefly. Toby Ord makes some of these in his book, "The Precipice". He makes a bunch of different arguments for why you might care about safeguarding humanity, and some of them are actually pretty interesting and speak to quite different instincts from these consequentialist ones. So maybe we have responsibilities to people in the past, and a just or fair way of performing our role within this vast narrative of human history is to carry on passing the baton onwards and preserving the important things. Or maybe we even have duties to atone for past injustices, the terrible things that we've done as a species, like slavery and the Holocaust. But in order to atone for them, it's a prerequisite that we actually carry on existing; otherwise, the tally ends and we never fix some of these problems. So that very much does speak to a very classic justice-focused mindset, where you might think it's actually incredibly important to atone for past injustices. And I'll just point out that on this ground, even the extinction of the human race would be bad, because we wouldn't get a chance to atone for past wrongdoing; the argument doesn't only apply to suffering futures. And maybe the last thing I would mention is this: if you have both this justice instinct and also some consideration for the welfare and other consequentialist goods there might be in the future, you might think the welfarist's case for what's at stake is really quite significant, and you shouldn't be super sure that it doesn't matter.
If you put at least some decent credence in that being a correct view, then it's possible, especially under conditions of moral uncertainty, that you might want to pay a bunch of attention to this. This actually comes back a bit to what we were talking about earlier, about how you deal with Expected Value in different situations. Maybe you get into a problem where the utilitarian view wins too much, because the stakes are just so big, often. But I think there's a sensible way of balancing that. If you care a bit about these justice instincts, and you also care about the consequentialist welfare stuff, then you might want to not be too hasty or glib about the badness that could result from extinction or an existential catastrophe.

SPENCER: Right. It seems that if there's anything you should be uncertain about, it should be finding objective moral truth, given that philosophers have debated this for well over a thousand years and there's still a massive debate about what the right answer is. It seems somewhat reasonable to say, "Okay, I'm not totally sure what the right way to look at ethics is. And if I'm not totally sure, I should assign some credence to different perspectives, and so it's useful to be able to take on these different lenses." So I'm wondering, since you spend a lot of time talking to effective altruists, how do people react when you talk about justice?

HABIBA: I think my approach to talking to people is to be very reactive to the things that they bring up. And so I'm less likely to be like, "But have you thought about justice though?" and more likely to respond to what it is that they've said is important to them. And I think I...

SPENCER: So is this your justice coming out right now? Is that what's happening?

HABIBA: [laughs] If I'm talking about what I think is important, I want to see these justice framings discussed more. If it's really important to the person that I'm talking to, I absolutely will talk about it. And I think I can talk about it with a lot of authenticity and integrity, because I actually really care about this as well. But I'm not really trying to change someone's view when I talk to them about their career, so I'm not going to try and argue someone into or out of being a total utilitarian or something. I do find that, when I talk to people who have some of these instincts, they can find it very validating that someone else shares some of the same views they have and is pulled by some similar concerns.

SPENCER: So do you have any other values that are not utilitarian or not consequentialist?

HABIBA: Yeah. In some ways, I really haven't settled on one particular view. I just feel the pull of a lot of different kinds of concerns without having fully worked out my one true belief. I'm quite consequentialist, in that I care a lot about the outcomes of actions. I feel like at some point the scale really does trump things, but I care about multiple different kinds of goods, like welfare and equality and justice. I also care quite a lot about a completely different framing: virtue ethics. So if I think about what I'm trying to do with my time being alive on Earth, I care a lot about being a good person and trying to act out of motivations of kindness towards others, regardless of the consequences. And this, I think, is a lot less popular as a view, both amongst philosophers and also amongst people in the effective altruism community, although it's not a vanishingly small group of folks; it's a fairly popular view that goes back to Aristotle. But I do think it speaks to something pretty important. There is just a frame of ethics that asks a different set of questions about what it means to be a good person. And that is personally very important to me.

SPENCER: Yeah, I can relate to that. Although I have somewhat of a different frame on it. I have a philosophy I call value-ism. And I don't believe in objective moral truth. But I do care a lot about different things that I view as my intrinsic values. And one of those is telling the truth and not spreading misinformation. And I care about that even in cases where it's harder to make the utilitarian argument for it. So I think aspects of that end up looking more like virtue ethics, although utility is also a big part of my value system, like trying to reduce suffering in the world, trying to increase happiness, which I imagine is for you, too. Is that right?

HABIBA: Yeah, it absolutely is. I think, in some ways, it's easy for me to feel like I'm not that consequentialist because I'm defining myself in contrast to many other people that I see around me. But in fact, I really do care about consequences a lot. And welfare, to me, seems like a very important part of what makes things right: do they lead to better outcomes ultimately? In some ways, I feel like most people (unless you're actually slightly pigheaded) will be consequentialist at the extremes at some point. You might cling to some deontological or virtue-ish constraints in edge cases, but if the stakes are sufficiently high, I think someone would tell a lie in order to save everyone on Earth or something. And so consequences just seem like a really important thing.

SPENCER: Yeah, I think that is one of the appeals of utilitarianism that doesn't get talked about enough: even if people have lots of other values, which they tend to, almost everyone does care about reducing suffering and increasing well-being. So it is common ground for most people. And especially when the stakes get larger, as you're saying, and we're talking about really large amounts of suffering and well-being, most people say, "Okay, that actually can matter a lot, even if I do care about other things."

HABIBA: And that's maybe one way to think about it within the effective altruism community. I think it's really good and important that there are people with all kinds of different moral views who are trying to be part of this endeavor to do the most good that they can. It's convenient to have a rallying point that most people agree on as at least part of what's worthwhile: thinking about some of these consequences. And that's the way I think about consequentialism in relation to effective altruism.

SPENCER: Before we wrap up, I just want to take a few minutes to talk about 80,000 Hours with you. I know that you all are expanding your advising to giving more people high-impact career advice. You want to just tell the listeners about that a little before we finish?

HABIBA: My job is to do one-on-one career advising calls with people: to talk through their career options and see if I can help by introducing them to people, working out some next steps, or suggesting jobs or funding opportunities. We've been doing this for quite a few years now as a product, and we're really trying to ramp up the volume. Last year was kind of a record-breaking year for us: we hired two new people to the advising team, so for the first time in 80,000 Hours' 10-year history, this is the most people we've ever had focused on doing these calls. We also did the most calls we've ever done, over 800 last year, but we're planning on doing even more in 2022, over 1,000 or 1,200. And we want to hire more people to join the advising team so we can carry on expanding and helping even more people.

SPENCER: So who should reach out? And how do they reach out to you if they want advising?

HABIBA: If people are interested in talking to the team for an advising call, they can check out our website at 80000hours.org/speak. There's an application form there where you just have to put in a few paragraphs about what you're thinking for your career options. And you really don't have to have a very thought-through career plan; if you think it would be useful to talk to someone, just write a couple of paragraphs with what you're thinking, and then we'll see if we can help. If it seems like we're the best people to talk to, we'll invite you for a call and ask you for a bit more information. If it doesn't seem like we're the best folks to help you, we'll try and send you some resources, or maybe even introduce you to someone else who might be a better fit for helping you. For example, if you're very interested in a cause area that our advisors don't know as much about (like animal advocacy or global health or something), we might see if we can facilitate a different connection instead.

SPENCER: Got it. And basically, you're going to be trying to help them think through how to have more impact with their career. Is that right?

HABIBA: Yeah. And we talk to people at all different stages here. So even if you're a student mostly thinking about what you should be exploring at undergrad, or if you're an experienced professional with a bunch of years of experience under your belt in a particular area, and you're wondering, “Can I use this to do a bunch of good now?”, we're happy to talk through what that might look like, what different paths might be open to you, and what the routes into those different areas might look like.

SPENCER: And the calls are free, correct?

HABIBA: Yes, they are absolutely free.

SPENCER: I think you should emphasize that, because people are gonna be like, "What's the catch?" Anyone who's thinking about having more impact in their career may want to check it out and apply for some advising. I know they've been doing this for a long time. And they also have some really wonderful articles on the 80,000 Hours website helping you think through different aspects of a career, whether it's career capital or how to find a career that makes you happy but also has high impact. Habiba, thanks so much for coming on.

HABIBA: Thank you so much, Spencer.

[outro]

JOSH: How are you able to switch contexts so quickly?

SPENCER: I unfortunately have no insight into how I do that, but it is true. I can switch contexts in five minutes, talk about that and just switch to a completely new project and be in the zone on that. Not five seconds, but certainly not a day. It's something like five minutes to switch. And I don’t know, I think it's just a personality thing. Another thing I would just say about managing lots of projects is, obviously it depends so much on having the right team. I couldn’t possibly do it without my team. And so, yes, I'm involved in a huge number of projects, but that doesn't mean I'm the lead who's like doing all the work or something like that. I'm just working with other people to get things done.





Credits

Host / Director
Spencer Greenberg

Producer
Josh Castle

Audio Engineer
Ryan Kessler

Factotum
Uri Bram

Transcriptionist
Janaisa Baril

Music
Lee Rosevere
Josh Woodward
Broke for Free
zapsplat.com
wowamusic
Quiet Music for Tiny Robots

Affiliates
Please note that Clearer Thinking, Mind Ease, and UpLift are all affiliated with this podcast.