Clearer Thinking with Spencer Greenberg
the podcast about ideas that matter

Episode 201: How to have a positive impact with your career (with Benjamin Hilton)


March 14, 2024

What's the best way to think about building an impactful career? Should everyone try to work in fields related to existential risks? Should people find work in a problem area even if they can't work on the very "best" solution within that area? What does it mean for a particular job or career path to be a "good fit" for someone? What is "career capital"? To what extent should people focus on developing transferable skills? What are some of the most useful cross-domain skills? To what extent should people allow their passions and interests to influence how they think about potential career paths? Are there formulas that can be used to estimate how impactful a career will be for someone? And if there are, then how might people misuse them? Should everyone aim to build a high-leverage career? When do people update too much on new evidence?

Benjamin Hilton is a research analyst at 80,000 Hours, where he's written on a range of topics from career strategy to nuclear war and the risks from artificial intelligence. He recently helped rewrite the 80,000 Hours career guide alongside its author and 80,000 Hours co-founder, Ben Todd. Before joining 80,000 Hours, he was a civil servant, working as a policy adviser across the UK government in the Cabinet Office, Treasury, and Department for International Trade. He has master's degrees in economics and theoretical physics, and has published in the fields of physics, history, and complexity science. Learn more about him on the 80,000 Hours website.


JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Benjamin Hilton about high-impact careers, personal fit, and leverage.

SPENCER: Benjamin, welcome.

BENJAMIN: It's great to be here.

SPENCER: This episode is hopefully going to give some insight for people who really want to have a high impact in their career. Let's start there. What do you think is the core essence of thinking about an impactful career?

BENJAMIN: Oh, man, that's a really tough question. There are a few pieces of advice that we talk about at 80,000 Hours. They're normally all false in some way, but they're all important structures for the way you think.

SPENCER: They're false in some way, meaning that they're not true 100% of the time for all people, but they're still good guidelines?

BENJAMIN: Yeah, they're not true 100% of the time, or there are nuances and caveats, and you can go on about them for hours. There's this classic one, which is: your choice of problem is really important. That is, if you want to go out in the world and help people, what problem are you trying to solve? The choice you make there really matters to how much impact you end up having. Yeah, this is a very important piece of advice. It seems like the spread in the amount of impact you could have between problems is potentially orders of magnitude, between something that I think is really important, like (say) reducing existential risk, and something that I think is still very important but just doesn't have the same impact, like curing diseases in developed countries. You have a gigantic spread between the problems you might work on.

SPENCER: It's funny that you chose that example because it's such a controversial example. I thought you were gonna say something like 'helping cute puppies in your neighborhood.'

BENJAMIN: Helping cute puppies in your neighborhood also seems... I guess I was trying to pick something that I think people might actually want to work on. If you're really trying to do good, you might help cute puppies in your neighborhood but it's probably not very likely, because I think most people have some kind of intuition here that there aren't that many puppies in my neighborhood, or it might be quite hard to help them. Most of them probably already have owners. They're probably a bit of a handful if you don't know how to manage dogs; whereas, going into medical research in the developed world and helping cure cancer seems really great, and is really great. I want to emphasize, it is really, really great and I really want more people to go cure cancer. But if you're already working with someone who wants to do good, they're probably already working with some ideas they have to help them do good. And so the question is, well, can you do better? Can you do even more good by your lights or by whatever lights?

SPENCER: Do you think that almost everyone who wants to do as much good as possible should be going into existential risk-type causes?

BENJAMIN: No. [laughs] Well, I guess it depends how many people I'm working with here. If there were only ten people in the world who matched this really high bar for 'really wants to do the most good possible,' then maybe I'd say yes. There are definitely loads of reasons not to. I said that there are these big problem spreads. And I also mentioned that all these things are false in some way. So here's one of the ways in which it might be false (in fact, is false, in some way): other factors also matter a lot. You have these other factors that are like, what solution are you actually trying to work on with your career? How do you tackle that problem? And also, how good a fit are you for working on that solution? If you are a technical person, it's probably good that you are working on something technical. And if you're completely non-technical, it probably isn't gonna be helpful if you go do something technical. So if I have no background in biology — maybe I could get that background, but assuming I don't go and get that background — it's probably not very useful for me to go and work on technical solutions to preventing the spread of pandemics. I'd probably be a better fit for a policy role or for founding an organization or something like that, something that doesn't require technical knowledge. And then you can ask: how much of the spread in the marginal impact of your career comes from this problem spread, and how much comes from these other factors, like the spread of solutions and the spread of personal fit?

SPENCER: What do you mean by the spread of solutions? Just want to clarify that.

BENJAMIN: There are many different ways you might tackle a problem. The classic area to talk about this is global health. Say you want to tackle malaria; there are many different ways you might do that: You might distribute malaria nets. You might distribute malaria vaccines. You might work on gene drives. These are all different ways in which you might tackle the same problem. And we actually have a bunch of empirical data in some areas that attempts to quantify this spread of solutions. Probably the most famous is the second edition of the Disease Control Priorities Project, DCP2, which basically looked at (I think it was) health interventions in the developing world and found that the best interventions were something like 10,000 times more cost-effective than the worst interventions. That sounds like a lot, and it is; it's many orders of magnitude. But if you instead compare to the median, the best intervention is about 50 times more cost-effective than the median. So there's still this really large spread, a couple of orders of magnitude maybe, between picking a median intervention and really trying to find the best intervention within a particular problem area, making people healthier in the developing world.

SPENCER: That's interesting. I actually wondered myself: is the median intervention actually positive at all? Because if it was zero, you could get an infinite benefit above that. One divided by zero is infinity.

BENJAMIN: Well, actually in these circumstances, this is a list of interventions people actually tried, so you're already selecting for decent interventions when you look at this dataset.

SPENCER: I see. So it's not just the median of all interventions. Yeah, I do have some doubts about using the ratio there. The ratio is compelling. It's like, "Oh, 50 times, that sounds really good." But I also wonder whether it actually makes sense at all, because the distribution probably covers zero. [laughs] There's a lot of interventions that just don't do anything.

BENJAMIN: Yeah, I guess one better thing to do is to think about the ratio with the mean, because the mean is something like: if you pick at random over the whole distribution, weighting each intervention by how frequently it's used, you're actually more likely to pick the more common interventions.

SPENCER: Yeah, I like that, because it's almost like saying: imagine you had a portfolio and could put a little money in every intervention. How much good would that do? And then if, let's say, you put it all in the best intervention, how much better is that? That makes a lot of sense to me.

BENJAMIN: Yeah. And I don't have the data in front of me about how many times more effective it was than the mean, but I think it was pretty similar. The distributions are heavy-tailed: there are very few really good interventions, and most interventions are around about the average. So the mean is normally only slightly higher than the median, which means the '50 times more effective than the median' figure comes out slightly lower when you compare against the mean. I think it's something like 30 times more cost-effective than the mean in this particular dataset, DCP2.
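The best-versus-median and best-versus-mean comparison discussed here can be sketched with a quick simulation. This is purely illustrative: the lognormal shape and its parameters are assumptions chosen to mimic a heavy-tailed distribution, not the actual DCP2 data.

```python
import random

random.seed(0)

# Simulate cost-effectiveness for 1,000 hypothetical interventions.
# A lognormal distribution is a common stand-in for heavy-tailed data;
# the sigma of 1.5 is invented for illustration, not fit to DCP2.
effects = sorted(random.lognormvariate(0, 1.5) for _ in range(1000))

best = effects[-1]
median = (effects[499] + effects[500]) / 2
mean = sum(effects) / len(effects)

# With heavy tails the mean sits above the median, so the
# best-to-mean ratio comes out smaller than the best-to-median ratio.
print(f"best/median: {best / median:.0f}x")
print(f"best/mean:   {best / mean:.0f}x")
```

Because the mean exceeds the median in heavy-tailed data, the ratio to the mean is always the smaller of the two, which matches the pattern described: roughly 50x the median but only around 30x the mean in DCP2.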

SPENCER: That's really interesting. Okay, so we talked about, there's the distribution of different causes you can focus on, and there might be huge differences in how effective they could be. There's a distribution in solution so, within a cause area, you could pick from different solutions, and they could be really different in terms of how effective they are. Where do you think people get the bigger bang for the buck: focusing on a better cause area or focusing on a better solution within a cause area?

BENJAMIN: This question really depends on the sample you're drawing from. If you're picking out of a list of all the really good problems to focus on, then you're not going to get much bang for the buck by changing which problem you're focusing on; whereas, if you're picking out of a list of all possible problems in the world, then it's going to really matter. Similarly, if you're picking out of a list of the best possible ways to work on something, there's not gonna be that much spread there. But if you're picking out of all possible ways of helping solve something, then it's gonna make a big difference. I think this is one reason why, at 80,000 Hours, we often say something like, "Here's a list of problems. Here's a list of jobs you might do. How do you decide between them?" Well, the way you need to decide between them is by how well they actually fit you. Because that's an area where you can't just be like, "Oh, I'm just gonna pick the stuff that's good for me;" it's actually quite a hard problem. And so there really is a large spread, even among these top areas and these top solutions, just based on the fit that you have for working on a problem in a particular way.

SPENCER: Let's unpack this idea of personal fit. What are the elements of that?

BENJAMIN: Formally, you can think of personal fit as a multiplier on the other factors. You can think about the importance of the problem as something like, 'how much good do I do if this problem gets solved?' And your solution is this other factor which you multiply by that: 'how good is this thing at solving the problem if a generic person did it?' And then this personal fit factor is, 'how good will you be at that particular thing?' And then the question is, okay, you need to go and find something which you'd be really good at. How do you figure out what you're gonna be really good at? I think most people get pretty tempted at this point to be like, "Ah, I'll pick something that interests me or that I feel like I'd enjoy," or something like that. But I think these are not the best ways of going about this. They're okay; they're great places to start. If you need to narrow down a list of thousands of different jobs, they might be helpful. But the best things to do here are... I guess there are really two things: the first thing you can do is make your fit for something better by learning about it, by building skills in an area. And the second thing you can do is go out and explore; you can try out a bunch of different things and see if you're good at those things, if you're actually having success at them, if you're getting promoted, if people around you tell you you're doing a good job, that kind of thing. And then you actually need to combine these two things in some way. You probably want to spend the first part of your career, to an extent, doing some combination of exploring, seeing if you're on track, exploring, seeing if you're on track.
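The multiplier framing described here can be written as a toy model. This is only a sketch of the idea; the function name and the numbers are invented for illustration, not an official 80,000 Hours formula.

```python
def expected_impact(problem_importance: float,
                    solution_effectiveness: float,
                    personal_fit: float) -> float:
    """Toy multiplicative model: the impact of a career option is roughly
    the product of how much the problem matters, how good the solution is
    for a generic person, and how good you in particular would be at it."""
    return problem_importance * solution_effectiveness * personal_fit

# Invented numbers: a somewhat less important problem where you're a
# strong fit can beat a more important problem where you're a weak fit.
weak_fit = expected_impact(10.0, 1.0, 0.2)    # 10 * 1 * 0.2 = 2.0
strong_fit = expected_impact(3.0, 1.0, 1.5)   # 3 * 1 * 1.5 = 4.5
print(weak_fit, strong_fit)
```

Because the factors multiply, a low personal-fit term drags the whole product down, which is why fit can outweigh a sizable difference in problem importance in this framing.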

SPENCER: There are these two aspects. One is the skills you build, and pretty much anyone can get better at just about any skill. And then there's something like aptitude. Is that how you break those out?

BENJAMIN: Yeah, I wouldn't really call it aptitude. Aptitude, to me at least, implies some innate thing that you have. But yeah, maybe aptitude is a decent word for this. It's not like I'm saying you're born with this thing and it's always gonna be the case. It's more like you have already built up in your career, in your life, a bunch of skills and tools for tackling problems. If you've spent ten years as a kid learning to program, then that's going to be helpful if you want to do programming, and you won't need to spend as long getting up to speed in that area. There's a bunch of reasons why you might already be better at some stuff than other stuff. To an extent, it's like you have some aptitude for things — I guess this is what I normally call fit — you have some fit for a thing already. And then also you can build up the skill. And this skill forms part of what, at 80,000 Hours, we tend to call career capital, which is the stuff you build by doing a job which helps you then get, or do better in, future jobs. It's like capital; you invest in this thing and you build up this stock of stuff.

SPENCER: Does career capital include skills?

BENJAMIN: Yeah, totally. In our career guide, we say there are five components of career capital, and skills are at the top, maybe the most important one. The others include your connections (your network) and your credentials (having a degree), that kind of thing.

SPENCER: My understanding is that 80,000 Hours has shifted their view on career capital a little bit. Can you talk about that?

BENJAMIN: Yeah. We used to talk a lot about this concept of career capital, this idea that you need to go out into the world and build up this thing so you can get jobs in the future. And the problem with this concept is, it's counterintuitive, or at least, it's not very intuitive to think about. People, for example, were really focused on 'I need to go get this degree,' or they'd be like, 'Oh, the best thing for me to do is to go and become a management consultant because that sounds impressive.' There's actually a trade-off there, which is, if you go do something like become a management consultant, it might sound really impressive, and you might learn a fair amount. But if it's not directly related to the thing you eventually want to do to have an impact, there was probably a better way of getting there, one which gave you more targeted skills or more targeted knowledge. So we've shifted our focus from this broad concept of career capital to the idea that, to have a fulfilling career, you should get good at something. And then you should use that to solve problems. And the idea behind this is something like, well, if you're actually really good at something, people are going to want to network with you, and you'll probably get the credentials along the way to getting really good at the thing. And it stops you making this mistake of being like, 'Oh, I'll just go get this transferable credential over here,' instead of getting really good at the thing you're actually going to need to do in the future.

SPENCER: So it's a focus on the skill itself rather than things like indicators of the skill, or people knowing that you're good at the skill. It's like, no, just go get the skill, and the rest will follow. Is that the idea?

BENJAMIN: That's not always true but that's the idea. Like I said at the start, lots of this stuff is gonna be false in some way. But this is one rule of thumb for how to think about your career and I think it's actually quite a useful rule of thumb. The rule of thumb is something like, 'Where can I go where I will learn the most that will be useful to solving a pressing problem in the future? Where is the place and where is the job I can do where I'll learn the most?' And often that rule of thumb does help you get those connections, those credentials, those other things that you'd want, to get those jobs in the future.

SPENCER: What are some skills that are cross-domain enough that you could recommend like, "Oh, most people, if they want to advance in their career, should go get skills A, B, and C, almost no matter what field they're in"?

BENJAMIN: Yeah, I actually think the answer is almost none. And maybe this is another shift of focus; in our previous career guide, we used to talk a lot about this idea of transferable career capital, which is similar to transferable skills. And the idea was, all else equal, it's better to have a transferable skill because then you get option value; you'll have more choices in the future, and that's great. The problem is that transferable skills are often less directly relevant to the really important thing that you want to be doing in the future. Great, get transferable skills if you can, but it's probably better to really focus on the skills that are most relevant to what you want to do. I've actually just spent some time writing some articles on this, and we split this up into a bunch of different skills. They include things like what I call bureaucratic skills, which is working within really large institutions and getting them to function. And that will include things like being able to talk to people; that's a great skill for everyone to have, but the focus is on actually experiencing what it's like to work in one of these large institutions. And then another skill is research; yeah, it's important to be able to communicate your research — kind of a similar thing — for example. But am I going to say that, to become the world's best (say) technical AI safety researcher, you need to be really great at communicating, the way you would need to be if you wanted to go into politics? Not really. There's some degree to which that skill's useful whatever you do, but your focus probably shouldn't be on getting that skill. It should probably be on doing whatever it is that you really think you're going to do to help people later on in your career.

SPENCER: It seems to me that some of the most transferable skills are very subtle skills. I'm almost tempted not to call them skills but more like tendencies or rules of thumb that people learn. An example — one that's been most powerful in my life — is trying to treat fear as something that you shouldn't let stop you from doing valuable things. So 'I'm afraid' is not an excuse not to do a thing. I try to live by that, not that I'm perfect at it, but that's just been incredibly powerful for me. And I think, across a lot of domains, that would be a helpful thing for many people. Some people don't need that obviously, like if fear doesn't stop you from doing things, then you don't need to develop that. But I'm curious if you think there are things like that that are cross-domain.

BENJAMIN: Yeah, super interesting. Yeah, I think there really are. The first thing that comes to mind, and it's been really big in my life, is just dealing with mental health, making mental health a priority, and finding ways to be fairly happy lots of the time. I'm quite lucky; I think I'm fairly happy by disposition. But part of this is, when I was a kid, I sometimes had a rough time in high school, for example. And my mom is a psychiatrist. [laughs] She took me aside and said, "Hey, there's this thing called cognitive behavioral therapy, and here's how you do thought challenging." And that's been a really useful thing for my life, because it means I can generally stay positive in most circumstances.

SPENCER: Yeah, that's really great. And it's really good to learn that at a young age, too. Another one I would say that's been hugely beneficial for me, and I think could benefit almost everyone — though not everyone, as you said there really are exceptions — is learning to form healthy daily habits where you learn like, 'Okay, I'm gonna construct a new habit where, every day when I wake up, I'm gonna do these five things that are good for me.' And it seems like that can help in almost any domain of life.

BENJAMIN: Yeah, one of my favorite techniques here is the anchor habit, where you take something you do every day, and just be like, 'When I do that every day, I'm gonna do this next thing afterwards.' And you just practice doing that thing afterwards and then you just end up building this up, which is really useful. I realized five months ago that I was just not really responding to my friends at all, on WhatsApp or anything. And so I decided to just anchor this into my morning routine when I get in to work. I would just check my messages — it would only take a couple of minutes — and just respond to anything that needs responding. And now I actually have a social life again. [laughs]

SPENCER: It's a small tweak, but it can have a really, really big outcome. I think another thing is communication, but I'm reluctant to call this a really valuable skill, because it's such a bullshitty phrase, 'communication.' But there is something really deep and fundamental there. It's just a little bit hard to put your finger on, but the difference between someone who can clearly communicate their ideas and also assertively communicate them where they're explaining what their boundaries are, they're explaining what they believe, they're pushing back but in a healthy, respectful way, they're not getting walked over. That also seems to me just one of these fundamental life skills that helps in almost every domain.

BENJAMIN: Yeah, it seems true that it helps in almost every domain. And actually, I think this is true of communication, and it's also true to an extent of this habit formation thing and other productivity skills; they also help in every domain. But in both cases, actually, I don't know if I'd recommend everybody spend time learning or developing these things. It really depends on what it is you're trying to do, and basically how easily you can apply these things to your job and your life, and how much that's actually going to help, relative to spending time building up some other skill. 'Cause there's always some trade-off, and I guess the problem with these very broad, generic things is you can become really good at them, but at the same time fail to develop some more specific skill that is more relevant to whatever it is you're trying to do. For example, communication is actually one of the skills that we highlight on our website at 80,000 Hours, and we call it 'communicating ideas.' The reason for this is, communicating ideas seems like a high-impact thing. If you have an idea, and you can convince someone else of that idea, and then they'll act according to that idea, that's plausibly really high impact, if the idea suggests doing high-impact things. What we're focusing on there is things like becoming a journalist or becoming a podcast host, where your job is focused on communicating that idea, and that's the core thing you're trying to achieve, rather than being like, 'Hey, this is something you should definitely get amazing at in all circumstances.' Get a basic level of proficiency? Yeah, sure. But is it something you should really focus on or really spend time developing? Seems unclear.

SPENCER: One way I like to think about this is that many things around success are products of factors rather than sums of factors. For example, if you have zero energy, you have zero output; it's not a sum, it's a product, because anything times zero is zero. Similarly, if you have zero ability to communicate, then in most domains (not all domains), you can't really produce output. There are a few exceptions; maybe some mathematicians have no ability to communicate [laughs] and they still manage to do it. But because of this product nature, it leads me to think that, if there's a skill in that multiplication that you're really bad at, there's a good chance it's really holding you back, because it's going to multiply through and really screw over the total output. And so that's often the useful frame: look for the skill that is actually relevant in your domain that you're worst at, and then try to work on that. But at the same time, I think often the way we get to a very high level of output is we also get really good at leveraging whatever our biggest strength is — the thing that you're exceptionally good at — you become even more exceptionally good at that thing. And in practice, I think that's often what success looks like.

BENJAMIN: Yeah, these frames seem slightly contradictory. In one case, you have this product of factors, and that suggests it's equally valuable to increase any factor by the same percentage, so in absolute terms, you should focus on the lowest factors. On the other hand, you're saying, "Hey, look, getting to the tails of success on one factor seems really good." So what's going on there? These things seem contradictory.

SPENCER: That's a great question, yeah.

BENJAMIN: My guess is that it's something like... Well, actually, it's not a product of factors. And one reason for this is delegation, or not just delegation; people in the world focus on different parts of the production process. To end up with an iPad, someone needs to discover electricity, and someone needs to write a paper on that; maybe it's the same person. Someone else needs to come up with some more ideas, and then you need product designers and software engineers. And this is a whole long process throughout the entire thing. And definitely, there's an extent to which everything is gonna require some communication, for example, because if I discover electricity but I can't communicate that to anyone, it's useless. But maybe I just get the idea out there — and I've discovered the idea because I'm so good at discovering ideas — and then someone else can run with it and pick it up. Or they communicate it better to other people because that's what they're really good at. So I guess we have a very specialized world. It's like a production chain or a factory. And so there's not really a product, at least in any one person's output. Well, there is, but it's over fewer factors or something.

SPENCER: I agree. But I prefer to look at it differently. I prefer to think of it as, it still is a product, but you're allowed to outsource. The way you improve at some of them is not doing the thing, but getting someone else to do it or collaborating. You still need to have that variable in there. And the reason I like looking at it that way is just because I think then, you can still say, okay, it's a product of these factors. But it doesn't mean you yourself actually have to be good at every single one; if you can find a way to avoid that factor or to get someone else to do it or to collaborate, then that's a good way to solve the problem. You don't just have to get good at it yourself. But then to your question about, well, what is the deal? Is it really true that, for your really best skills, you should work on improving them? Wouldn't this product model suggest that you should always be working on your lowest skills? There, I have a mathematical explanation for this, which is that there are certain factors that are relatively uncapped, where you can increase it not just by ten percent, but by a factor of ten. So to get to really big numbers, you have to work on improving your uncapped skills to a really, really high level. An example of this would be creativity. Let's say someone's in a creative field — and not everyone is — but let's say you're in a creative field, I think it's reasonable to say that some people are ten or even 100 times better at being creative in certain domains than other people in that domain. You can get a really big multiplier effect by working on your greatest strength in some cases.
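The capped-versus-uncapped distinction in Spencer's argument can be made concrete with a toy product model. The factor names and all the numbers here are invented for illustration.

```python
# Toy model: output is a product of a capped factor and an uncapped one.
# "reliability" maxes out around 1.0; "creativity" has no practical ceiling.
def output(reliability: float, creativity: float) -> float:
    return reliability * creativity

baseline = output(0.5, 10.0)             # 0.5 * 10  = 5.0

# Fixing the weak, capped factor can at most double the output here...
fixed_weakness = output(1.0, 10.0)       # 1.0 * 10  = 10.0

# ...while multiplying the uncapped strength tenfold
# multiplies the output tenfold.
leveraged_strength = output(0.5, 100.0)  # 0.5 * 100 = 50.0

print(baseline, fixed_weakness, leveraged_strength)
```

Raising the weak factor is still worthwhile (it doubles output in this sketch), but the really large gains come from the factor with no ceiling, which is the point about uncapped skills like creativity.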

BENJAMIN: This feels not true to me. And the reason is that I just expect almost everything to actually, in real life, have diminishing returns. And how can you show that mathematically? I guess the thing you're trying to output here is some marginal utility. And the marginal utility you're outputting is almost certainly going to have diminishing returns overall. If you're working on the same thing over time, eventually your utility output is going to diminish, and that suggests that your product as a whole has to also be diminishing.

SPENCER: I agree that each skill has diminishing returns. Every additional hour working on that skill, you're gonna get less output. But I think I disagree overall. And the reason is, I think the way things work in the world is that there's a huge differential between being the best in the world and being the third best. The skill level difference between the best and the third best might not actually be that huge. But it's the difference between winning the gold medal and winning the bronze medal, or the difference between being incredibly, incredibly popular on YouTube and only very popular on YouTube. And so I think, because of the structure of the world, there are these huge extra gains to continuing to improve on your greatest strength.

BENJAMIN: Yeah, interesting. I feel like there's something interesting and confusing going on here, where I think, in the actual world, there's a big gap between being the best and being the third best in lots of skills. And the reason for this is just that you expect the skill distribution — again, these heavy-tailed distributions crop up all the time, and we're back to heavy-tailed distributions — to be heavy-tailed. You expect a really thin tail with the very, very, very best people on it, which suggests that, actually, the very best person is generally going to be much, much better than the second best; the gap between the top people is gonna be larger. I guess what's going on there is something like: they're both putting in as much work as they can to become incredibly good at this skill, but one of them, for whatever reason, ends up being better at it because of a bunch of other factors, like their genetics, or where they were brought up, or what training methods they use, that kind of thing. I don't know what makes Usain Bolt a fast runner, but you know what I mean; there's a bunch of factors going on, not just literally the amount of work you put into it.
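The claim that heavy-tailed skill distributions produce a large gap between the very best and the next best can be checked with a small order-statistics simulation. The distributions and parameters here are assumptions chosen purely for illustration.

```python
import random

random.seed(1)

def best_to_third_ratio(draws):
    """Ratio of the largest draw to the third largest."""
    top = sorted(draws, reverse=True)
    return top[0] / top[2]

n = 10_000
# Thin-tailed "skill": a normal distribution, shifted well above zero.
thin = [random.gauss(100, 10) for _ in range(n)]
# Heavy-tailed "skill": a lognormal distribution.
heavy = [random.lognormvariate(0, 1.5) for _ in range(n)]

# The thin-tailed ratio hugs 1; the heavy-tailed ratio is
# typically much larger, matching the "gap at the top" intuition.
print(f"thin-tailed  best/3rd: {best_to_third_ratio(thin):.2f}")
print(f"heavy-tailed best/3rd: {best_to_third_ratio(heavy):.2f}")
```

Under a thin-tailed (normal) distribution, the top few people are nearly interchangeable; under a heavy-tailed one, the single best draw tends to stand well clear of the rest.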

SPENCER: Well, I guess someone like Usain Bolt, if he finds a slightly better training method and puts in a little bit more effort, that might be the difference between breaking a world record or not, so there could be huge gains to just even improving slightly. That's what I'm getting at is, when you get to really, really good levels, even though you have very strictly diminishing returns per hour, you may have the opposite of diminishing returns in terms of your impact.

BENJAMIN: The question is, is that relevant to doing good? In cases like that, you have these sudden, discrete things happening — here, there's a line, and if he gets over that line, he gets more reward. Whereas, when you're doing good, when you're actually trying to have an impact, it's much more continuous; you're in a much more continuous world, where I just expect there to be fewer of these sudden, discrete lines. Then again, there are definitely circumstances where these do exist, like you actually come up with a great idea instead of not coming up with a great idea. That's a discrete 'I have the idea or I don't' — although even that's arguably continuous, the idea versus a worse version of the same idea, because probably you could iterate on it and come up with a really good one.

SPENCER: Yeah, well, I think it's probably domain specific, but probably the very best ideas for making safer AI are much more valuable than the tenth best idea. Or if you're trying to have a positive influence on the world through ideas, probably being the best at spreading positive ideas is actually much better than being the tenth best. But I'm sure it doesn't apply in every domain.

BENJAMIN: Yeah, safer AI is a fascinating one because I think there's just a disagreement here where some people are like, "Oh, what we need is this idea. When we have a great idea for creating safer AI, that will be the thing." And so the better the idea, the better. And other people are like, "Nah, what we need is, not quite grunt work, but lots of empirical work on the models we have, prodding them and testing them and seeing what happens." And that's much more like just more work, more work, much more continuous and probably diminishing returns to being the very best at this than the sudden jump.

SPENCER: Yeah, and that may ultimately be an empirical question that nobody has answered yet, but we can speculate.

BENJAMIN: Yeah, exactly. Predicting the future, it's hard.

SPENCER: Turns out, yes. [laughs] Predicting the past, a lot easier.


SPENCER: I noticed one thing in this discussion that you didn't mention is passion. Classic career advice always leads with 'follow your passion.' What do you think about that?

BENJAMIN: By this point, it's a meme that 80,000 Hours says, don't follow your passion. Everyone's aware of it — well, I don't know, lots of people must be; it's a very common thing we say. And I guess we have this whole article on what makes for a satisfying job. I think that's one part of passion: surely you'll enjoy it if you follow your passion. But it turns out, it's not that great a rule of thumb for finding a satisfying job. The things that make jobs satisfying are mainly: not having major downsides like a long commute; having nice colleagues and a nice work environment; and having the right level of stress — that's a big one — not too stressful, because then you feel stressed, and not too unstressful, because then you feel like you're not really achieving anything; you want that right zone. And obviously, the one I always want to talk about is: to have a satisfying career, you want to help others with it, too, and that's great. But none of these is 'follow your passion.' I guess the steel man, to me, of 'follow your passion' might be 'do something that interests you, do something that excites you.' I think, all else equal, those are good things. So maybe it is, to some extent, a guide to what to choose. But this goes back to this personal fit question of how you find what you're actually good at. In our experience, people's guesses about what they're passionate about don't correlate that well with the things that they're actually able to keep on doing day after day and be good at when they actually do that job. It's generally much more an empirical question. You've got to actually be a scientist; you've got to identify your uncertainties and do tests to resolve those uncertainties rather than just introspect about what makes you feel passionate.

SPENCER: Yeah, I think when it comes to passion, one important element of that is meaning. I do think it makes a difference if you pursue something you find deeply meaningful, but because your advice is around how do you help the world, maybe you get that for free to some extent. You're only really trying to advise people to do things that are deeply meaningful; whereas, a lot of careers actually are really not deeply meaningful, so you're not getting that baked into a lot of jobs.

BENJAMIN: Yeah, totally. And all the psychological evidence about what makes for a satisfying job supports this idea that it needs to be meaningful. It's just that most ways to make work meaningful come down to thinking that your work has meaning, which is not the same thing as following your passion. Lots of passions aren't actually about the work being meaningful. They might be about... If I'm really passionate about tennis, I might become good at tennis, but I'm probably not going to feel like it's filled with meaning. And I think the basic reason there is that it's not really helping other people. There's no core meaningfulness to it. Maybe I'm just wrong about what people mean by meaningful, but that's the impression that my brain has when I think about that word.

SPENCER: I like the phrase 'the impression that my brain has.' It's a great way to distance yourself from your thoughts. [laughs] When people come to me — let's say a startup founder — and they're like, "I'm trying to decide between these two ideas," and I get the sense that they're much more excited about one of them, to me, that actually is a really good tiebreaker, assuming all else equal, let's say; obviously, there are other considerations. And I think part of why I believe that excitement is important there is because, in startups, one of the biggest dangers is giving up. Probably among the top two or three things that kill a startup is that the founder gives up. And so I use excitement as a signal: it helps with stickiness, with getting through those difficult situations. What do you think of that?

BENJAMIN: Yeah, I think that makes sense. And I think this is why sometimes this is the right thing to do. Because when you're founding a startup, you probably don't have other things that can make you not give up. For example, I might do a job that makes me not give up just because the job is nice: I'm surrounded by nice people, I don't have very long hours, or something like that. And then I'm not going to give up because it's great. Whereas, founding a startup, just in most cases, sounds pretty horrible. [laughs] And you don't have these other things to stop you giving up. And so you really do need some kind of, 'Oh, I really, really want to do this, definitely. I'm actually really excited about this thing.' And so it's a good example of how, again, all these rules of thumb that we have at 80,000 Hours, they fail in some circumstances. And I think, yeah, this is one of those circumstances. I'd probably agree with that advice for the entrepreneur.

SPENCER: I do really appreciate how you critique your own advice. To me, it speaks well of your epistemics, and it actually makes me trust you more. But I'm wondering, it's such an unusual thing to do, especially since you opened up this conversation near the beginning, undermining your own advice. I'm wondering, how do people react to that? Do you think that makes people trust you more generally? Is that just a weird quirk about me? What's your thinking on that kind of strategy?

BENJAMIN: Actually, yes. I write articles most of the time and I send them out for feedback. And I try to talk to people who read the articles, see what they think. And one of the most common things we hear is something like, "Wow, you guys actually research things and then tell us what you thought about them." That includes like, "We don't know about this thing," or whatever it is. And you'd be surprised. It's a wide range of people. People who are fairly new to our content and our style, are like, "This is such a refreshing perspective on the world. We should really try and find out what the right answer is and then say when you're not sure." So yeah, it actually goes down really well. And it's actually surprised me since working here, how well that goes down, that kind of style.

SPENCER: One thing I've noticed when I talk to people about their careers, when they just informally ask me for my advice, is that when I'm talking to (let's say) non-Effective Altruism people, or more broadly, just people who haven't thought that much about how to help the world, but they are altruistic people, I often am nudging them, being like, 'Well, you should think about what actually would be deeply meaningful for you, what kind of impact you have." And then it often feels like, even though they're altruistic and they care, that maybe it hasn't weighed as much as I think it should, from their point of view, their own values (not just my values) in their decision making. But when I talk to Effective Altruists, I find myself sometimes giving almost the opposite advice. It feels like sometimes they'll be asking, "Well, should I do X or should I do Y?" basically asking which, according to some objective calculation that has nothing to do with me, is the objectively better thing to do? And I'm like, there is no objectively better thing to do. You have to take into account who you are much more. And I'm just curious if you have observed the same thing, almost like the advice that the two groups need is somewhat opposite.

BENJAMIN: Yeah, I don't talk to people about their career advice very often.

SPENCER: Okay. Because you're more writing articles and things like that.

BENJAMIN: Yeah, exactly. So I think it's pretty hard for me to give a good answer here. I would just say, "Yeah, this is good. You're saying good things here." [laughs] This is why we focus on personal fit so much. This is why we talk about this kind of thing. It really does depend on the person. There is no optimal thing for everyone. It's not the case that every single person should be going to work on technical AI safety or whatever it is that seems like the highest impact thing to them. This is definitely really important. And a related thing here is, I think it's really important to try to balance having an impact with just having a nice life. And there are lots of good arguments for this. There's a pluralism argument that maybe you should care about yourself a bit, too. And there's also a burnout argument that you're going to look after yourself better and be able to have more impact if you don't just solely focus on having an impact. And it's also just the fact that I think many more people are willing to go have an impact in a way that doesn't sacrifice everything about their lives than are willing to make that kind of sacrifice. And so generally, if we can give advice that lets people have a substantial impact and also allows them to have good lives, we're just going to get more people doing this. And that's going to be worth it as a trade-off.

SPENCER: It works really well with my life philosophy, which is that people have different values. And one of their values is helping the world but they also have values around their own happiness and maybe other things in their life, like creative expression or whatever. And so helping them find the right balance between those, to me, that makes a lot of sense, rather than trying to force them to go all in on just one of their values, which I think may be self-defeating in a lot of cases. I've broken the hearts of a few EAs who've come to me and said, "How do I calculate how many utils this project will do if I undertake it?" And I'm like, "Sorry, you can't," and they're like, "What do you mean 'you can't'?" [laughs] I'm like, "You just can't. There is no calculation that will tell you that to any degree of accuracy."

BENJAMIN: At the same time, I have a soft spot for this. It comes down to the fact that there are many different ways to produce this meaningfulness. And even if we buy my weird hypothesis that meaningfulness is related to helping people — and I think there's some psychological evidence out there for this, if I remember correctly — that doesn't necessarily mean that you should go do as much good as you can. Priests, I think, have super high job satisfaction because the work feels meaningful, and... I don't know what else really goes on there. Maybe they get paid, whatever. They're talking to people a lot, they work in communities, things like that — they have really high job satisfaction. And I think there is a jump you have to make, which is to be like: okay, for my life, I want to have a meaningful job, and that means helping people. But if I'm going to go help people, shouldn't it be better if, all else equal, I can help more people more, rather than fewer people less? To make that jump, you need this kind of scope sensitivity — that's what's going on here — and it isn't necessary for job satisfaction; it's just needed to actually help as many people as you can. And to do that, you often need to calculate; you often need to actually multiply some numbers together and see how many people you're helping, in what ways, by whatever it is you're planning on doing. That said, I don't think this is practically that useful most of the time. Doing these kinds of calculations is not very helpful; instead, you should just use vaguely quantitative arguments like, "Haven't you noticed there are lots of animals on Earth?" You don't need to worry about the exact number of animals to know that helping animals might be really important, if you could help lots of them. But ultimately, this is still doing some fundamental multiplying to get utils, or whatever it is.

SPENCER: Yeah, I agree with that. I think that some people expect you can plug 40 variables into a model and then it outputs how much utility something does. And I just find that those models are very brittle and tend to have — if you model the uncertainty — two or three orders of magnitude of uncertainty in them. They can be useful thinking tools, I think, to help you understand the factors and what drives the impact. I just don't trust the output of those models. One thing that I think is pretty cool is the way that you break down impact into a formula. And you mentioned it briefly before, but I thought it might be interesting to go through this idea of scale times solvability times neglectedness. And I think it's also cool that it's actually a mathematical truth. It's not like you're just making up an equation that's not really an equation. Do you want to just walk us through that really quick?

BENJAMIN: Yeah, this is fun. You said it's a mathematical truth, and that's sort of the case. What it is, basically, is the chain rule: you have some marginal utility — it's the derivative of utility with respect to work, because that's what we care about, the marginal utility from doing some work — and you can split it up into three factors. One way of doing this is the following. (This is the one I have in my head; there are actually a few different ways of splitting it up that give you roughly the same intuitive factors.) The first factor is the derivative of utility with respect to the amount you're able to solve some problem. And we call this scale or importance. And this is basically saying, "Here's this problem and, if I manage to solve the problem by a little bit, how much good does that do overall?" So it's capturing something like: what is the size of the problem? How important is it to solve?

SPENCER: So for every one percent of the problem solved, how much good in the world am I doing based on however I'm defining good? And obviously, that's a huge question — how to define good — but suppose we've just decided on a way to define good.

BENJAMIN: Yeah, exactly. It's: how much does solving this problem matter, given that definition of good? And then you can take the next term. The next term is the elasticity of solving the problem with respect to work. Elasticity — if you're not an economist — you can think of as a ratio of percentage changes, or a derivative of logs: what percentage of this problem gets solved for some percentage of work done on the problem?

SPENCER: That's called solvability, right?

BENJAMIN: Yeah, we call it solvability or tractability. And the reason we take the derivative with respect to the percentage of work is basically because, in most cases, we expect diminishing returns to actually working on something. And by doing this, we get to factor out those diminishing returns. So if there are only ten people working on something, and you go and work on it, you're doing ten percent additional work — that's a lot of additional work — and so you're gonna get a lot of additional problem solved. And if you're the 1,000th person working on something, you're only adding 0.1% additional work, and so you're gonna get less of the problem solved. By making this an elasticity rather than a derivative, we get to say it's roughly constant for the way in which we're going about solving the problem and doesn't depend so much on how many people or how much money or effort is already going into working on it.

SPENCER: I think of solvability as a percentage of the problem solved per percent increase in the amount of dollars going to the problem, or percentage of problem solved per percent increase in the number of people going into the problem. So if you were to increase the number of people working on this by 1%, what percent of the problem would that solve? Is that accurate?

BENJAMIN: Yeah, that sounds right.

SPENCER: Cool. Okay, great. So we've got scale, and then that's multiplied by solvability. And then the last factor in the equation is neglectedness which we multiply by. You wanna explain that one?

BENJAMIN: Yeah, neglectedness in this equation ends up being derivative of the percentage of work with respect to the actual amount of work, which is just one divided by the amount of work currently going into the problem, or in dollars, one divided by the amount of dollars currently going into the problem. The more neglected something is, the fewer people that are working on it, the fewer dollars there are going into it. And the reason we've separated this out is, like I was saying before, this solvability factor is supposed to be constant with respect to the actual solution that I'm pursuing, even if I change the number of people working on it, or the number of dollars going into it. That's why it's got percentages in there. And so we can take the diminishing returns that we expect in most cases, and that basically ends up turning into this neglectedness factor, which is how many people are already working on it.

SPENCER: Just to restate that, if people weren't able to follow: neglectedness is the percentage increase in resources per extra person working on the problem, or per extra dollar put into the problem. So if you already have a lot of people working on it, there won't be that much percentage increase in resources per extra person; whereas, if you don't have many people working on it, there will be a larger percentage increase in resources per person. Is that right?

BENJAMIN: Yeah. So that's how this all actually works out mathematically. [laughs] Hopefully, people can follow that. The thing that's great about this is, you end up with these three factors that are actually quite intuitive to work with in most cases, but not in all cases.

SPENCER: And then when you multiply them together, you actually get the amount of good done. Is it measured per extra person or per extra dollar?

BENJAMIN: Yeah, exactly. It's the marginal utility of additional effort, basically.

SPENCER: And that's just because, when you multiply these through, the first one, scale, is good done divided by percentage of problem solved. And then that percentage of problem solved cancels as you multiply through, and all the way at the end, through all the cancellations, you end up with good done per extra person or extra dollar.

BENJAMIN: Well, technically, they're derivatives and it's the chain rule, so they're not really canceling, but yeah, you can roughly think of it like that.

SPENCER: Right. Okay. So in the non-calculus version, the discrete math version, they multiply through, yeah, they're canceling. Right. So at the end, you get good done per extra person or extra dollar. How do you think someone should use this formula? What's the point of this formula?

BENJAMIN: The idea is, it really helps you identify what to work on. The first thing, scale, says, "If the problem being solved would do more good, that's a better thing to be working on," so it says look for big stuff. That goes back to this idea of being sensitive to the scope of things. If many more people are going to be helped by completely solving this thing, it's going to be better to work on.

SPENCER: Just thinking about that in concrete cases, take something like climate change. It'd be saying, "Well, look, if you could solve 1% of the problem of climate change, how big a deal would that be? If it'd be a really big deal — solving 1% of it would do a lot of good in the world — then that problem has a large scale. If solving 1% of it would barely do anything, then it doesn't have a large scale." Is that accurate?

BENJAMIN: Yeah, and to go back to your example from the start, solving 1% of the problem of helping the puppies in your neighborhood is not going to have that large a scale. And so, by this rule, it's not going to be as useful to work on.

SPENCER: Right. And then solvability, how would you use that?

BENJAMIN: Yeah, this goes back to the thing we were discussing earlier, which is the actual interventions you can use to affect something. How do you do it? If you are trying to help puppies in your neighborhood, maybe one way is to adopt one puppy at a time. Maybe another way is to set up an organization which helps all the puppies at once for the same amount of money. I don't know if that's possible — I don't know much about helping puppies — but that's the idea, is that there are many different ways of solving some problem.

SPENCER: And looking at the formula, it's the percentage of the problem solved divided by the percentage increase in resources. So it's basically saying, if you could increase the resources going to saving puppies by ten percent, how much of the problem would that solve? Would that solve a large percentage of the problem of unadopted puppies or only a small percent? Or with climate change, if you could increase the amount of money going into climate change by ten percent, would that make a really big difference in terms of the percentage of the problem solved, or would it only move the needle a little bit?

BENJAMIN: Yeah, although there's an ambiguity here, which is, is this factor an average for a whole problem like climate change or helping puppies? Or are you actually trying to use this entire thing to describe a specific solution to a specific problem done by a specific person? So we can apply this formula at the problem level or at the individual career level. And if we're looking at the individual career level, it's no longer how much would ten percent additional resources help solve climate change? It's more like how much would ten percent additional resources given to this specific carbon offset, which is planting trees in South America, help go towards solving climate change?

SPENCER: Because you're just narrowing the scope of it to that solution?

BENJAMIN: Yeah, exactly. Well, the framework has given me a more precise thing. So in the first case of climate change, it's telling me, on average, how important is it to work on climate change? Whereas, if I'm looking at a specific solution, it's telling me how useful is it to actually work on that particular solution? And you can apply this formula at different levels.

SPENCER: Got it. Okay. And so let's go to neglectedness now.

BENJAMIN: Yeah, neglectedness is, I think, the most interesting of these, because it basically says, if people are already working on it, it's way less useful to work on, which is kind of crazy because most of the obvious problems you'd come up with when you're trying to look at big problems already have people working on them, obviously; people want to do good. And so this basically says something like, you should be willing to do some pretty weird, wacky things because that's, on average, going to be better.

SPENCER: Right, as long as you can keep the other variables constant, not make them too much worse.

BENJAMIN: Or you can use it to compensate a bit. You might think it's actually lower scale in expectation but the neglectedness is sufficiently worth it, that it compensates.

SPENCER: Right. So this formula would suggest, for example, that finding new projects where there's very little money or resources, but for which you could get a quick improvement in that problem as you add more resources, and the problem is a big problem, then those could be especially good opportunities.

BENJAMIN: Yeah, exactly. And this is, in part, why we focus on existential risk at 80,000 Hours. The scale — that's everyone dying — is really, really big. And then you get these long-termist arguments, which argue that the scale's even larger, because it's also future people dying. And are they tractable? That's maybe where they fall down a bit; it depends on exactly how you're trying to solve them. Are they neglected? Yeah, to a large extent, they are. Climate change is probably the least neglected of the major risks. But there aren't that many people really working on preventing nuclear war, for example. And by my latest estimate, there were something like 400 to 500 people working on reducing existential risk from AI. That's probably gone up a lot, because that estimate was about a year ago now and there's been a lot of change in the last year. But it's a really good reason to focus on these things; they seem really big. And okay, you might be worried they're a little bit weird — but that's partly why they're neglected, possibly. There might be other reasons why they're neglected, but that's definitely one reason.

SPENCER: Do you notice any ways that thinking about this formula or these three factors can mislead people? Can they take them off course?

BENJAMIN: Yeah, one thing here is people actually trying to use these factors mathematically, I think, often takes people off course. And that's just because these are insane numbers that are really, really, really hard to estimate. How important is something? That's just not something that's easy to actually put a number on. And so what I'm saying here is: these are useful heuristics, and the factors are mathematically actually true. It's nice and neat and it will give you the right answers if you do it properly. But doing it properly is practically impossible. And so you might see people do some really naive calculation. A classic one is to do something like, "Well, if civilization doesn't go extinct in the next 100 years, then there'll be (like) 50 billion years of humans, and they'll spread over the galaxy so there'll be hundreds of trillions of people in the future, and so clearly, this is the most important thing." And I'm like, great, that's a nice multiplication. There are a bunch of reasons why you probably shouldn't actually take that calculation seriously. For example, almost anything you do right now can be undone in 100 years. So if you're really thinking about those people, you probably want to discount their number. I guess this is technically a tractability discount, but this is the confusion: if you're trying to put this entire scale into just one factor, then you have to separate out the tractability. But you might want to do something like: what are the chances I can actually affect people in 1,000 years' time? And when you start introducing that kind of discount, you're no longer getting these gigantic numbers; you're getting something much more comparable to other problems. Another way I think this often misleads people is, some things sometimes have increasing returns. I think it's pretty rare, just empirically, but sometimes this is the case.
And this framework doesn't actually assume decreasing returns but people often think it does, because it's got this neglectedness factor. And what's going on here is, this neglectedness factor is always there. And when things have increasing returns, you actually see increasing returns in the tractability factor. All that clever maths we did to try to make sure that it's constant with respect to the amount of work, just doesn't work.
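The persistence discount Benjamin describes can be made concrete with a toy multiplication. Every number here is invented to show the shape of the argument, not an actual estimate of anything:

```python
# Naive longtermist calculation vs. the same calculation with a
# persistence discount (all figures made up for illustration).

future_people = 1e14        # a "hundreds of trillions" style figure
naive_scale = future_people  # treat them all as affected -> gigantic number

# Chance that anything you do now still affects people that far out
# (pure hypothetical -- this is the number doing all the work):
p_persists = 1e-10

discounted_scale = future_people * p_persists

print(naive_scale, discounted_scale)  # 1e+14 vs 10000.0
```

The point is not the specific values: once any plausible persistence probability enters the product, the "clearly the most important thing" conclusion no longer follows automatically from the raw headcount.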

SPENCER: Another factor that may be useful to think about is leverage. How do you think about that in a career?

BENJAMIN: Leverage is really interesting. To explain it, go back to the concepts that we've been talking about, these three factors for your career: there's your problem, there's how much you can contribute to solving that problem, and there's personal fit, which is how good you are at solving that problem. And so far, I've mainly been talking about this middle factor — how you actually go about solving the problem — in terms of something like, well, do you distribute malaria nets or malaria vaccines or something like that? The idea of leverage is that you can get multipliers on that effect. It's an analogy to a physical lever, or an analogy to finance. In finance, 'leverage' means you borrow a bunch of money and buy an investment with it. If the value of the investment goes up, you can pay back what you borrowed, because you're selling it for more than you originally bought it for, and you get some money on top. And borrowing increases the amount of money you gain, because your gain is a share of the whole investment: if you borrowed some money and put in your own money as well, you'll earn more and still be able to pay it back.

SPENCER: Presumably you're not talking about people actually borrowing money, right? Is it a metaphor?

BENJAMIN: Sorry, yeah, not giving people license to borrow money. Sometimes it might be good, in some circumstances. You should talk to a financial expert. But if you have a mortgage, for example, you're leveraged on the value of your house. You're getting much more returns on the value of your house if it goes up than if you don't borrow any money.
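The mortgage case can be put in numbers. These are hypothetical figures, just to show how borrowing magnifies the return on your own money:

```python
# Hypothetical: a 500k house bought with 100k down, the rest mortgaged.
house_price = 500_000.0
down_payment = 100_000.0

appreciation = 0.10                      # house value rises 10%
gain = house_price * appreciation        # 50,000 gain on the whole house

cash_return = appreciation               # 10% if you'd paid all cash
leveraged_return = gain / down_payment   # 50% on the money you put in

print(cash_return, leveraged_return)  # 0.1 0.5
```

The same multiplier works in reverse if the price falls, which is the risk point Benjamin makes next.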

SPENCER: Right, because usually, you don't put down the full amount of money when you buy it.

BENJAMIN: That's right, yeah. Now, it actually increases the risk that you'll lose money, too. If the value goes down, you'll lose much more, and if you can't pay back what you borrowed, you're a little bit screwed. So it does increase your risk. And interestingly, that's also true in the career case. The idea of this leverage is that it's going to increase the amount of the solution you're able to get. And if your solution is actually bad, then you're going to do more of the bad thing, and that's going to make things worse overall. So what does this leverage thing actually look like? Probably the simplest example is a money-based one. I can imagine a career where either I go out and distribute malaria nets, or I do what we at 80,000 Hours call earning to give, where I earn a bunch of money and use that to pay other people to go out distributing these malaria nets. And it turns out, if you can go into a fairly high-earning career, you'll probably be able to distribute far more malaria nets by funding the jobs of people to do that distribution than by doing it yourself. And that way, you're getting leverage; you're magnifying the impact of the solution you've chosen, which is distributing malaria nets, by earning that money.
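A toy version of the earning-to-give comparison — the numbers are invented, and real cost-effectiveness estimates are far more involved than this:

```python
# Direct work vs. funding others to do it (hypothetical numbers).

nets_distributed_yourself = 5_000        # nets per year, doing it directly

donation = 60_000.0                      # dollars donated per year from a
                                         # higher-earning career (hypothetical)
cost_per_net = 5.0                       # hypothetical all-in cost per net

nets_funded = donation / cost_per_net    # nets per year via funding others

print(nets_funded / nets_distributed_yourself)  # 2.4 -- the leverage multiplier
```

Whether the multiplier is above or below 1 depends entirely on the assumed salary and costs, which is exactly why this is a comparison to actually run rather than a rule.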

SPENCER: Do you think that most people should be trying to find ways to use leverage in their careers? Or is this more of just something to keep in mind that you might be able to use but it's not going to apply to most people?

BENJAMIN: I actually think pretty much everyone should be aiming at a high-leverage career, and earning to give is just one example of this. And the reason is just that you can get really big multipliers. Here are some other examples. If you go work in a government or some other really large institution, and you get that institution to change what it's doing, that's a lot of effort that is now being directed the way you wanted it to be directed. Government budgets are just gigantic. Their workforces are huge. The same is true of international organizations like the UN, or really big companies like Microsoft or Google. They just have a huge amount of power that you will never have by yourself. And so if you can get them to implement a solution to the problem that you think needs solving, then you'll probably end up having much more impact in your career. Pretty much all the career paths we recommend at 80,000 Hours, we recommend because we think they're high leverage; which solution you apply them to is pretty generic. If we say it's worth going into policy, you can apply that leverage to a bunch of different solutions. And ultimately, one of the more important deciding factors for your career path is whether it's a high-leverage career path.

SPENCER: What would an example be of a career path that sounds really good but actually is low leverage, and that actually takes a lot of the benefit out of it?

BENJAMIN: I think a really interesting example here is being a doctor. One of our most controversial articles of all time — it's not really that controversial, but definitely popular — is our article arguing that doctors don't save that many lives. It's a classic example of a career that somebody who wants to do good might go into. They're like, "I'm actually going to go in and help people. I'm going to administer the medicines myself. It's going to be great. I'm going to save so many lives." But if you do the calculations, well, it turns out they don't. This diminishing returns concept is coming up again: in developed countries, you have a lot of doctors, and each additional doctor adds only a small percentage to the workforce. If you weren't there as a doctor, probably somebody else would have been. Maybe the main effect you have is slightly reducing waiting times or something like that, rather than ensuring people get treatment at all. And so overall, we estimated that you save, order of magnitude, ten lives over your career by being a doctor, which is great; it's fantastic. Saving ten lives, wow, that's an amazing thing to have done.

SPENCER: It's so good compared to the vast majority of people. It's a wonderful thing to do.

BENJAMIN: It's fantastic. But on the other hand... It's really hard to do comparisons here, because it's really hard to compare this person's life versus that person's life. But GiveWell estimates you can save a life in the developing world for about $5,000. So you can probably save more lives by being a doctor in the West and donating money to life-saving interventions in the developing world than by doing your actual job, which is an example of just how powerful this leverage thing is.
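As a rough back-of-the-envelope sketch of that comparison: the ~10 lives and ~$5,000-per-life figures come from the estimates mentioned in the conversation, but the donation amount and career length below are purely illustrative assumptions.

```python
# Order-of-magnitude comparison of two ways a doctor might save lives.
# The first two figures are the estimates discussed above; the donation
# amount and career length are hypothetical illustrations.

LIVES_SAVED_PRACTICING = 10   # 80,000 Hours' rough estimate for a Western doctor
COST_PER_LIFE_SAVED = 5_000   # GiveWell's rough cost to save a life, in dollars

annual_donation = 10_000      # hypothetical: $10k donated per year
career_years = 40             # hypothetical career length

lives_saved_donating = annual_donation * career_years / COST_PER_LIFE_SAVED

print(lives_saved_donating)                           # 80.0
print(lives_saved_donating / LIVES_SAVED_PRACTICING)  # 8.0x the direct-practice estimate
```

Under these made-up donation numbers, the leverage from funding others is nearly an order of magnitude; the point is the structure of the comparison, not the exact ratio.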

SPENCER: Have you ever tried to calculate how many lives that you've saved in your career?

BENJAMIN: No. [laughs] Ultimately, with doctors, you have huge amounts of data about the job market and about the problem they're trying to solve, in a way that we just don't have elsewhere. And this goes back to that idea of doing nonsense mathematical calculations which give you some number that doesn't really mean anything. I don't think it'd be that meaningful.

SPENCER: So when would it be really beneficial to be a doctor? Maybe in a scenario where you're doing some kind of life-saving surgeries, or really wellbeing-improving surgeries, that wouldn't have happened otherwise, like maybe you're going into countries where there are no eye surgeons and you're doing hundreds of cataract surgeries that wouldn't otherwise have occurred?

BENJAMIN: Yeah, maybe this is kind of a cheat answer but my obvious answer is using your skill as a doctor to do biomedical research because, if you do biomedical research, you get to develop ideas, and developing ideas is another way of getting huge leverage. Because once you have an idea, other people can pick it up and use it, and you can spread it over the entire world. For example, if you're able to develop some treatment for a disease, suddenly, many, many people are going to be able to be treated for that disease, many more than could ever have been treated if you just went around administering that treatment one by one.

SPENCER: Actually, now that it's making me think about it, there was a doctor that I saw a little documentary about, who went into countries where they didn't have people doing cataract surgeries. But not only did he do them, but he actually did them while teaching the doctors there to do them. And so he would leave and then, from then on, the doctors there could do these surgeries, and they would cure blindness. I think that's a nice clean example; the doctor could have just done them himself but, by also training the people, he increased the leverage a lot.

BENJAMIN: Yeah, it's a great example. One problem with this idea of leverage is, I think people just sometimes find it a bit confusing. And I think it's this split between the solution you're going for and the leverage. It's not a neat split; sometimes it's not really clear. So maybe if you're doing research, you might think, well, the research is the solution. How is this leveraged in some way? But it's at least a decent rule of thumb to ask, is there a way of taking the solution and multiplying it up, especially in the early stages before you even have a solution in mind. The rules of thumb are things like: work in governments, build organizations, come up with ideas, communicate ideas, earn money, find other people who are trying to do lots of good and help them, mobilize others, build communities. On average, these things are going to give you better outcomes; you'll help more people by doing them than by trying something that doesn't fall into this category.


SPENCER: Before we wrap up, there are some non-career topics I want to discuss with you that I think you have an interesting perspective on. The first is about updating too much based on evidence. I'm thinking in terms of a rationalist view, where you get some evidence and you want to do a Bayesian update on it: you have to consider the probability of seeing this evidence if the hypothesis were true, compared to the probability of seeing the evidence if the hypothesis were not true, and change your mind accordingly. When do you think people do too much updating?

BENJAMIN: I think people do this all the time. It's really interesting because there's lots of focus — on this podcast or just in general — on, "Oh, it's really important when you see evidence that you actually change your mind." And that's so important and so crucial. But we never really hear the question of, well, can you change your mind too much? Obviously, you can. From a Bayesian perspective, there's Bayes' theorem; Bayes' theorem gives you one factor, one number which tells you how much you should update. And yeah, you could update too little, but you could also update too much. And I think that happens to most people a lot of the time.

SPENCER: What immediately comes to mind for me is one of the most ubiquitous thinking errors I see: people do something that helped them, and then they become convinced that it's generally helpful. You see this so much with, like, nutrition things.

BENJAMIN: Oh, nutrition is such a good example.

SPENCER: Oh my gosh. Someone goes on a certain diet and they're so convinced that everyone should be doing this thing that just worked for them.

BENJAMIN: Yeah. I think there are two ways this might look. It depends on whether you're updating too much in general, by roughly the same factor every time, or whether there are specific areas in which you're updating too much. If you just change your mind too much in general when people say things, you might see your beliefs fluctuating a lot. Maybe you always believe the last thing someone told you. And I find myself doing that from time to time; it's quite an easy thing to do after just one convincing conversation. Then you talk to someone else about the same thing and you realize, "Oh, yeah, that's a good reason why they were wrong. I should have thought of that at the time," and you've changed your beliefs too much just because of that one conversation. The other way is if it's inconsistent. And I think this is more common, because it's how humans work; you're never perfectly consistent in how much you over-update. You'll end up with certain areas — like nutrition, for example — where you're way too confident about certain things. You might find there's a particular area of your beliefs where you're just much, much more confident than the people around you. And my hypothesis — at least on this model where everyone's otherwise updating rationally and nothing else is going on — is that that's at least in part because you've updated too much on the evidence you saw about those beliefs, and tended to do it more in one direction. This also happens, I should say, when you update more on positive evidence than on negative evidence. You can still update perfectly reasonably when you see something that's against what you believe; but as long as you're updating more when you see stuff that's in favor of what you believe, you'll end up really, really confident about that thing.
There's one exception to this, which is when, in real life, the evidence mainly points in one direction, when it's really obvious. I might be updating too much on the existence of the table in front of me right now, but it doesn't really matter, because I'm not going to get much counter-evidence. Even if I'm updating more in the positive direction than the negative direction, I'm still gonna end up with this very, very confident belief. So when every piece of evidence you observe points towards one thing, you'll just end up really confident, and that's perfectly valid. It's the areas where it's trickier to come to conclusions, where there's evidence pointing in all sorts of different directions, where this is more likely to be a problem. So one rule of thumb you could use here is to ask, "Is there an area where it seems like people disagree a lot, but I feel really confident?" That suggests, hmm, maybe you've updated too much, or updated badly, or updated too much in one direction when you're thinking about that thing.
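The asymmetric-updating failure mode described above can be simulated directly. This is a toy sketch, not anything from the episode: an agent sees perfectly balanced evidence, but applies an inflated Bayes factor (3 instead of 2) whenever an observation confirms their belief.

```python
def posterior_odds(prior_odds, evidence, bf_confirm, bf_disconfirm):
    """Update odds over a sequence of observations.

    evidence: +1 for a confirming observation, -1 for a disconfirming one.
    A calibrated agent uses the same Bayes factor in both directions; an
    asymmetric agent inflates the confirming factor.
    """
    odds = prior_odds
    for e in evidence:
        odds *= bf_confirm if e > 0 else 1.0 / bf_disconfirm
    return odds

# Perfectly balanced evidence: ten confirming and ten disconfirming observations.
evidence = [+1, -1] * 10

calibrated = posterior_odds(1.0, evidence, bf_confirm=2.0, bf_disconfirm=2.0)
asymmetric = posterior_odds(1.0, evidence, bf_confirm=3.0, bf_disconfirm=2.0)

print(calibrated)  # 1.0  -> no net movement, as it should be
print(asymmetric)  # ~57.7 -> strong confidence from evidence that nets to zero
```

The calibrated agent ends exactly where they started; the asymmetric one ends up at roughly 57:1 odds despite the evidence being a wash, which is the "really confident in a contested area" signature Benjamin describes.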

SPENCER: Another thing this makes me think about is the way that people may update too much in really emotional situations. A classic example would be, someone has a traumatic experience, and then they're so freaked out by it that anything vaguely resembling it makes them fearful, and they start overestimating the risk of doing things that are vaguely in that ballpark. Do you think that that's a real thing?

BENJAMIN: That definitely seems to be a real thing. I feel like it's not an example of what I'm talking about here because I think what I'm talking about here is a failure of your rational mind to adapt to the evidence in a certain way; whereas, I would classify that as a bias, where there's a reason why you've done this wrong. And I'm not claiming this is the result of some kind of social bias or whatever. I'm just claiming, when you're responding to evidence, there are all sorts of reasons that might affect just how much you respond to that evidence. It's definitely possible to respond too much to that evidence as well as too little. Here's another interesting example of the way you can use this. I often think — and maybe this is slightly more controversial — that people update too much when they hear philosophical or first principles arguments for things. Let's say somebody comes to you and they're like, "I've got this great first principles argument that, if the US government implements this particular tax, it will raise government revenue." And I'm like, "Hmm, that's a really persuasive argument. Maybe I'll buy that. In fact, I can't see any flaws in that argument whatsoever. I should buy that with 100% probability. Oh, my God, what an incredible argument." One interesting thing here is you can compare that argument, how much you should update on that argument with the empirical evidence you might see that could convince you otherwise. In this case, what might convince me otherwise? Well, I guess if somebody did an RCT, and they showed, with some certain power and significance, that it actually decreased government revenue instead of increased it, and this nice, neat argument was wrong. It doesn't matter how beautifully argued it was or how much I was really convinced, I should probably still end up believing the empirical study. I mean, it depends how good the empirical study is, but assuming it's a decent empirical study. 
Then we can ask, "What's the strength of this empirical study?" We can work that out mathematically: a study's significance and power (assuming it replicates, which maybe it doesn't, but assume it does) give you a Bayes factor. That's a multiplier: how much you should multiply your initial odds by to end up with your posterior odds. And so you know that if you update on this initial first-principles argument, and then later see such a study, you should end up somewhere roughly back where you started, maybe a little uncertain, or maybe even relatively confident that this person was wrong about their first-principles argument. And that means the strength of the update from their argument cannot be larger than the strength of the update from the empirical paper that would convince you they're wrong. So this bounds the strength of an update from first-principles arguments. What I see instead, I think, is people updating too much on arguments just because they sound neat.
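To make that bound concrete, here's a minimal sketch under the idealized assumptions in the conversation (the study replicates; the α = 0.05 and power = 0.8 values are conventional illustrative choices, not from the episode). A positive result from such a study carries a Bayes factor of power/α.

```python
def bayes_factor_positive(power, alpha):
    """Bayes factor of a positive result:
    P(positive result | effect is real) / P(positive result | no effect)."""
    return power / alpha

def update(prior_prob, bf):
    """Bayesian update: convert probability to odds, multiply by the
    Bayes factor, then convert back to a probability."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    posterior_odds = prior_odds * bf
    return posterior_odds / (1.0 + posterior_odds)

bf = bayes_factor_positive(power=0.8, alpha=0.05)  # 16
posterior = update(prior_prob=0.5, bf=bf)          # 16/17, about 0.94
```

If a single such study should be able to pull your belief most of the way back to where it started, then the first-principles argument itself can't justify an update much stronger than a factor of about 16, however neat it sounds.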

SPENCER: So with something that sounds like an airtight logical argument, we feel like we have to believe the thing, and believe it strongly, because logic has this cachet of, oh, well, logic is perfect. I don't know, though. I wonder if this is something that Effective Altruist communities are much more prone to; whereas, I suspect that a lot of people who are less analytical, less philosophically inclined, if they heard a theoretical argument, they might be like, "Uh, yeah, that seems reasonable," but it may not actually nudge their beliefs that much. What do you think about that?

BENJAMIN: Yeah, I think that's true. And I think that's one of the really interesting things about this, is I think the Effective Altruist and rationalist communities put a lot of focus on making sure you don't update too little, and I'm like, "You're missing a thing which is making sure you don't update too much." My favorite phrase for this is, "Don't open your mind too much or your brain falls out." [laughs]

SPENCER: Yeah, I have a heuristic I use which I think is helpful, to me at least, which is that I know there will be invalid arguments I can't find the flaw in, or at least can't find immediately, the first time I hear them. I try to keep that in the back of my mind: "This argument sounds convincing, but that doesn't mean it's correct." And I know I've been confused in the past; I've made mistakes on arguments that seemed airtight. There's actually a really lovely example of this. There's a claim that, sometimes, when you start with water at a hotter temperature and then cool it, it can actually freeze faster than if you'd started at a cooler temperature, and there have been empirical studies claiming to find exactly that. Now, on first principles, it sounds impossible: if you start at a warmer temperature and cool the water down, at some point it has to pass through the cooler temperature, and it seems like, at that point, wouldn't things just be the same as for the other batch? The batch at the warmer temperature has to first get down to the temperature of the cooler batch and then do additional stuff; whereas the batch starting at the cooler temperature just has that little bit left to go to get frozen. So on first principles, wow, it seems like there's no possible way the warmer water could freeze faster. Empirical studies contradict this. Let's assume for a second that those studies are right. What I think is really interesting is thinking about all the assumptions we bake into that seemingly airtight argument that the warmer water can't freeze as fast. For example, we assume that the water is well mixed. But maybe the cooler water starts forming some ice crystals that make it less well mixed; whereas the warmer water, because it started warmer, ends up better mixed, and that can actually make a difference to the freezing. I'm not saying that's actually what's happening. But the point is, there are actually a ton of assumptions, even though it seems like no assumptions are being made.

BENJAMIN: Yeah, I'm not quite sure what the correct response is in the abstract case, because I do want to believe things if it seems like they're true. One heuristic I use personally: when someone's told you something convincing, update halfway (believe them to half the extent you otherwise would have) and then, over time, if you don't come up with reasons why they're wrong, gradually accept the rest. And the reason for doing it over time is, maybe you talk to someone else the next day and they point out, "Oh, actually the water wasn't mixed properly and that explains it." Or, what I would guess is going on here is something to do with nucleation sites, or the way the ice actually crystallizes; that'd be my guess, because that's one of the weird things about water. Somebody points this out to you, and you go, "Oh, yeah, great. Now I shouldn't update the rest of the way." Because otherwise, you risk this problem where your beliefs just fluctuate wildly: the first person gives you this great argument, then the second person points out the flaw, then you go back to the first person and they point out a flaw in that, and your beliefs just jump around. What you want is for your beliefs to stay at roughly the same level each time.

SPENCER: Jumping right to the correct level of belief rather than rebounding back and forth. Yeah, absolutely. Logical arguments that seem airtight absolutely should move our probabilities on our beliefs. The question is how much, and I think looking for hidden assumptions, and checking the argument against other smart people who don't agree with it and seeing why they don't, can be useful tools.

BENJAMIN: Yeah. Alongside this tool, think of the argument as a piece of empirical evidence in itself. Think of this argument as if it were an observation or a study, and then try to be like, "Well, how much would I update on that?" And then I think you're much more likely to treat this in the appropriate way, rather than just being like, "It's logical so I gotta believe it."

SPENCER: But there's also this outside view thing of, how many times have I been convinced by something that seemed airtight and then actually turned out to be wrong because I learned something else? An example that comes to mind for me is minimum wage and unemployment.

BENJAMIN: Classic one.

SPENCER: Yeah, there seems to be such a strong argument based on economic principles that, if you raise the minimum wage, that should increase unemployment because basically, you're thinking about it from the point of view of a company. It's like, "We have to pay people more than we were paying them. We only have a finite budget, so we're gonna actually have to hire fewer people." Or we're going to fire some people so that we can still not go over budget. And then you can also make really strong arguments based on supply and demand, if you're using basic economic theory. And yet, when they do empirical studies, the empirical studies sometimes find an unemployment effect from raising the minimum wage, and sometimes they don't. It's baffling.

BENJAMIN: Yeah, this is really famous. I think it's Card and Krueger who did this really big study in the early '90s in the States, and they found that this core prediction of economics basically just didn't seem to hold up. Some people critiqued that study, and there have been a whole bunch of other studies since, and they go in different directions, so it's not really settled. But yeah, it's a really good example, and I use it a lot. And I think the reason I use it a lot is just how simple the arguments for the prediction seem. You don't get better economic arguments than that, not really. You don't get clearer predictions about society or the world, at least in terms of how clear the logic is, how simple the argumentation is. The idea that that one doesn't work... It's just mind-blowing in some sense that logic doesn't work that well compared to going out and measuring the world and observing things. Now, obviously, the argument makes a bunch of simplifying assumptions, but they're pretty decent as simplifying assumptions go. Sure, people aren't actually perfectly rational actors, but on average, they roughly are.

SPENCER: Yeah, and then you can start trying to use economic theory to explain why you don't always seem to get these extra unemployment effects. You can say, "Well, maybe if they can't pay people less, they make the job worse in other ways, like making them work longer hours," or maybe they find other ways to squeeze more out of people, working them harder during the hours they are working, and maybe it all balances out. But you start getting a little strained trying to explain it. [laughs] Or maybe the people who work minimum wage jobs aren't the people you think; maybe a bunch of them switch to working illegally. Who knows, really?

BENJAMIN: This is a really interesting example of the fact that, for every argument that sounds convincing, you can come up with a convincing argument in the opposite direction. [laughs] One thing I sometimes say to people... You can be talking to someone, and they can be really convincing, and now I'm doing this thing where I only update about halfway, and they ask, "Why aren't you convinced? I just gave you a really rigorous argument." All I can say is something like, "If I go home and spend an hour thinking about this, I reckon I could come up with an argument that's just as convincing in the other direction." And so I can't update that much, because I could go do that, and I have to update in expectation of that fact.

SPENCER: Right, but if you try to refute it, and you spend too much time thinking about it, and show it to other smart people and so on, then you can maybe more fully update if it doesn't...

BENJAMIN: Yeah, exactly. It's like the failure of your attempt to refute it that's leading to the full update.

SPENCER: I think it's also a little unfair if someone's pressuring you in a conversation like, "Well, you can't find any flaws in my argument so you should, right now, change your mind." And it's like, no, you should take some time to reflect on it. You don't want to jump the gun.

BENJAMIN: Yeah. It's rarely literally like, "You have to change your mind right now," some kind of gun to your head thing. I feel like it's often just more like, "Great. I just don't have anything to say to you. I guess I'm convinced. I can't come up with a reason right now. Huh." [laughs] So it feels like there's this pressure to be like, "Oh, yeah, that's great. I agree with you totally. Well, then, apply everything." Instead, I go, "Yeah, cool."

SPENCER: Benjamin, final topic before we go: people often are really positive on economic growth. They think, well, if we can get the economy to grow faster, that's really beneficial because it will help raise people out of poverty and we'll just have more to go around and so on. What do you think of that argument?

BENJAMIN: This argument, I think, has a lot to it. You get gains from trade: if I pay you for something, and I want to give you that money and you want to give me the thing, then we both gain from the transaction. Isn't that fantastic? More of this happening is great. Unfortunately, as far as I can tell, there are really two parts to my worry. First, I don't think growth has a clearly positive track record. What's going on here, basically, is that any time there's a transaction between two parties who get gains from trade, there can also be externalities: effects on those not involved in the transaction. Sometimes they're positive, and that improves those lives; sometimes they're negative, and that worsens them. And to me, the really obvious thing that's happened since, say, the Industrial Revolution — which I'm picking because that's when the current period of really rapid exponential economic growth began — is that you also get an increase in externalities on beings that are not involved in these trades. The obvious example to me, because I really care about animals, is factory farming. We currently have over 100 billion animals in factory farms. We kill 50 billion of these factory-farmed animals every year. They're living awful, awful lives. And this is, in a very fundamental way, a consequence of the growth we've had since the Industrial Revolution. Is it clear to me that growth has had a positive track record? No. Do I think it has, overall? I guess. I think I do; I'm not really sure. Now, to be clear, I do think degrowth would likely be bad. I don't think we should be shutting down the economy or depressing it at all. But what you often hear people saying is, "Oh, wouldn't it be great if we could marginally increase the growth rate? Wouldn't that be a really effective problem to work on in a career?"
And that's just one of the reasons why I'd be a little bit unhappy with that. One response to this would be, "Okay, so you don't think it's had a clearly positive track record, but all the technology we have, all our capacity as a society, is based on this growth. Our ability to research new things, to find new things, to fund research: it all follows on from economic growth. Surely this is good in the long run. It's giving us lots of capacity, lots of option value, lots of ability to affect the world." And my instinct, in response, is that to take this really, really seriously, what we need to grapple with is the negative effects of those technologies. In particular, to me, the most obvious negative effect of technology is the risk it poses to our very existence, the risk of existential catastrophe. So there's this question: what effect does growth have on existential risk? And this is an area I did some research on a couple of years ago.

SPENCER: So we have these two arguments against economic growth per se, growth just for its own sake. One is that the mistreatment of animals has vastly gone up. The second is that some of these technologies potentially pose a risk to all of society. Are there any other arguments you'd point to against economic growth? I just want to make sure I have the full picture.

BENJAMIN: No, no, they're the two I would think about. I think growth has probably been really good for most humans — at least humans alive today — especially in its effect on the developing world and reduction of poverty, for example.

SPENCER: So if you don't care about animals, and you don't care about the entire world being destroyed, then growth is good, right?

BENJAMIN: Yeah, I think that's a good argument, yeah.

SPENCER: [laughs] What do you think people who are just pro-growth — growth for its own sake — would say about these two issues? Or would they just say, well, maybe the existential risk is not that big and maybe factory farming is not such a big problem?

BENJAMIN: Yeah, saying factory farming is not a big problem is a pretty common response, I think. Most people don't care about animals quite to the extent that I do, so I think that would be a fairly common reaction; it's pretty difficult to decide exactly how much you care about each chicken, for example. On the existential risk side, one really interesting response comes from a paper by Leopold Aschenbrenner, where he argued that growth actually helps decrease existential risk, even though it leads to the creation of these technologies. The idea is that risk goes up with human activity: the more activity we do, the more risk of accidents there is, and the more chances we'll build some dangerous technology. There are more externalities; carbon dioxide emissions, for example, are directly related to economic activity, and so this creates risk. But at the same time, you can spend money to reduce that risk. You can spend money on defensive technology, on safety. What Leopold did is split the economy into two categories: there's standard consumption, which increases the risk, and then there's a safety sector, a safety category of spending, which decreases the risk. And what he found, basically, is that as we get richer, a few things happen. First, the amount of risk we're creating each year increases, because there's more human activity. At the same time, people's lives get better. The value people place on being alive increases, and therefore they're more willing to spend money keeping themselves alive, so they actually spend money on reducing these risks. And, effectively, the cost of spending on safety decreases, because there are diminishing returns to consumption: if you're already very rich, you just don't get as much from an extra dollar of spending as you would if you were quite poor. So it's cheaper to divert money from consumption to safety if you're already quite rich, and your value of being alive is higher. Over time, as we get richer, even though the risk increases, spending on reducing those risks also increases, because it becomes rational to do that. You end up with a time when risk is highest, but then it falls again. And what Leopold found is that, for most parameter values — or at least the most plausible ones — just speeding up this whole process was good: you end up compressing this perilous time period.

SPENCER: Huh, that's interesting to me. Because if you look at something like risk from bioterrorism, it seems to me there's just vastly more money spent on things that could make it easier to do bioterrorism than there is on things trying to stop bioterrorism. And I would say the same applies to climate change, the same applies to AI risk, and so on. So even if the proportion of that spending that's on the safety side is going up as people get wealthier, it still seems to me it's dwarfed by the spending that's creating more of the problem. But I'm not sure if that contradicts what he's saying, or maybe I'm just not seeing his full argument.

BENJAMIN: I think that's definitely a piece of evidence that this model is missing, and yeah, I do think this model is missing something. In fact, I think the key thing this model is missing is that, when he did this, he basically assumed that the actors were perfectly moral. They were utility maximizers. They were going to spend on safety as much as would be optimal for society as a whole. I was like, "This seems like a really dodgy assumption. [laughs] Why would people do that?" We can go to the other extreme where, instead of assuming people are totally moral, they're instead these selfish, rational actors, and that, I think, looks more like the world we're in. And so yeah, when I was looking at this, I was like, let's find what we call the Nash equilibrium. The Nash equilibrium is a game theory term for when no one has anything to gain by choosing to change only their own strategy. That is, everyone on Earth is basically defecting in this prisoner's dilemma of "how much should I spend on safety?" No one has any incentive to spend more on safety because, if they do, they get to consume less, and the safety is helping everyone, and they don't get that much of the benefit of it. So they'd rather just spend on their own consumption. And when I went through this model again, no shock, you find that actually, safety spending is much lower, and under the same parameter values, humanity eventually goes extinct with near certainty.
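
A similarly rough sketch of the contrast Benjamin draws here (again my own construction, with invented functional forms, not the paper's equations): a "planner" who internalises the full social benefit of safety spends generously, while each of many selfish actors captures only a tiny fraction of the benefit of the safety they personally pay for, so equilibrium spending collapses and long-run survival probability falls toward zero.

```python
import math

# Toy contrast between cooperative and selfish (Nash-style) safety
# spending. NOT the paper's actual equations; all forms and constants
# are invented for illustration only.

def annual_risk(output, safety_share):
    """Annual catastrophe risk: rises with activity, falls with safety spending."""
    return (0.0001 * output) / (1.0 + 50.0 * safety_share * output)

def planner_share(output):
    """Share chosen when the full social benefit of safety is internalised."""
    return min(0.5, 0.001 * output)

def selfish_share(output, actors=1000):
    """Each actor captures only ~1/actors of the benefit of the safety they
    buy, so in equilibrium everyone free-rides and spends ~1/actors as much."""
    return planner_share(output) / actors

def survival_prob(share_rule, growth=0.03, years=1000):
    """Probability of surviving every year, as output compounds."""
    output, log_surv = 1.0, 0.0
    for _ in range(years):
        log_surv += math.log(1.0 - annual_risk(output, share_rule(output)))
        output *= 1.0 + growth
    return math.exp(log_surv)
```

With these invented numbers, `survival_prob(planner_share)` stays high while `survival_prob(selfish_share)` collapses, and stretching the horizon drives the selfish case toward guaranteed extinction. The qualitative free-riding effect, not the specific values, is the point.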

SPENCER: Final question for you. How do we navigate economic growth in a healthy way where you're getting the benefits from it but with fewer of the harms?

BENJAMIN: Ultimately, there's a spectrum here between these two models. There's this 'entirely selfish people' model, and there's this 'people really care about the world' or act together or cooperate model. And the real world is somewhere in between. And I guess what this says is, if you just let things go unchecked and no one in the world tries to do good, then we're just going to end up in pretty horrible situations. We're going to do horrible things to animals. We're going to do horrible things to the future. But if you end up in a situation where you can get people to cooperate (maybe that's by governments regulating, maybe that's by people trying to do good with their careers and help other people, and actually being willing to do something which helps the world as a whole), you can increase the amount of spending that's on existential risk. But it's not just about spending; that's an oversimplified model. You can increase the amount of good that we're doing as a whole, and provide this way for cooperation to enter into this growth-y thing. Shockingly, I'm reaching the same conclusion that economists always reach, which is, you need some kind of cooperation mechanism to get people to work together to reduce negative externalities. I'm not saying anything more than that, really. I'm just saying that some negative externalities (to me, factory farming and existential risk in particular seem really big and quite worrying) are large, but that doesn't mean there aren't ways of solving them.

SPENCER: Benjamin, thanks so much for coming on.

BENJAMIN: Thanks so much for having me. I really enjoyed it. Thank you.




