April 20, 2023
What is charity entrepreneurship? What sorts of incentives pull charities away from their stated goals? Why is Effective Altruism even a thing when it's already the case that most charities probably try to be as effective as they can be and probably use evidence of some kind to move towards that end? How diverse are the value systems in the EA movement? To what extent should charity funders diversify? Under what conditions does expected value theory break down? Is it possible to be too altruistic? Have too many EA orgs moved away from more traditional, near-term causes to pursue long-term causes? How frequently should charities switch projects? What is foundation entrepreneurship? What's the best advice to give to a non-EA person who wants to do some amount of good in the world?
Joey Savoie wants to make the biggest positive difference in the world that he can. His mission is to cause more effective charities to exist in the world by connecting talented individuals with high-impact intervention opportunities. To achieve this, he co-founded Charity Entrepreneurship, an organization that launches effective charities through an extensive research process and incubation program. Prior to Charity Entrepreneurship, he co-founded Charity Science, a meta-organization that increased the amount of counterfactual funding going to high-impact charities. Subsequently, he co-founded Charity Science Health, a nonprofit that increases vaccination rates in India using mobile phones and behavioral nudges. He has given lectures on various aspects of charity entrepreneurship and Effective Altruism in Oxford, Cambridge, Harvard, Yale, EAG London, EAG San Francisco, Berlin, Basel, Vancouver, Stockholm, and Oslo. Learn more about Charity Entrepreneurship here, and learn more about Joey here.
JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast and I'm so glad you've joined us today. In this episode, Spencer speaks with Joey Savoie about charity entrepreneurship and expected value theory.
SPENCER: Joey, welcome.
JOEY: Hi. Great to be on the show.
SPENCER: So you're someone who's really devoted their life to making the world better and doing it in a particular style through charity entrepreneurship. So let's start there. Can you tell us about what charity entrepreneurship is and why should people be doing it?
JOEY: Yeah. So I think this is one of the most exciting career paths that very few people consider. There's a huge social enterprise movement and a large for-profit entrepreneurship movement, but charities don't just pop out of the ground; they need to come from somewhere. And I don't think most people have that on their radar as a thing to do or a career path to pursue. You probably didn't think of being a charity entrepreneur, even though we're both pretty entrepreneurially orientated.
SPENCER: So why charity entrepreneurship as opposed to other career paths?
JOEY: I think the main contrasting variable here is that as a charity, you can focus exclusively on having impact. So with social enterprise, you get the benefit of maybe getting to a larger scale, if you can find a for-profit mechanism that ramps up. But you're constantly pulled in two directions between that for-profit motive and actually doing good. The charity sector, on the other hand, is generally a much less competitive market: not everyone is striving to make the most impact or the most difference, and probably less talent on average is drawn to it, partly because it's a neglected field. So I think if your sole goal, or at least your primary goal, is having an impact, charity entrepreneurship offers a really large advantage relative to the other paths.
SPENCER: Yeah, it's really interesting that you say that, because the way I think about it is that every system has messed-up incentives, and the question is just which set of messed-up incentives you want. So if you're doing a for-profit company but you want to benefit the world, there's going to be this tension between trying to benefit the world, trying to make a profitable business, and figuring out what you can pitch to investors successfully. So you have things pulling in different directions. And you might think with a charity, you don't have that; you can just focus on helping the world. But I think that's not true. And I'm curious if you disagree, because I think that a lot of charities feel intense pressure to please donors: essentially, a charity can grow and continue to exist indefinitely as long as it pleases donors, whereas if it just focuses on helping people but not on pleasing donors, it may actually run out of money.
JOEY: Yeah. I think you're absolutely right on this. Donors do provide a big pressure point of incentives. Some donors are very impact-orientated. So if you're looking at the global health space, and you look at someone like GiveWell (who is just doing cost-effectiveness analysis and supporting the most effective charities), then the incentives are aligned, because the donors are also highly impact-orientated in the same way. If you're looking at a cause area where there are fewer impact-orientated donors, then you do have a bit of this dual pressure. Although the bar for donors to find your idea acceptable or exciting is a lot lower than the bar for profitability in social enterprise. You have a bit more room to focus on impact, even though, if you're in a non-impact-orientated cause area, you're gonna get a bit of pushback.
SPENCER: Yes, it's interesting, because I think what you've done (as far as I can tell) is not only that you are trying to align incentives by doing charity entrepreneurship but you're trying to align incentives by having donors that are really impact-focused, which allows you to really think about, “Well, if we're really doing good, we'll probably get it funded. And so we don't have to have these two different things in mind. We can just focus on this one thing.” Does that sound right?
JOEY: Yeah, I think that's totally true that the charities are more aligned — you can have very strong-willed founders who are going to keep them aligned to impact no matter what — but it's certainly easier for charities to be aligned if you also have a growing network of effectiveness-orientated funders who are gonna nudge things in the right direction as well.
SPENCER: When it comes to really large charities, there can be ones that do a ton of good in the world. But I tend to have a little bit of default suspicion that's like, “Hmm, if this is one of the biggest charities, what's more likely: that they're really focused on having the most impact, and they just happen to find lots and lots of people who want to support them, or that what they've really optimized for is how to grow, how to convince more and more donors, and how to spend more and more money convincing donors to give them more money?” So I'm curious whether you share this suspicion I have about these really huge charities.
JOEY: Yeah, I think that's often the case. And I think this is a little bit of a death-by-a-thousand-cuts type of scenario, where it's not like the founder of that charity said, “Oh, I care about having the biggest charity ever,” and then they lost their way. But as you have more and more employees, they start getting individualized incentives. It's very easy for a charity to get siloed and start to think of the charity's success as impact in and of itself — which are actually really different things; a charity could have very little impact but be very, very large in terms of budget size — but I do think if you look at the 10 best charities in the world, they are on the smaller side, and they are on the more narrow and focused side. And there are even conceptual reasons to buy into this. If you think, “Oh, a big charity is doing 25 things. What if there was a charity that was doing the best of those 25 things exclusively?” That's just going to out-compete it from a cost-effectiveness perspective.
SPENCER: Yeah, it's interesting. It's like the more different activities you do, the harder it is for them all to have a high impact. It's hard enough to find one high-impact thing to do, but 25? Oh my gosh! How on earth would you do that? So if a charity is doing 25 things, it's harder to see how it's being really hard-nosed about each of them.
JOEY: Exactly. It's easy to get lost along the way. I think this is part of the reason why, when we set up Charity Entrepreneurship as kind of an incubator, we did it that way instead of going down the more traditional, Oxfam-style model where you just end up with more and bigger departments. If the charities are in fact independent, then they can fail in a different way than they would if they were latched together as part of a bigger organization. You can let the market (insofar as it is actually selecting for impact) actually select for the outperformers.
SPENCER: Right. So it's like, if you are going to do many things (like you do at your organization), you want to do it in a way where you're kind of trying many things, putting them out there, but then have some selection pressure. Do you want to describe your model and how that works, and why you think it's a good model?
JOEY: Yeah, so a really short way to describe it is that it's like Y Combinator, but for charities. So we get co-founders (it's actually a little bit earlier stage even than YC, so pre-idea, pre-co-founder) and pair them together over the course of two months, train them up in skills, and knock down all the stupid barriers that stop good charities from starting (like logistics, finding their first grants, this sort of thing). And then we push them off into the world to run a good organization. And this is a pretty interesting model because it's very, very non-possessive on our part. The charities are totally free once they go through the program; it's very much an education-style program. So this attracts a really high level of talent, because they still get all the autonomy and benefits of entrepreneurship, but also all the benefits of having a more established organization that can do things that would take a long time for an individual. For example, just registering as a charity can take forever, or trying to get your first grant can take forever. But if we institutionally can just solve that and let a really great charity get started in, say, three months instead of three years, that's a huge win.
SPENCER: What do you see as your biggest value add? Is it that you're causing people to start charities who otherwise just wouldn't have taken the plunge? Is it that you're helping them pick ideas that are, on average, better than the ones they would have had? Or something else?
JOEY: Yeah, I think our bar for starting charities is really, really high. Unlike in the for-profit sector, if a charity is good at fundraising, it can exist for a long time. And if you're just pulling funding from other charities, you really have to beat the median charity to even be worth existing. So I think we push them up the quality gradient, and that ends up being really important, because a charity that's twice as effective can end up actually making a positive difference, where something that's half as effective might be net negative for the world. In terms of the value add that the founders see, they normally think it's funding, and then end up realizing that it's actually more to do with the co-founder, or the training, or the speed boost; a lot of the time they would have ended up fundraising successfully anyway, given the level of talent these people are coming in with.
SPENCER: Do you think they would have started a charity anyway most of the time? Or do you think there's something about giving them the structure and process that nudges them to do it?
JOEY: No, I think most of them would not. Maybe out of a group of 20, two would start a charity. I think part of it's just a licensing thing: how do you know when you're qualified to found a charity? There's no degree you can get. Now we've written a handbook, but before that, there was no real guide. So it matters to have someone in authority who has started other successful charities saying, “Yes, you can do this; you've been selected using a fairly rigorous and thoughtful process.” And people look at the outcomes of other charities that have gone through the program. I think that gives them a lot of confidence to actually do something that otherwise seems very, very intimidating or near impossible.
SPENCER: So what advice would you have for people who are thinking, “Hmm. Maybe I want to start a charity,” but they're not sure?
JOEY: Yeah. So I think one of the first litmus tests is, “Do you have the autonomy to get yourself through independent projects?” So, can you complete an online course if you're keen to do an online course? Because if you can't, that's gonna be really tough, because there's no manager. Once you're starting a charity, you have a board, but they're often very softly involved. So I think that's the first benchmark. But in general: are you entrepreneurially minded? Do you like the idea of starting something new and lean that way, but also have impact as a really strong drive? Ideally, even the top drive, because if it's just one of many drives, there are a lot of ways the charity can go astray. Our website, Charity Entrepreneurship, is a good resource, as is looking generally at the resources that other charity evaluators put out. I love a lot of the writing that GiveWell has done on this topic, in terms of what makes a good charitable organization and what we'd want to see in the sector as a whole.
SPENCER: In the startup world, people often say that about 90% of startups fail, which seems roughly accurate — obviously, it's a little tough to know what exactly counts as a startup (is someone just working on the weekend really a startup?), but it seems roughly accurate — and I'm curious what you would say is true of charities. Do you think it's a similar figure? Do you think it's different?
JOEY: It's really tricky because charities don't fail; they just fail to grow. So a big percentage of charities will just limp along forever at some tiny amount of budget.
SPENCER: Like one person being able to work part-time, that kind of thing?
JOEY: Yeah, exactly. One person, maybe even volunteer-run or something like that. If you actually look at the charity statistics, there's a crazy number of charities, but a huge number of them are under $50,000 a year of net budget. So that's less than one full-time staff member in most cases. That happens to a lot of charities. CE (Charity Entrepreneurship) charities have a pretty good success rate right now, but I think it is because we give them quite a lot of momentum out the door. And even by our internal assessment, out of a group of, say, five charities, two of them will be kind of big successes, like en route to becoming recommended by a charity evaluator, or a field leader in the eyes of other informed actors in the field. Two of them we're unsure about: they've made some progress, but maybe stalled out, or slowed down, or aren't experiencing the exponential growth that you'd hoped for. And then there's one that explicitly fails and shuts down. But I do think that explicitly-fail-and-shut-down mode is just much less common in the charity world.
SPENCER: Got it. Okay, so that seems like a pretty good result. Actually, it reminds me a little bit of Y Combinator. This was a number of years ago, but they did an analysis of all their companies: how many of them had succeeded? I think they were measuring success as whether the founder made more money than if they had just worked in software engineering instead, through selling the company or something like that. And I think that number was something like 50% were successful by that metric. And they're considered the best startup accelerator in the world. They have this incredible ability to choose talent because everyone wants to work with them, plus all this experience, and then they help the companies out. So these companies have sort of everything going for them, right? So while the base rate of failure might be 90%, if you've gone through Y Combinator, it's 50%. And it sounds like your numbers are: once they go through Charity Entrepreneurship, maybe there's something like a 40% chance of really big success, that kind of thing. Does that sound right?
JOEY: Yeah, I think that's right. And I think the advantage that charity founders have — which is why it's not a 90% failure rate — is that it is just a less competitive market. The number of people who are seriously trying to start high-impact charities is quite small. So that does give you more ownership over the market, relative to the for-profit space, where there are a lot of people incentivized to make a profit or start something. And I think that's probably why that (I don't know) yellow range exists: the charities in the middle sit there more saliently, as opposed to going up or out.
SPENCER: I think I'm a little confused by that, though, because it feels to me that while it is true there are way fewer people really trying to have super high impact in the charity space (and it's not that competitive in that sense), it seems like there is intense competition over charitable dollars. In other words, people are constantly being bombarded with, “Donate to this, donate to that,” “We will save someone's life for $1,” all kinds of really ridiculous claims that are actually just not true. So how do you actually stand out to donors? Is it just that there are enough hard-nosed donors who really care about impact, so that if you are really focused on impact, it's not that hard to raise money? Or do you think I'm wrong, and the general charity space isn't that competitive for dollars?
JOEY: A bit of both. So you definitely can orientate towards higher-impact donors, and that's great. But I also think the average skill level in the nonprofit sector is just not the same, especially if you're comparing to software startups or somewhere known to draw really high-octane talent. It's just not seen as a career path. It's often the nice people who go into charity, not the most competent people. I also think, in general, people don't understand the nonprofit market that well, so starting new charities and competing with old actors just isn't portrayed anywhere as a viable option. It's not as though so many charities are starting up that it's competitive to be a new charity startup in the space.
SPENCER: When people join your program, what do you see as some of the common mistakes that they make?
JOEY: A lot of the mistakes are common across the entrepreneurship world: picking the wrong co-founder, or picking a co-founder who is too similar to you skill-set-wise, so you're constantly competing over the same sorts of things; that happens a lot. Sometimes people lock in too early to an idea that's not really a great fit for them; that sort of thing can happen. We select pretty hard for entrepreneurial orientation, but even still, I would say about 25% of people going through the program realize that it's not actually a great career fit for them: they want something that's a little bit less risky, or lower pressure, or less intense. So I think that happens too, although often that's self-selection.
SPENCER: Why do you think there aren't more groups in the world that are really focused on impact? It seems almost a little ridiculous that effective altruism can even be a thing, because surely if you want to improve the world, you'd rather do it more effectively rather than less effectively. And the idea of using evidence and reason is not new. There are lots and lots of groups that are interested in evidence and reason and are trying to figure out how the world works. So yeah, I'm just curious: where do you think this low-hanging fruit even comes from?
JOEY: Yeah, I have a bit of a different take on this. I think that almost everyone would agree with the one-sentence version of effective altruism: do as much good as you can with the resources you have available. Like, sure. But that's not really what effective altruism is about. There's all sorts of other things that come with that. There's an implicit ethical assumption. There's an implicit epistemic assumption about how to handle evidence and this sort of thing. So I think there's tons of movements that would say, “Oh, we are doing the most good or we are trying to do the most good and accomplishing it this way.” I think what's different is the kind of assumptions that play into that. There'll be a lot of (say) communist groups that would say, “We are doing the most good. We think this is the best thing for the world.” But they're obviously coming to that conclusion with kind of different baseline assumptions and epistemic views, and maybe even ethical views in some cases about what the best world actually looks like. I think EA is a bit different in that it's, in theory, open to new ideas coming in. So it's closer to a question than an answer. But there's still a lot of ancillary aspects.
SPENCER: Right. So maybe you could pinpoint some of the key differences. So you've got like, a group of communists who say they're doing the best thing in the world, and you have a group of (let's say) environmental activists who are saying they do the best thing in the world. How is effective altruism really differentiating itself?
JOEY: I think the question-versus-answer framing is good. Communism is an answer to how to do the most good, and even environmental activism is an answer, in terms of a broad area. Effective altruism, in theory, is not an answer. It is a question that one could update on. So, could the whole EA movement pivot to a new cause area? That seems a lot more probable than all of communism pivoting to a new political system if they somehow learned libertarianism was the best. The way the communist movement is structured, someone would just leave the communist movement and go join the libertarian movement; there wouldn't be room for shifting within the movement. So I think that cross-cause aspect is pretty different. I think there is more of an accepted plurality of value systems particularly, but also of epistemics, to some extent.
SPENCER: Oh, really, that's interesting. You think in effective altruism there's a greater plurality of value systems? Could you elaborate on that?
JOEY: I think there's more of a plurality. I think a lot of EAs lean consequentialist, in some way or another, so caring about the ends of things. But within that flavor of framework, whether it's kind of your classical utilitarian, or you care a lot about suffering, or this sort of thing, I think there is quite a lot of differentiation within that. Of course, there are thought leaders who affect local communities/circles and this sort of thing. But you actually could be an EA with multiple different ethical systems. And I think some other movements that maybe claim to be doing the most good would require quite a narrow set of viewpoints, or values, or beliefs to end up coming to that conclusion.
SPENCER: That's interesting. It's funny to me because I'm kind of trying to push EAs to be a little broader in their values. But I actually feel like they're not that broad right now. It feels to me like a lot of EAs sort of accept broadly utilitarian thinking. Do you disagree with that?
JOEY: No, I agree. At least consequentialism, I think, is definitely there. And I am on the side of broadening even further. I think there are lots of plausible ethical views, and doing things that are positive on a bunch of ethical views is a lot more epistemically safe and thoughtful and careful. So I do wish there was even broader value pluralism. But at least many stated values are accepted. I guess you're right that, if you look at the surveys, it does tend to converge on pretty consequentialist-leaning views. But the fact that there's a bunch of different religious groups within EA is quite cool and shows that there's a wide variety of people who believe it from that angle. There's quite a diversity of political opinions as well, and these sorts of aspects. So when it cashes out, maybe consequentialism cashes out to a few different outcomes, but there are some sub-variations in that which are pretty strong.
SPENCER: You mentioned that it's more consequentialist rather than utilitarian. That also surprises me, because the flavor of consequentialism that almost all the EAs I talk to resonate with seems to be utilitarian thinking. So do you see other, non-utilitarian flavors of consequentialism as being common in EA?
JOEY: Not stated as such. I think a lot of the people who study philosophy end up with a pretty specific, hardline perspective. A lot of people who study philosophy won't just end up utilitarian; they'll end up some specific flavor of utilitarian. But if you look at a lot of people who are following softer moral principles that aren't quite as explicitly laid out, or who have some sort of moral parliament (which is quite a common thing I've heard people talk about), they end up having seats for a pretty wide range of views, including even non-consequentialist views, though often the consequentialist seats dominate. A lot of people talk about the veil of ignorance and take that argument quite seriously, which is often framed in quite a consequentialist way, but it is not an explicitly utilitarian framing.
SPENCER: Right. So the veil of ignorance is the idea that you imagine you don't know which person, or even which conscious being, you're going to be in society (maybe you'll even be an animal or something like that), and then you think about what choices you'd make under that veil, where you don't know who you're going to be. Is that right?
JOEY: Yeah, exactly. And I think the idea of the fundamental value being happiness is questioned at some points. Maybe the fundamental value is reducing suffering (that is a flavor of utilitarianism), but even how important fairness is, or helping those who are worse off, this sort of thing, I think is actively debated. Maybe not at a high philosophical level, but in pragmatic actions.
SPENCER: Got it. So let's talk a bit about how you pick the causes that you actually get your charities to focus on. You want to tell us about the process you use for that?
JOEY: Yeah. So under the top-line idea of wanting to do the most good, one thing that's interesting about Charity Entrepreneurship is we can cycle a little bit more aggressively than other organizations that are really specifically focused on, say, just animals or just mental health. If anything, it actually benefits us to cycle between equally promising cause areas, because there are new pools of ideas that we can research and new pools of people who are excited. So we have these kind of tier-A cause areas: areas where we think the best ideas are going to look equally competitive with each other from plausible moral and epistemic viewpoints. What this has led to historically is that we've done animals sometimes and global poverty sometimes; those are both kind of classic EA areas that we do quite often. But we've also done mental health. Pandemic preparedness is a thing we're looking at right now. We've done family planning. And we've done meta, so cross-cutting, overarching projects; Charity Entrepreneurship itself would be considered a meta-project. So we're able to test out these areas, then see how those charities perform in terms of our estimations of their impact and, eventually, external reviews when they get big enough, and have a comparison that way. But maybe that's why I end up in a more pluralistic part of the EA movement, because there are lots of different cause areas engaged through the program.
SPENCER: Yeah. So why is it that you end up cycling that way between different broad cause areas, rather than, say, picking the one that you think is the single highest expected value? I really like your approach; I'm just curious how you got there.
JOEY: I think we're just quite modest about some of the difficult trade-offs. Say we take the most obvious one in the EA space: the animals-versus-humans trade-off. How do you prioritize those? There are a bunch of assumptions that go into that in terms of ethics, like how much you ethically care about a cow versus a human, this sort of thing. But there are also epistemic questions: there are not that many RCTs on how to help cows, and there are quite a lot of RCTs on how to help humans, and so on. And viewpoints that we regard as very plausible, that we put high credence on, could value these areas equally, or could value either one higher than the other. And then it's kind of a peaks-versus-medians question. So, do we think the peak of global poverty is more impactful than the median in factory farming, or vice versa? And if we think that the peaks of one cause area supersede, say, the 10th best charity we could start in another area if we repeated that area multiple times, then it makes sense to do more cycling or rotation. So it is something we're always updating. And it's not that we view every cause area as exactly identical. We definitely don't. But I do think there's a slightly broader menu of cause areas that seem quite plausible under believable ethical and epistemic stances.
[promo]
SPENCER: What do you think of the principle of going all in on the one thing that you think is best? Because I think some people in the effective altruism community support that principle, at least in the abstract, even if they say, “Well, in real life, there are a lot of other factors, so maybe you shouldn't go all in.” But in principle, they support it.
JOEY: I think it depends on your role within the space. So I like our charities going all in on their cause area; they're direct implementation actors, and they need to really be passionate about that. But CE benefits the world a lot more by being a bit more pluralistic and a bit more open to multiple cause areas as they come in. So it depends on your role: if you're a funder, I'm quite sympathetic to worldview diversification as a strategy, especially for a large funder. Whereas if you're an individual, especially one who's going to be working in a career over a long time, there can be some benefit to selecting your cause area. But I do think a lot of these things are much closer together than people imagine. If you take something like poverty versus mental health, it could come down to how good you think subjective well-being is as a metric versus how good you think disability-adjusted life years are as a metric. And that's quite a difficult thing to come to a judgment on. I think a lot of people don't even know that's the implicit assumption they're making when they pick one of those cause areas.
SPENCER: Yeah, when I think about a really difficult philosophical problem like that, one that the choice hinges on, my strong intuition is that we should split across it: put some resources into one, some into the other. Again, like you're saying, not necessarily every individual person (it might be fine for an individual to choose one pathway), but as a broader movement, you kind of want some diversification there. But other people push back and say no: if you think one has even slightly higher expected value, just go all in on that, because you'll end up with higher expected value. So I'm curious where you diverge on that. Is it that you think, in practice, it actually won't get you higher expected value, even though naively it would seem to?
JOEY: That's right. I think the marginal returns taper off a lot quicker than people imagine. Say I'm a large funder with $100 million to donate, and I pick a really narrow area; say I think mental health is the best area. The first $10 million in mental health is going to be significantly more cost-effective than the next $90 million. So the bar keeps moving higher and higher for mental health, in terms of how much better it has to be than everything else to compare. And I think the same is true of a lot of individual actions. Take CE as an example. We're founding charities every year. So if we only founded mental health charities (five the first year, five the second year), we'd now be looking at ideas that were maybe the 20th best idea in mental health. And it's not like the field has changed that much, because it's only been two and a half or three years, and we'd be comparing that to the very best idea in some other cause area. So I do think cause areas can differ by a lot, but I also think a lot of the differences are exaggerated, or rest on trade-offs that seem kind of speculative or not evidence-based.
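To make the diminishing-returns point concrete, here is a minimal sketch in Python. The logarithmic return curve and the 20% effectiveness gap are illustrative assumptions, not figures from the conversation; only the $100 million budget echoes Joey's example.

```python
import math

def impact(dollars_m: float, effectiveness: float) -> float:
    """Total impact of funding one cause area, assuming log-diminishing
    returns: early dollars count for much more than later ones."""
    return effectiveness * math.log1p(dollars_m)

BUDGET_M = 100  # the $100 million from the example above

# Assume cause A looks 20% more effective than cause B at the margin.
all_in_on_a = impact(BUDGET_M, 1.2)
fifty_fifty = impact(BUDGET_M / 2, 1.2) + impact(BUDGET_M / 2, 1.0)

print(f"All in on A: {all_in_on_a:.2f}")  # ~5.54
print(f"50/50 split: {fifty_fifty:.2f}")  # ~8.65 -- the split wins
```

Under this assumed curve, splitting beats going all in even though cause A is better at every funding level, which is the sense in which the bar "keeps moving higher" for a single favored area.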
SPENCER: So what drives that steep diminishing marginal return that you're witnessing? Is it that it's just hard to find that many ideas that seem really promising? Or it's hard to find enough talent to put into those ideas? Where are you seeing diminishing returns quickly?
JOEY: Talent is always the hardest. I think ideas diminish more slowly than talent does; there are just only so many people interested in a given cause area, and you saturate the top percentile of talented people pretty quickly. And then you start to get a lot of mediocre projects — projects run by people who aren't as good a fit, and ideas that aren't as good a fit.
SPENCER: That's really interesting. But would you say that if you didn't experience that diminishing marginal return, you would go all in? Is it really just that you think diversifying actually leads, in practice, to higher expected value?
JOEY: I think it's not just that. I probably also hold more moral-pluralist principles in general. For instance, there's this idea of the moral parliament, where you have a bunch of different seats in your ethical parliament, and you can assign them to different ethical views. I think a lot of the time, it's a very small step away from what your majority parliament wants to really significantly benefit other ethical and value systems. And that can just help things: if you're modest at all, if you put any significant chance on being wrong, or any significant chance on those other viewpoints being right, it often just does not take that much to make sure you're not violating other norms. I think a lot of utilitarians buy this: “We shouldn't do things that violate other ethical systems or go against common-sense ethical norms.” But I think we take that even more seriously than most people.
SPENCER: Do you have an example you could give of how your moral parliament actually works in practice?
JOEY: So say there's something controversial. One thing that animal advocates debate a lot is: what if there were some super high welfare meat? So, some cow raised on a super happy farm. Would you eat it then? There are some ethical systems, maybe a classical utilitarian one, that would suggest, “Yes, you should eat that cow, because otherwise it wouldn't have existed. A cow with a happy life; that's kind of plus-one happiness.” But there are other systems that might have a general deontological rule, like don't kill, or don't raise up something that's subservient, or something like this. And I might not give those viewpoints a huge number of seats in my moral parliament, but they're very, very vocal about that issue. So they're willing to make trades on many other things to make sure that you don't eat that cow in that situation. As it happens, pragmatically it's very hard to find ethical meat anyway. But I can imagine this sort of thing making quite a difference under pluralism. I think the cause area thing is another example. Why do you favor diversification as a strategy?
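Here is a toy encoding of the moral parliament as Joey describes it, applied to the happy-farm-cow example. The seat counts and scores are invented for illustration; they are not Joey's actual weights.

```python
# Each view gets seats (credence) and scores the action "eat the
# happy-farm cow" from -10 (strongly opposed) to +10 (strongly in favor).
parliament = {
    "classical utilitarian": {"seats": 0.60, "score": 2},   # mildly in favor
    "suffering-focused":     {"seats": 0.25, "score": -1},  # mildly against
    "deontological":         {"seats": 0.15, "score": -9},  # vehement: don't kill
}

def verdict(views: dict) -> float:
    """Seat-weighted score; a negative total means the parliament objects."""
    return sum(v["seats"] * v["score"] for v in views.values())

print(f"{verdict(parliament):+.2f}")  # -0.40: the small but vocal
# deontological bloc outweighs the mild utilitarian approval.
```

The design choice this illustrates: a minority view with few seats can still win on the issues it cares intensely about, which is exactly the "very vocal about that issue" dynamic in the example.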
SPENCER: I think I'm just not sold on expected value theory, which I know is maybe a strange thing to say, because everyone takes expected value theory for granted. I think it's very strong as a theory if you're talking about making small bets with one unit of good (where it's preset that this is the only thing we care about in this scenario), and you have actual, real probabilities. Then you can prove really nice properties of maximizing expected value: you can prove that you will end up doing really, really well, that you'll maximize the amount of money you have, and so on. But then you start loosening those assumptions. You start saying, “Well, is there really only one thing I care about?” You start saying, “Well, are these really probabilities, or are these just numbers I made up that are between zero and one?” And you start talking about really large bets relative to the size of your pot, not small bets anymore. Then I think the theory is very hard to defend. Suddenly you're outside the territory where you have nice theorems. And then it's like, why do you believe in expected value theory?
JOEY: Yeah, that's super interesting. I think I also harbor some skepticism about expected value theory. I believe it as one tool of many; that's often the framing that I use. So if expected value, like a naive expected value calculation, is pointing in the direction of something being impactful, I think that's great, and that is an evidence update, and you should take that seriously. But I think if you have, say, multiple other forms of evidence pointing the opposite direction, expected value at the end of the day is just one particular way of modeling the world. And it can be wrong and fallible and prone to error, like every other model. So I guess I like the convergence of many models, as opposed to something that looks kind of astronomical on one model and then kind of mediocre on a lot of other models.
SPENCER: I agree, and expected value can have this tendency to produce really large numbers in some circumstances, where you're like, “Well, it's so big, it should trump all the other models because it just dominates them.” But then it's like, “Hmm, it's not very robust.” It's a little bit worrisome that it can spit out such a big number that nothing else matters, that all the other considerations become irrelevant.
JOEY: Yeah, I think you can deal with that by sandboxing. So you can epistemically say, “Okay, I have these three clusters of ways I look at the world. Say one is an expected value framework, one is an expert wisdom framework, and one is a multi-factored model (where I put weights on different things but don't multiply them together; I sum them or whatever).” Three different tools that you're using to view the world, and you want to look for something that looks solid on at least 80% of the tools you're using. I think that can bound expected value in a way that says, “Okay, no matter how good this expected value is, if it's looking bad on every other front, I probably shouldn't do it.” And I think in general, that's probably quite a good move in life. You end up at some really weird, repugnant conclusions if you're resting too heavily on one set of assumptions.
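A small sketch of this sandboxing rule: an option must clear the bar on most models before you act, so one runaway expected-value number can't dominate. The three model names and the 80% threshold come from the conversation; the scores and the 0-10 scale are invented for illustration.

```python
def looks_solid(scores: dict, threshold: float = 0.8, bar: float = 5.0) -> bool:
    """True only if the option clears `bar` on at least `threshold`
    of the models you consulted."""
    passing = sum(1 for s in scores.values() if s >= bar)
    return passing / len(scores) >= threshold

option = {
    "expected value":     9.7,  # looks astronomical on this one model...
    "expert wisdom":      2.0,  # ...but experts are unimpressed
    "multi-factor model": 3.5,  # and the weighted-factor model agrees
}

print(looks_solid(option))  # False: bounded despite the huge EV score
```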
SPENCER: Yeah, I like that way of thinking about it. And I don't mean to knock expected value too much. I think it's one of the best theories humans have ever invented. It's an incredibly powerful theory. It's just that people want to generalize: “Oh, this is an amazing theory in these limited circumstances, so it must be the answer to all problems.” And then I think a bunch of issues start cropping up. There's also this interesting side note, which is that even if something works really, really well in theory, it doesn't mean that when humans try to use it, they'll actually get the best result. An example of this would be Bayesian thinking. I think Bayesian thinking is a really good model for thinking. But does that mean you should be trying to do Bayesian update calculations and calculate the probability of the evidence given the hypothesis all the time? Does that actually get you better results? That's actually a different question, right?
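For reference, here is the Bayesian update Spencer alludes to, written out. The calculation itself is trivial; the prior and likelihoods below are made-up numbers, and producing defensible values for them is exactly the hard part in practice.

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """P(H|E) via Bayes' rule, from P(H), P(E|H), and P(E|~H)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# A 30% prior, with evidence four times likelier if the hypothesis is true:
print(f"{bayes_update(0.30, 0.80, 0.20):.2f}")  # 0.63
```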
JOEY: Yeah. Well, I'm a huge fan of instrumental rationality as opposed to epistemic rationality: being right about the world is great, but only insofar as it actually lets you accomplish your goals. And I think lots of the time, there are situations where a model might seem totally valid but later turns out not to be, or there was some sort of flaw in the math, and you have to judge it by how humans actually use it. And humans are a messy network of neurons that make a lot of mistakes whenever there is room for mistakes.
SPENCER: Yeah, and I think one way we can improve these kinds of theories, like expected value theory and Bayesian thinking, is to find better ways to help people use them properly in real life, where they're actually getting better answers than they would without the model, which is not as easy as it sounds. One thing I tend to be a bit skeptical of is when someone shows me a model that has 50 different variables, and they do some big calculation to combine them all, and they get some number at the end. And then they're like, “Look, we have the answer.” My experience with those kinds of models, or with plugging numbers into some calculated expected value, is that I'll find some assumption in there that I just don't agree with, and then I don't understand how that affects the output of the model, like, 45 variables later. And I'm just like, I don't know what to do with this.
JOEY: Yeah, it's one of the basic mistakes that a lot of our charities make. They try to model out their impact, and they often way over-complexify the model. I always say I'd prefer a model with five inputs, each of which is really well cited, over a model with 45 inputs, half of which you completely guessed at. Robust, small models are a lot easier to error-check and actually have some sort of predictive validity, even if they don't include every single factor that you'd like.
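A sketch of what this "five well-cited inputs" style of model might look like: a deliberately small cost-effectiveness estimate. Every value below is a hypothetical placeholder standing in for a well-sourced figure, not data from any real program.

```python
inputs = {
    "cost_per_person_usd":     12.0,    # would cite: program budget
    "people_reached":          10_000,  # would cite: delivery records
    "effect_per_person":       0.15,    # would cite: an RCT estimate
    "attrition_rate":          0.25,    # would cite: follow-up data
    "counterfactual_discount": 0.40,    # would cite: baseline coverage
}

effect = (inputs["people_reached"] * inputs["effect_per_person"]
          * (1 - inputs["attrition_rate"])
          * (1 - inputs["counterfactual_discount"]))
cost = inputs["cost_per_person_usd"] * inputs["people_reached"]

# Five inputs: small enough to hold in your head, error-check, and argue about.
print(f"Effect units per $1,000: {1000 * effect / cost:.2f}")  # 5.62
```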
SPENCER: Yeah, I think we totally agree on that point. There's something really powerful about being able to keep a model all in your head, having it be simple enough. Now, of course, that's not always possible. We can encounter things in the world where there really are 100 important variables driving the behavior. But fortunately, for a lot of things, you can peel away most of the variables and home in on the ones that matter. And if you can keep it in your head, then you can reason about it, it's easier to find flaws with it, you can explain it, and then someone else can critique it. You get all these nice benefits.
JOEY: Yeah, I think this is actually an argument against utilitarianism in some ways, and an argument for something like rule utilitarianism: if you have a set of heuristics that you can reliably use and go back to, even if the heuristics are not always right, that might be better than having five times as many heuristics that you inconsistently apply, or forget, or use the wrong one. So, I don't know, maybe it's just a drive towards parsimony. But I like lists of threes and things that are very simple to break down and remember.
SPENCER: Well, relevant to that, I know you have a set of principles for yourself. Do you want to walk us through those and how you think about that?
JOEY: Yeah, I do. So I'm pretty quantified in general, and I wanted to think about what underlying principles were leading to all the actions I was taking. I was able to break it down to three top principles. I call them the three H's: helping, happiness, and health. Helping is a broad catch-all for doing good for the world, doing good for people outside of myself. Under each of those, I have three pillars, and under each of those, three heuristics. And I score myself on how well I'm doing the heuristics; I color-code them and review it every quarter. Those heuristics, in theory, if I do them well, will lead to that pillar being done well; if I do those three pillars well, that will lead to the top-line principle being done well. So yeah, it's three sets of threes, threes all the way down, but it's short enough to be memorable, and it's action-relevant enough at the most specific level to be something quite concrete I can do.
SPENCER: That's really interesting. So it's probably too much to go through all of them. But why don't you walk us through just a couple of them and how they work?
JOEY: Yeah, so I can do one that's maybe more concrete. Say I'm looking at health as one of my principles, and I have healthy habits as a pillar of that: if I have a lot of good healthy habits, that's probably gonna lead to my health being better. One of those heuristics might be shaping my environment, so making good habits easy and bad habits hard; that will lead to those habits being better. Another one is avoiding particular big bad habits, so keeping smoking, drinking, drugs, and BMI low. If I check those boxes, and if I check all three, that leads up to the eventual goal of health. And interestingly, two of my goals are actually satisficing goals. For both health and happiness, I just want to be healthy enough or happy enough. I don't need to be the healthiest person ever or the happiest person ever. So a 90-10 kind of approach to this can get you a lot of the benefits quite quickly.
SPENCER: Whereas I guess you're gonna say the helping goal is sort of unbounded, like you want to help as much as you possibly can?
JOEY: Correct. That's the one I'm infinitely ambitious about, which probably isn't great for the happiness goal. They often say happiness is your success divided by your ambitions, and if your ambition is infinite, that's kind of tough. But that is the top-level goal; that's where I put 90% of my weight. I am overwhelmingly focused on helping people versus everything else. That's the end goal for me. So with that one, yeah, I will never be satisfied. That will just be a number that I keep wanting to make higher, whereas if I were stably above an eight out of 10 on happiness, or stably above an eight out of 10 on healthiness, those would both be fine.
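A toy encoding of this satisficing-versus-maximizing distinction, simplified to one level rather than the full three-by-three-by-three structure. The eight-out-of-10 thresholds echo Joey's numbers; the sample scores are invented.

```python
principles = {
    "helping":   None,  # maximizing: no threshold, always push higher
    "happiness": 8.0,   # satisficing: "good enough" above 8 out of 10
    "health":    8.0,
}

def needs_attention(principle: str, score: float) -> bool:
    """A maximizing goal always needs attention; a satisficing goal only
    when it drops below its threshold."""
    bar = principles[principle]
    return True if bar is None else score < bar

for name, score in [("helping", 9.0), ("happiness", 8.5), ("health", 6.0)]:
    print(f"{name}: {needs_attention(name, score)}")
# helping: True (never satisfied), happiness: False, health: True
```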
SPENCER: How do you think about the trade-offs there? Maybe it turns out you're the sort of person who can work really, really hard to help the world and not be too unhappy. But imagine there is a significant trade-off, where you're actually going to get much less happy by helping more. I'm just curious how you think about navigating that. Not that there are easy answers there.
JOEY: I think often it aligns more than people imagine. I think if you are setting helping people as your fundamental purpose, and then you're accomplishing that well, that can give you a lot of happiness. But I do think there are explicit trade-offs. Like, if my top goal was happiness instead of helping, I do think my actions would change quite concretely and quite significantly. So I do think there is a difference there, and probably my happiness is taking a bit of a hit. But it's not as big a hit as people might perceive. I think, a lot of the time, people with a really strong transcendent purpose are actually still happier than those who are explicitly happiness-seeking. Sometimes the way to get a goal is to kind of go at it sideways.
SPENCER: If your happiness got worse, do you think you would sort of ramp back the helping, if you thought it could help your happiness, until you got to a certain amount? You mentioned satisficing, which is usually where you say, “Oh, I want to have enough of this thing; I don't need to optimize it further.” Is that kind of how you think about it? Once you're over a certain threshold of happiness, you're like, “Okay, good. Now I can go focus on helping”?
JOEY: Yeah, I think if you get too low happiness-wise, it starts to affect your productivity, so the helping would almost be eating its own tail; in that sense it really matters. But if I could press a button and sacrifice a lot of my happiness knowing that more helping would get done in the world, I'd press that button all the way till I'm super unhappy; there wouldn't be a limit on that button. I think that's a really big difference from other people; I think it's much more common to have some sort of happiness threshold you need to pass before you can start outputting good work practically.
SPENCER: So you would essentially annihilate all your happiness to double your helping of the world, that kind of thing?
JOEY: Yes.
SPENCER: And why is that?
JOEY: So back to the veil of ignorance. I take the veil of ignorance quite seriously. I happen to be in a really good position, where I'm happy and able to do lots of great things and have access to lots of resources. But I don't see any reason other than luck why I got that, compared to being any other kind of sentient being in the world. I could have been born into a much, much worse situation. And yeah, if zeroing my happiness causes 1,000 other people to be much happier, that just seems like what I would want people to do behind that veil. So that's the action I would take.
SPENCER: The way you're describing that sounds almost Kantian, actually, in the sense that there's a universalization principle: “This is how I would want everyone to behave, and therefore I'm going to behave that way.” Does that resonate, or am I getting it wrong?
JOEY: No, I think that's right, although I view it as more like a rule-utilitarian principle. The veil of ignorance, I think, is quite a good way of quickly proxying what might be the ethical thing to do, although I can imagine edge scenarios where it doesn't actually make sense and falls through. In those cases, I wouldn't follow it. But it's a good approximation for how to take more ethical actions and not see yourself as that special or of superseding importance, which gets into identity stuff a little bit, I guess.
[promo]
SPENCER: One thing I wonder about with regard to your thinking on this is, where do you stand on this idea of objective moral truth? Do you think that there is an objective answer to what's good and it's just hard to know?
JOEY: Yes. It's very weird; I've seen like 50 debates on this, and I've never seen anyone change their mind. It just seems like quite a hardwired intuition for a lot of people.
SPENCER: I changed my mind. But it wasn't from debates; it was more just from thinking about it. I used to believe in objective moral truth in college, and then I stopped believing.
JOEY: Interesting. I wonder if this is actually a similar story to mine. So I believed in it objectively, but then I talked to someone about it, and they defined it in a slightly different way. By objective, I was thinking that there's a convergence of human brains towards certain ethical principles, because we have evolved that way and that sort of thing. But I don't necessarily think there'd be convergence if (I don't know) squids took over the world or some alien species came along. What was it that convinced you? What kind of thought experiment pivoted it?
SPENCER: So when I was 18, I was reading Jeremy Bentham for the first time. And when he described his philosophy, I immediately resonated with it. I was like, “Oh, this is how I think about the world.” That's when I started identifying as utilitarian. And then in college, I started thinking more about what's really grounding this: “Why is it objectively right to increase utility in the world? Why increase happiness and decrease suffering? Am I just describing my preference here?” So I started reading the different arguments for the existence of objective moral truth, and I just was like, “I don't find this convincing.” I think I had just had this intuition that there's objective moral truth, and I didn't question it that hard at first. And then when I actually started digging into the arguments, I was like, “Oh, wait, there doesn't seem to be much there. I don't find these arguments persuasive.” So I guess I don't believe in this thing; I just feel it intuitively, and that doesn't mean I think it's right.
JOEY: That makes a lot of sense. I think a lot of it is just getting clear on the definitions of what exactly is meant. Whether it means humans converge, whether it means different evolutionary systems would converge, or whether it means there is something that, absent any sentient being, transcendently exists as a moral truth. I think as people get clear on the definitions, maybe they can end up at a different spot some of the time. But yeah, I'm on the subjective side. I don't think my morals are particularly correct, although I do have a preference for other people to have them. That's one of the differences between a moral and a preference, I guess: you want your morals propagated, but you don't really care about your preferences being propagated.
SPENCER: So then it's really interesting to me that you don't think they're objectively correct, but it seems like you would be willing to sacrifice all your own happiness, at least for a certain amount of helping. I assume you wouldn't be willing to sacrifice all your own happiness to help just a tiny, tiny bit more, but you would be willing to sacrifice all your happiness to double the amount you help, or something. Is that accurate?
JOEY: It's hard when I think about thresholds. I think it has to be more than 101% of my own happiness, but I'm not sure how much more. It might only be two or three times my net happiness before I would trade off all my happiness for that.
SPENCER: You're saying if you could take all the happiness you'd experience for the rest of your life, and give other people two to three times as much happiness as you're sacrificing, then you'd be willing to sacrifice your entire happiness? Is that right?
JOEY: Yeah, that's right, assuming it's clean of flow-through effects. So it's not gonna make me work really crappily because I'm not happy enough to actually do work.
SPENCER: But yes, as a clean, abstract thought experiment. That's incredibly noble and good, but also kind of a shocking degree of altruism, I think. It's interesting to me because I usually think of that kind of devotion as coming along with a belief in objective moral truth. But if you think it's subjective, I'm wondering what drives that level of willingness to self-sacrifice. Maybe it's just the thought experiment you mentioned earlier, but I'm curious if you have anything else to say about that.
JOEY: Yeah, I think it is the degree of suffering that exists in the world; I find it extremely compelling. And I find my own identity not particularly ethically important. You get there quite quickly with the veil of ignorance plus not having a particular attachment to self-identity. I maybe have some strange views on self-identity in general as well, in that I probably don't value my future self as much as other people do. I do value him insomuch as he's gonna do good, but I'd describe it as: say you're watching a movie, and you're experiencing that movie, and there are many other movies going on at the same time. And you're like, “Okay, this movie isn't particularly special. It's the one that I'm watching. I'm in the theater and this is the movie I'm experiencing—the movie of Joey Savoie, his life.” But I don't think that adds any transcendence to it, at least from an ethical perspective. So I just don't see my happiness as all that special.
SPENCER: Now, do you feel identified with your past self? Do you feel like you're the same Joey, or does it feel like a different person?
JOEY: I do, but probably less than most people. I think some people have a really strong sense of self-identity, and I definitely have a coherent narrative for how I got here, that sort of thing. But I think I probably feel more kinship to people who are similar to me now, but who are not me, than I would to my high school self, who might be quite different from who I am now.
SPENCER: Yeah, that's really fascinating, because I feel so identified with my past self. I just feel like I'm the exact same person: hopefully a wiser version of who I was as a child.
JOEY: Do you think that would change if you changed more, though? Maybe it's just that you've stayed very similar?
SPENCER: That's an interesting question. I perceive myself as really the same. Obviously, I've changed in many, many ways, but it just feels like the core is the same. But yeah, it's a funny personality trait. One thing I've been thinking about lately is that there seem to be three main drivers of ethical behavior in humans. The first is our emotions. If you see someone suffering, you want them to stop suffering; most people do. It's like, “Ah, that's terrible.” You feel it's bad on an emotional level: you have compassion, you have love, empathy. The second is social norms. People just sort of automatically mimic each other without even thinking about it. So if people don't go around punching people in the face, you're not gonna go around punching people in the face by default. Part of the social norm thing is just this automatic mimicry that humans seem to have, and part of it is wanting to avoid punishment: you don't want to do stuff that's considered socially bad, because then you might be ostracized. And the third piece is our beliefs about what's good: the logical beliefs, or theories, or frameworks we have. That could be anything from being raised Christian and taught that the Bible is the source of truth, which you believe on an intellectual level, to being an effective altruist, or whatever your philosophy is. I think all three of these things tend to drive people's ethical behavior, but some people are much more driven by one than the others. You might have a super empathic person who has no particular philosophy about the world but really tries to help people, because they just have such empathy and compassion. And usually they'll help in a certain way: they won't try to save the whole world, they'll try to help the people around them and make sure everyone around them is happy, and so on. Whereas, on the other hand, you might have someone who's essentially sociopathic in the technical sense of not being able to experience empathy, or compassion, or love, but who's extremely driven by some intellectual belief about what's good, and that might actually cause them to behave in a good way, even though they lack empathy. So I guess what I'm getting at is that these three forces drive us to different degrees. And it seems to me, just talking to you, that you tend to be driven by that third piece, the belief piece, maybe a lot more than other people. I'm curious, do you think that's true?
JOEY: More than other people, yes. More than other EAs, I'm not sure. I think I have the first one quite strongly as well. A lot of the time, it's the empathy that gives you the fuel: being able to really vividly imagine things, or put yourself in other people's shoes, that sort of thing. But it's definitely then applied to a cognitive system I've built. And I trust my cognitive system kind of the same way you trust a task management system: “Okay, I've set up my values here, I've thought about them really thoroughly, here are the principles I operate on.” And then the empathy creates the fuel or the energy or the urgency that allows me to put more intensity into it than a lot of purely rational altruists would. Do you also have the more rational side weigh heavier for you, or is it some mix of both?
SPENCER: I think it's a mix for me. I think I'm less likely to copy other people's behavior than a lot of people are. I remember a bunch of cases, especially when I was young, where I just wouldn't copy people's behavior and they thought it was really weird. For example, I never drank alcohol, and people always thought that was weird. I remember one time I got to a party and people wouldn't let me in, like, “We're not gonna let you in until you drink this beer.” And I just sat there refusing to drink it for like 20 minutes until finally they felt bad. [laughs] So I think I'm just a little bit different in that way, less driven by that social piece. But I do think I have the first and the third both really strongly, and they're both a really big part of my behavior. I don't think one faculty dominates.
JOEY: I think in some ways, that second piece is kind of a social resilience aspect. And I actually think that to be very extreme in altruism, you almost have to not have that piece. Because if you did have that sanity check of “What is normal in society?” you just wouldn't push as far. You wouldn't have as strong a view about how much to donate, or that sort of thing.
SPENCER: Yeah, it reminds me of this guy (who I imagine you've probably read about as well) who made a bunch of money in business and then decided to give it all away. But he went further: he gave away one of his organs, was giving away his family's money, et cetera. There was a profile of him, and what's really fascinating to me is that the reaction wasn't just “Oh, that guy is amazing,” and it wasn't just “Okay, that's kind of weird.” People wrote this guy hate letters. I'm sure some people were inspired by his behavior, but others were horrified by it. It was as if they saw it as a grotesque perversion because he took altruism so far.
JOEY: Yeah, well, I think people really hate it when altruism starts competing with other values. When we're talking about trading off happiness, it's all fine and dandy to say, “Oh yeah, I like helping people and I think it makes me net happier.” But as soon as you say, “Yep, I think I'm just less happy because I'm being this level of altruistic,” people get really uncomfortable. Part of that is, I guess, an implicit judgment they feel is being made, like, “Oh, I don't want to do that.” But part of it is a broader moral discomfort with, “Okay, this person is taking that idea really seriously, and that pattern-matches some scary things other people have done when they take ideas really seriously.”
SPENCER: Ah, yeah. It's sort of like they're unbound in what they could do, right? They're not limited by the normal things that limit people. And maybe there's some wisdom in that intuition. Some of the most horrible things that have ever happened happened because someone violated normality so, so far.
JOEY: Yeah, I agree. I think if you have social resilience, you actually have more of an obligation to be thoughtful about your ethics and to make sure you're not stepping on other ethical considerations, because you've taken away the guardrails of standard societal norms that keep people from going off the deep end and doing terrible things. So you almost have to enforce your own guardrails if you're going to say, “Okay, societal guardrails aren't good enough; I want to push society forward in some way. Okay, then let's make really, really sure that what I'm doing is constructive across a lot of different viewpoints and perspectives.” So maybe back to your original question of what you do if you have 51% confidence: maybe it depends how far outside the norm it is. If it's super far outside the norm, you want to be a lot more cautious about decisions like that.
SPENCER: Yeah. It reminds me of the Sam Bankman-Fried interview where he was asked something along the lines of, “Would you take a 51% chance of doubling the world and a 49% chance of destroying it?” And he was like, “Sure.” To me, and I think to a lot of people, that's horrifying, because there's so much on the line. It's not just your own thing; you're putting everyone's thing on the line, right? So there's something like, “Wait a minute, if your moral theory says to do something that almost everyone finds repulsive, you probably want to think about that really carefully.” Right?
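[A minimal sketch of the arithmetic in this thought experiment: the 51/49 odds and the 2x payoff come from the question itself, while the repeated-play framing is an added assumption for illustration. A naive expected value calculation does endorse the bet, even though taking it repeatedly all but guarantees destruction.]

```python
# Naive expected value of the "51% double, 49% destroy" bet,
# with the current value of the world normalized to 1.0.
# The 51/49 odds and 2x payoff come from the thought experiment;
# the repeated-play framing below is an added assumption.
p_win, p_lose = 0.51, 0.49

ev_take = p_win * 2.0 + p_lose * 0.0   # = 1.02
ev_pass = 1.0

print(f"EV of taking the bet: {ev_take:.2f}")
print(f"EV of declining:      {ev_pass:.2f}")

# Naive EV says take the bet (1.02 > 1.00). But if the same bet is
# offered repeatedly, the chance the world survives n rounds is
# 0.51**n, which collapses toward zero almost immediately:
for n in (1, 10, 50):
    print(f"P(world survives {n} repeated bets) = {0.51 ** n:.2e}")
```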
JOEY: Yeah, I think you should be spot-checking yourself, especially when you're going against tons of other moral perspectives. Like, how confident are you really? There are lots of brilliant people who have pretty divergent ethical and evidence-based views. And a trade-off like that, yeah, I don't feel like it's something you want to do. At least, you don't want to do it if enough people from enough different perspectives are saying, “Hey, that's really, really wrong.” So yeah, I really think more people should have guardrails on that sort of thing. And a lot of the things that look the most effective are very traditionally good on basically every metric, like getting more bed nets handed out or fewer children eating lead. There's not a lot of controversy there, and it looks good across a lot of different ethical systems. But as soon as something strongly violates ethical norms in order to massively benefit others, I think that's where you get big mistakes. FTX is a very recent example, but I think historically, too, that's where a lot of big mistakes come from.
SPENCER: Do you have an opinion on the FTX catastrophe, like what happened there? And what that should mean for the EA community?
JOEY: I think my main thought is, I really hope we take the right lessons away from it. There are some lessons we could take away that would be really, really good: we should probably have better governance, we should probably not make so many assumptions about how much funding is coming our way, and we should probably have more diversity of perspectives and viewpoints. All that stuff is really great. There are also some negative updates that could happen, like, “Hey, maybe we shouldn't be so risky; we take too many big risks or do too much entrepreneurship.” I think that would be an incorrect update. So it's yet to be determined what exactly the EA movement will take away from it, but it could certainly be a lesson. I think in many ways I'm more in the classic EA camp, focusing on animals and poverty and near-term issues, or issues that are more quantifiable and evidence-based, and also the kinds of norms that come with that: on the more frugal side, on the side of more standard, normal values when thinking about actions. And I do think those areas are less likely to produce big explosions like this. There are reasons for that: if you have good feedback loops, stuff like this gets caught earlier. If you generally work on things with no feedback loops, stuff builds up until there's a really major crash.
SPENCER: You mentioned that you don't want the EA community to update by not taking risks. But I feel like there are risks, and then there are risks. There are things that are risky like, “Oh yeah, you're going to try to start some new charity, and you don't know if it's going to work out.” That's a risk. But then there are risks where, if it goes badly, a million people's lives will be really destroyed. So I'm curious to hear your unpacking of that: what are you worried people will stop doing in the name of avoiding risk?
JOEY: Yeah, I think sometimes there's just really high fear attached to really low possibilities. For instance: do you hire one accounting firm to do your finances, or do you hire one accounting firm and then a second one just to check the first one's work and really dot the i's? You can move quite a bit slower as a community if you do that. Obviously, with FTX, it would have been nice if they'd had any accounting at all. The first level you definitely want to take, but you don't want to deter people from taking risks whose downside ultimately isn't that significant. The downside of starting a lot of charities is that you waste two co-founders' time for two years and maybe half a million dollars or so. That's an acceptable risk, given the occasional charity that has a really big impact. If the risk were that everyone hates the community forever, or that you do major harm to other people, those would be really, really different risks. So I guess what I mean is: don't become more risk-averse about risks whose downside is trivial or net neutral; reserve that caution for risks with huge net-negative outcomes.
SPENCER: Right. Yeah, that makes sense. You also mentioned a moment ago that you tend to focus on (I don't know quite what to call them) more traditional cause areas, like, “Okay, let's make sure people are healthy and let's not torture animals,” and things like that. This, I think, actually makes you a little different from where a lot of the EA movement has been moving in recent years, with people focusing more and more on longtermist cause areas. So I'm curious: what's your reaction to that, and where do you think you differ from others on it?
JOEY: I think it's actually a more common view than is expressed. A lot of the people working in these classic areas are off doing that work rather than writing on the EA Forum or going on podcasts, so there's a bit less active outreach happening from that side of the EA community. But yeah, it is a worrying trend. I think you want to keep the movement open to new causes coming in, whether those are longtermist or short-termist causes, and you do want a high level of generalized rigor. I do think people sometimes get caught up in one methodology, like a naive expected value calculation, and lean really, really heavily on that, as opposed to looking at a broader plurality of methods or encouraging critical thinking from a bunch of different actors. So most of my concerns are epistemic in nature, about the sorts of evidence and the types of arguments that are used, as opposed to an ethical stance on whether people in the far future matter. I totally think they do.
SPENCER: So let's take the very basic argument for longtermism, which is that people in the far future matter even though they don't exist yet, just the way our lives matter now even though we didn't exist 1,000 years ago. Additionally, as long as society continues to exist and humans don't go extinct, there will probably be massive, massive numbers of humans in the future. And finally, if we can do something now to change the probability of humans continuing to exist, or of humanity flourishing instead of civilization going badly, then when you multiply those numbers together, you get really, really large numbers, right? So that's the basic argument for why the far future might be the most important thing: this expected value calculation. Where do you see yourself deviating from that chain of argumentation?
JOEY: Yeah, I basically don't find a highly uncertain but high-value expected value calculation compelling. I tend to be a lot more concretely focused on questions like: what's the specific outcome of this? How much are we banking on a very narrow set of outcomes? How confident are we that we're going to affect that? What's the historical track record of people who've tried to affect the future? There are a million and a half weeds and assumptions that go in. And I think most people on both sides of this issue, near-term causes versus long-term causes, just haven't engaged that deeply with all the different arguments; there are a lot of assumptions made on either side of the spectrum. But I actually have gotten fairly deep into this. I've had this conversation a lot of times and thought about it quite thoroughly. And yeah, a lot of the assumptions just don't hold.
SPENCER: It seems like a lot of the time, people's reasoning is something like, “Well, even if I only have a probability p of having an impact, when I multiply that through, it's still incredible,” where p is some sort of semi-made-up number that sounds small, like 0.1%. It's not necessarily arrived at through principled reasoning; it's more like, “Yeah, okay, it's hard to say you definitely won't have probability p.” It's a little bit made up, but maybe plausible. So is it the construction of that probability that you tend to have issues with as well?
JOEY: Both. I'd question how the probability is constructed up front, but I'd also say that the fundamental argument isn't a great way of making decisions, so it's kind of a broader claim. And I think this goes back to the question of what's a useful heuristic versus what's a theoretically sound one. If we think expected value calculations are a really good tool in general, that's very different from thinking they're a really good tool for certain, well-understood phenomena. Depending on which you believe, the argument can end up being worth anywhere between a ton and nothing. I have an evidentiary threshold that things need to pass before they count as anything, and I think a lot of the arguments made don't pass that threshold, regardless of the EV at the end of the tunnel.
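[A minimal sketch of the sensitivity worry being discussed: every number below is invented for illustration, none comes from the conversation. With an astronomical assumed payoff, the conclusion of a naive EV calculation is driven almost entirely by a probability nobody can pin down.]

```python
# A naive longtermist-style EV: (assumed probability of having an
# impact) x (assumed astronomical payoff). All numbers here are
# hypothetical, chosen only to show how the result swings.
FUTURE_LIVES = 1e15  # hypothetical number of future lives at stake

for p in (1e-3, 1e-6, 1e-9, 1e-12):
    ev = p * FUTURE_LIVES
    print(f"assumed p = {p:.0e}  ->  EV = {ev:,.0f} lives")

# The conclusion swings across nine orders of magnitude depending
# entirely on a probability no one can estimate to within even a
# factor of a thousand, which is one way of stating the objection
# to leaning on the raw EV number.
```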
SPENCER: So before we wrap up, I suggest we do a rapid-fire round, where I ask you a bunch of questions, and you give your quick thoughts. How does that sound?
JOEY: Sounds great.
SPENCER: None of these are easy questions. [both laugh] One thing I've observed is that people who are trying to do the most good may have a tendency to flip between projects, because they're like, “Oh, this thing seems like the most good.” And then a year later they're like, “Hmm, but maybe it's that thing.” And it feels like they can cycle like that and end up not doing that much good, because they keep changing their minds. So I'm curious to hear your quick thoughts on that.
JOEY: Yeah, I think this is a super common pattern, especially in entrepreneurs, who I work with a lot. What I often tell people is that five fully finished projects are a lot better than ten half-finished projects. It's better to set specific evaluation points at key moments in the future, at which you reevaluate and then potentially switch to another project. So it might be, “I'm going to work on this project, head down, for two years and do a really good job. Then I'm going to come up for air, reevaluate, explore, settle on the next project, and focus for another three years.” I think that method gets you a lot of high-quality completed projects while still keeping you open-minded. You just need to remember which hat to wear and wear it at the right time.
SPENCER: So earlier, we talked about charity entrepreneurship. What is foundation entrepreneurship and why might people consider that?
JOEY: This is another newer idea, aimed at funders who are starting out. A lot of the time they want to do a really good job, and there's relatively little explicit training. A lot of foundations, I think, get advice from people who want to lead them to a specific outcome. I'm much more interested in something like an incubator for foundations, where they learn how to make really good decisions themselves and then go out and individually make good granting decisions. So instead of 25 foundations all granting through one person who makes the decisions, you have 25 intelligent, independent actors funding impactful things in a decentralized way. I think that avoids situations like FTX, where there's really heavy dependence on a very small number of funders. Setting up funding circles or a distributed funding network, where each funder is impact-orientated but has slightly different epistemologies and values, just seems like a much healthier ecosystem.
SPENCER: So people do a lot of things to find meaning in life, right? Sometimes they get meaning from their hobbies, or from their children, and so on. And I know that you think about getting meaning through altruism. Do you want to just make some comments on that?
JOEY: Yeah. I value altruism really highly, and I think there are lots of really good reasons to. But I also think that having a central purpose in your life is super undervalued in general. If you have some sort of goal (“I'm going to write a best-selling book,” or whatever), it pushes you forward and keeps you motivated. And I think it's actually quite a good way of finding happiness, though indirectly, because you're chasing the meaning instead of the happiness explicitly. I don't think many people consider altruism as a source of meaning, as the most fundamental thing to their being, but I think it's super valuable and can be very satisfying. A lot of people maybe come around to it really late in life, after they've retired and had kids, and go, “Oh, how do I get meaning in my life? Maybe I can do philanthropy or donations.” But you can find that a lot more easily and a lot earlier if you ask, “Hey, what if I set this as my fundamental purpose and built my life around it? How much good could that do? And how would that affect me and my psychology?”
SPENCER: So suppose you're talking to a non-effective altruist, and they're interested in helping the world. What's a piece of advice you would give them?
JOEY: The very short form is that different things do different amounts of good, and you should try to move toward the things that do more good. Donating 10% is better than donating 5%; donating to a charity that does three times as much good because it works overseas is better than donating to a local charity. The point is just that good is quantifiable, and whatever resources, time, or energy you're going to put toward doing good, you should try to get the best output for them. The shortcut is: look at GiveWell's top charities list and donate there. But that thinking applies at every level: cause areas, careers, all these different things.
SPENCER: Okay, now, suppose you're giving a piece of advice to effective altruists. Obviously, everyone's different, but what's a general piece of advice that you think effective altruists could benefit from?
JOEY: The first one that comes to mind is: defer less. A lot of effective altruists are really brilliant, really smart, really interested, and want to make their whole lives about charity and helping the world. And yet they kind of just pick a relevant thought leader and defer to them. I benefit a lot from this myself, in that I've been around for a while and I do get picked. But I think these are people who would be leaders of the EA movement if they had arrived 10 years earlier, and I just want them to independently come to conclusions and think through their strategies, their actual ethics, and what's most effective. We tend to follow a couple of big thought leaders a bit too much, and there's a really large risk in that, the FTX thing being the most relevant and obvious example.
SPENCER: Final question. So suppose someone wants to start a charity by working with Charity Entrepreneurship. What should they do?
JOEY: Yeah, go check out our website; we try to put up tons of resources publicly. Even the application process for the program is a series of mini-tasks that help you figure out whether you'd actually like entrepreneurship. So it's really quite a good way of self-assessing whether that's a viable career path for you. We run programs every six months, so there's basically always an open window, or at least very frequent open windows. I think it's a really good thing to consider if you lean in the entrepreneurial direction and also want to do a lot of good with your career.
SPENCER: Joey, thanks so much for coming on. This was a really fun conversation.
JOEY: Great. Thanks for having me, Spencer. It was really fun for me too.
[outro]
JOSH: How do we know whether or not we are marrying the right person?
SPENCER: It's really tough to answer questions like that, but I think there are some factors to look at. When I think about compatibility, the first thing I think about is emotional compatibility: Do you find it fun to be with this other person? Do you like this person? Do you have good vibes and good feelings when you're around them? That's the first key piece. The second piece is compatible life goals. If one of you wants to have seven kids and the other definitely doesn't want children, that's going to be really hard to make work. If one of you wants to live in one country and the other absolutely doesn't want to live there, it's hard to make work. So you have to share life goals, including around finances: what kind of financial life do you want, and what are you willing to accept? That's the second piece. The third piece is attraction: feeling sexual attraction, having similar levels of sexual desire or at least being able to meet in the middle, and also things around sexual style (sometimes people are just really compatible sexually and sometimes they're not). A fourth one I'll mention is what sort of person your partner makes you and what sort of person you make your partner. Does your partner make you a better person or a worse person? Do they make you more like who you want to be? And similarly for you in the other direction. Those are the four big elements, and if you get those big pieces right, you've gotten pretty far toward finding the right partner.