May 31, 2021
How can people be more effective in their altruism? Is it better for people to give to good causes in urgent situations or on a regular basis? What causes people to donate to less effective charities even when presented with evidence that other charities might be more effective? We can make geographically distant events seem salient locally by (for example) showing them on TV, but how can we make possible future events seem more salient? How much more effective are the most effective charities than the average? How do altruists avoid being exploited (in a game theoretic sense)? What sorts of norms are common in the EA community?
Stefan Schubert is a researcher in philosophy and psychology at the University of Oxford, working on questions of relevance for effective altruism. In particular, he studies why most donations don't go to the most effective charities and what we can do to change it. He also studies what norms we should have if we want to do the most good, as well as the psychology of the long-term future. You can email him at firstname.lastname@example.org, follow him on Twitter at @StefanFSchubert, or learn more about him at stefanfschubert.com.
JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Stefan Schubert about how people can be more effective in their altruism, the epistemic factors and incentives that impact charitable giving, research on effective charities, and norms in the effective altruism community.
SPENCER: Stefan, welcome. It's great to have you on.
STEFAN: Thank you so much, Spencer.
SPENCER: The first topic I want to talk to you about is one that I think is really important in the real world, which is how do we help people be more effective with their altruism? If someone's setting out to improve the world — let's say by donating money — how do we help them do so in a way that actually causes more good rather than less good?
STEFAN: This is the main topic of my research actually, that I'm conducting with Lucius Caviola at Harvard. It's a really complex and nuanced issue. We're thinking that it's not that there's just one obstacle that is causing people to give less effectively, but there are multiple obstacles. One issue is that people just don't know what the effective charities are, and they have other misconceptions about how to donate effectively. So they look at overhead ratios and think that that's a good measure of effectiveness. But actually, you can have a very low overhead ratio and still not be particularly effective if you're working on a very intractable problem, let's say. And many people donate to disaster relief, but it can be more effective to donate to charities that work on more recurrent problems.
SPENCER: Let's take a step back. Let's talk about overhead ratio for a second, for those who don't know. As I understand it, the overhead ratio is the percentage of the money going to a charity that is going to things like administration and paying the salaries of the staff and stuff like that, not the direct work that they do, not (let's say) money that's being given to the end recipient. Is that right, it's that ratio?
STEFAN: Exactly. Sorry, I should have been a bit clearer there. Yeah, you're exactly right. People often have this view that it's very important that as much as possible of our resources go to the beneficiaries in the end. And there's some merit to that thinking, of course, because we don't want charities to waste money. But even if you're not literally wasting money on things that are totally unnecessary, you can still fail to be very effective even if you're trying hard and being cautious because you're picking a problem that's hard to make progress on or you're picking a method that's not optimal. So much about charity effectiveness comes from those factors: picking the right problems and addressing those problems with the right methods. And that's something that people might not really recognize but they focus so much on these overhead ratios.
SPENCER: Right, because I guess the intuition is that, if all of the money is being spent on the salaries of the employees, it feels like they're probably not helping the world very much. But you could imagine a situation where (let's say) 100% of the money is going to the recipient. Let's say what the charity does is they give people vitamin D or something like that and so all of the money is going into giving people vitamin D — that sounds fantastic, very low overhead ratio — but it turns out, actually vitamin D doesn't help the people that are receiving it. Let's suppose that, in this case, it actually gives the people no benefit, then you'd have an extremely low overhead ratio but you'd have actually no impact, right?
STEFAN: Yeah, exactly. That's a great example. Those types of things actually happen a fair amount. And there are lots of things that have some impact, that are somewhat useful; it's not like they have no impact at all. People also have multiple preferences that run counter to effective giving, like preferences for charities that they have a personal connection to and a preference for disaster relief. Often, when there is a concrete effective charity that people could donate to, there are several preferences and misconceptions that block them from giving to that charity. That charity might have a high overhead, it might not address a cause that they have a personal connection to, it might address a recurrent problem as opposed to disaster relief, and so on and so forth. And the problem is that, unless we overcome all of these obstacles, we won't give to this most effective charity. That means it's not enough to just address this so-called 'overhead myth' (the idea that effectiveness is about overhead), because that will just overcome one of the obstacles, and people will still not donate to this effective charity even if the overhead myth is dispelled, because there are these other obstacles as well. We need to overcome all of them. And this fact, that there are multiple obstacles to effective giving, explains why effective giving is so hard and why it's so rare. Instead of addressing individual obstacles, we might need a more ambitious approach, and that is to develop general preferences for effectiveness, underwritten by general norms that say effectiveness is a good thing. These norms would gently push people to overcome all the different obstacles. But currently, there aren't such norms. In fact, there's some research that suggests the norm is instead that you should give to what feels best for you.
There was this one paper where they asked people, "Would you give to this charity which you feel for or to this more effective charity?" And they said, "Well, I will give to the charity that I feel more for." But then they also asked (I think, another group of people) "What's right to donate to?" And then they say, "Well, it's actually the right choice to give to the charity that you feel for," like you're justified in doing that and that's the norm, to give to what you feel for. And that's arguably the root cause of ineffective giving. Ideally, one would address that root cause and would develop more norms of effectiveness. I think that's what Effective Altruism is doing, raising the norm of effectiveness within the effective altruism community. But promoting such a norm is hard work because it goes against how many people see charity, which explains why there aren't more effective altruists in the world.
SPENCER: Can you have a situation where almost all of the money is actually overhead, and yet it's still an impactful charity?
STEFAN: Unfortunately, I don't have an example at hand but, in principle, it's definitely possible. It depends also on how you define overhead ratio, which is absolutely not straightforward. But many charities which are very effective, those are quite complex and you need to invest a lot in research and highly skilled employees and so on, and that will mean that the overhead ratio will go up. But then with the remainder that you're actually helping people with, you can still be highly effective.
SPENCER: I can imagine an example where almost the only thing that has to be done is some kind of rigorous research where all of that gets counted as overhead essentially, because it's just paying the employees of the group to sit around and think and write down their thoughts and debate with each other or something like that, but that's (quote) "overhead." Again, it depends on how you do the calculation. Would you say that there's no correlation between effectiveness and overhead, or just that it's a much weaker correlation than people think?
STEFAN: I think I saw some research which suggested that there was no correlation or almost none.
SPENCER: Right. It's probably a tough question also because it's hard to estimate how effective organizations actually are. It's hard to calculate the correlation if it's hard to estimate how effective things are.
STEFAN: Exactly. There are numerous measurement issues.
SPENCER: You mentioned another interesting example, which is recurrent giving versus giving that's based on some event. Like there's a big crisis — there's a flood, there's a hurricane — and a lot of people want to give in those situations because they feel like there's an unusual demand. And also it's just on their mind, they're reading about this terrible tragedy and they want to help. What are your thoughts there about why is it not necessarily a good idea to give in those situations and it might be better actually to give in a recurring way instead?
STEFAN: This is something that effective altruists have talked about ever since the start of the effective altruism movement. I focus on the more psychological bits, like why people are more inclined to give to disaster relief. But on this issue of effectiveness, one issue is, there is a disaster, it's in a faraway place, and resources need to get there very quickly. And, if you do give, your money might just not arrive in time. So that's one issue. Another issue is that, precisely because of the psychological bias (if you want to call it that), this psychological tendency to give to disaster relief, money is flooding in anyway, so your additional money won't make as much of a difference because the problem isn't as 'neglected,' as effective altruists call it.
SPENCER: I remember, after Trump was elected, hearing about certain organizations that were working on pro-democracy stuff. They were just flooded with so much money as a reaction to Trump being elected, that it was inconceivable that they would have an effective way to use that much money. Because, if your budget swells by 10x or 30x over a period of a year, it's very hard to use that.
STEFAN: Yeah, exactly. One could speculate that we have this tendency because, in the ancestral environments, sometimes there were emergencies and we have this system that kicks in, and then you really need to help, which is useful in some ways. But then in the modern world, it's often the case that there are these more permanent or semi-permanent problems that we need to address, and unfortunately, we sort of underreact to them, and we react very strongly to these urgent crises. And there are numerous other such tendencies. Another tendency is that we react very strongly when there's an individual identifiable victim, then we really want to help, whereas, when there are larger numbers of anonymous, so-called 'statistical victims,' then we don't react as strongly.
SPENCER: If we can see the picture of the little girl's face, we want to really help that little girl who's in need, whereas, if it's just numbers on a piece of paper about, "Oh, you can help these 500 villagers," that's actually maybe less compelling. We have a harder time making an emotional connection with those 500 people even though maybe it's much more effective to help them than the one girl.
STEFAN: Exactly, that's a big issue. And then, the other thing is that, with disaster relief, for instance (if we just take that), one issue is that people think that disaster relief is more effective even when it isn't. But then we had a study where we also told people that this other charity is more effective, and then, still, some people went on giving to the disaster relief charity. And it was the same with overhead. When we told people that, actually, this high overhead charity is more effective (the charity with a high overhead ratio that's more effective), they still wanted to give to the charity with low overhead. There are two obstacles here: one is the pure misconception, but then there's also the preference for a low overhead charity and for a disaster relief charity. Those are two distinct obstacles.
SPENCER: Got it. And so I really want to dig into this with you, the psychology of the different reasons why people might actually prefer to give to something that's less effective. Obviously, if you had two charities and they both were working on something people cared about, and they're equal in every way — except one's more effective, one's less effective — it's clear people are going to give to the more effective one, right? If people want to improve education, they'd rather improve more rather than less. And if people want to help sick children, they want to help them more rather than less, clearly. However, I think what happens in real life is that all else is not equal. In many cases, there's something more appealing about the less effective charity that makes people actually prefer it, because it's not just the same except more effective, right? So I'm curious to hear your unpacking of what are these different forces at play that lead people towards the less effective option?
STEFAN: Oh, yeah, that's a great observation. It's very perceptive and I think that's exactly right. I think there are some people who are actually too cynical about this. They say that people don't care about effectiveness at all, and that's just not true. When there are these surveys, people list effectiveness as important. That's one of the top criteria that they have. So you're absolutely right: when everything is equal, they think that obviously you should be effective, and not being effective would be wasteful and would be wrong. But then in the real world (as you're also saying), everything else isn't equal. So there are conflicting criteria. Often people feel a personal connection with some particular charity (that's another source of less effective giving), and they're going to go with that even if they're told that there's some other charity that's more effective, so that trumps their preference for effectiveness. But that doesn't mean that they don't have this preference for effectiveness; they really do. And if there is a way to help this cause that they're personally connected with more effectively, then they will want to go with that.
SPENCER: Yeah, that makes a lot of sense. I think it's important to distinguish two types of giving. One is giving that is actually not altruistic. It's either social signaling or purely to make yourself feel good. It's not really a form of altruism. And then a second type of giving that really is about helping others, it really is truly altruistic. And then, of course, given the complexity of humans, we can actually be somewhere in the middle, right? Like you'd have a form of giving that's somewhat altruistic, but you actually have mixed motivations and you're partly trying to signal to others what a good person you are, but also, you actually do want to help, you actually have both motivations combined. But I think it's useful to distinguish because there is a type of giving that is really cynical, that really is just for status or something like that. And then there is a type that really is just about helping others and then there's everything in between.
STEFAN: This is an interesting, complex, and also kind of confusing question. I do agree that there are some examples you can point to where people were literally able to reduce their taxes by donating a piece of land and then claiming that this piece of land was more valuable than it actually was, or something like that. So they literally benefited in cash terms from donating. I think there are such examples.
SPENCER: Also ones where it's just more of a social thing, like someone gives because people expect them to give and they don't really care about the cause but they want to look good, or at least they don't want to look bad.
STEFAN: Yeah, but in such cases, it becomes a bit trickier. Of course, one can envision a case where someone is just making a cost-benefit estimate like, "I want to impress this business partner." Maybe companies are thinking in this way, to some extent, making these donations in order for 'our company or our brand' to look good, and so on. But I would expect that, lots of the time, when individuals donate — at least on the conscious level — people think in terms of like, "I actually want to help people." But then that feeling might in turn be colored by unconscious motives which are more selfish.
SPENCER: Right, right. Well, for example, people might be drawn towards a trendier cause where there's going to be more social clout from promoting it, as opposed to maybe an uncool cause (where actually maybe you get a little bit of social punishment for promoting it), even though you think it's actually just as beneficial or more beneficial. Yeah, it's an interesting question of how conscious or subconscious this is. I think both can operate; I think there can be a social gradient that we get pushed along. I've noticed this in conversations with certain types of people, where I started noticing the things I'm talking about being shifted by the way they're reacting. And then afterwards, I'm like, "Wait a minute. That was kind of weird how my actual social interaction was subconsciously being shifted, but now, in retrospect, I see it." Maybe if you reflect on it, you can realize, "Oh, yeah, maybe I was kind of buying into this trend," but in the moment, maybe it doesn't feel that way.
STEFAN: Yeah, yeah, that's interesting.
SPENCER: So we've got a spectrum of different motivations. But I'd like to go through some of the biggest non-effectiveness-related motivations that you think influence what causes people tend to give to.
STEFAN: I covered some there, like people have a preference for low overhead charities and then they have a preference for disaster relief charities and a preference for charities that they have a personal connection to. We also covered the identifiable victims. Another big thing is, of course, proximity. People prefer giving to people who are close to them socially and spatially. There's this concept of 'charity begins at home,' like you should give locally to your local community, even though it can often be more effective to give to people in distant countries, particularly if those countries are poor.
SPENCER: It seems like there are multiple things going on there. One is that we tend to notice problems in our own communities, you're walking around and you just literally see it. A second is that some people seem to have this norm that it's better to help your own community. It's sort of like a responsibility thing. If you see trash in your own community, then it's your responsibility to make sure it gets picked up, whereas, if you're visiting some foreign city, maybe you don't have the responsibility. So it's like the way you might have a responsibility to your family, you might also have a responsibility to your community.
STEFAN: That's a good point. Interestingly, that might also vary a bit with the social and political system that you live in. I'm Swedish and, in Sweden, the view has been (from the Left) that like, "Well, we shouldn't need charity. The government should provide for people." So then maybe people would think that, "Well, I don't have as much of a responsibility to my local community because the government takes care of that," or it's supposed to take care of that. They might think that it actually does, and then, you're free to support a cause in some foreign country, whereas, in more low-tax countries where people donate more to charity — people in the US, for instance, donate more to charity than they do in Sweden — then they might have more of this responsibility in the way that you're talking about.
SPENCER: That's really interesting. Well, another factor is similarity. It seems like a pretty general rule that humans try to help those similar to them. It's amazing how predictive this really simple rule is. It applies to everything: your family is more similar to you than non-family, and most people try to help family more than non-family. Your neighbors are more similar to you than people who live on the other side of the country. People within your country are more similar to you than people in other countries. Humans are more similar to you than animals, and animals are more similar to you than insects, and so on. It seems that this is just a really, really strong force in the way humans think about whom to help. And that also has to do with locality, like your community is more similar to you. I'm wondering if you have thoughts on that.
STEFAN: I think that's exactly right. It's not just spatial distance and social distance but also the biological distance that you just covered. People help other humans more than animals. And then, when it comes to animals, we tend to help animals that are more dissimilar from humans less than we help other mammals, for instance. And then another dimension is temporal distance. We help present people more than future people, and there's been a relative neglect of the distant future, which is something that many effective altruists are focusing on.
SPENCER: Yeah, that's a really fascinating one. Climate change seems like a counter-example of this, where people seem to care a lot about the world in a hundred years when it comes to climate change. But for the most part, people do seem to neglect future people that don't exist yet. What do you see as some of the psychological forces there?
STEFAN: That's a good point with climate change. Maybe that should give some hope also (and maybe it could change), that people might care more about going forward. It's complex. One thing is just the general thing that you pointed to, that we help people that are similar to us and that are close to us. But then, one thing that has been on my mind a bit is that the future doesn't feel very salient somehow, whereas people who are spatially distant — thanks to modern technology — you can still make their suffering very salient by showing it on TV, which you obviously couldn't, hundreds of years ago. But now you can, so it can be made salient, whereas the distant future remains psychologically not really on our intuitive radar, if you will. It's hard to picture and imagine it.
SPENCER: Do you think this is because we don't know what the future will be like? It's this vague cloud of possibilities, whereas, when you have a single child who's suffering, it's very concrete. Or is it more around emotional resonance, that it's easier to have an emotional reaction to something that's happening today?
STEFAN: That's a good question.
SPENCER: Maybe it's both factors.
STEFAN: Yeah, probably both. Obviously, there is something else also regarding the more epistemic point that people might have a partially justified concern that maybe there isn't so much I can do now to help people in the future because, if I'm trying to do something now, who knows what effects that will have? Or maybe people would have taken action later on against this problem that seems so important now so, actually, what I'm doing doesn't make a difference. There is all that debate about cluelessness, as people in the effective altruism philosophy have called it.
SPENCER: What's cluelessness?
STEFAN: 'Cluelessness' is that we can't predict what the effects of our actions on the distant future will be.
SPENCER: Sort of like the butterfly effect kind of idea, like I might take this action that seems like it's gonna help us in the future, but there's gonna be so many weird second-order and third-order effects from it that it's very hard to predict what will actually happen from it.
Like if you somehow go back in time and assassinate baby Hitler, that seems great, it's probably great. But then also, the world could be really, really, really different in ways that are completely unexpected today that we just couldn't even imagine.
STEFAN: Yeah, yeah. People might be thinking of things like that so that could be another thing.
SPENCER: Where do you stand on the longtermism thing and the difficulty of influencing the future? I know this is a kind of side point but I'm interested.
STEFAN: Well, I do think it's something to take seriously. For instance, this focus on existential risk within the effective altruism community and the longtermist community partly stems from such concerns, like, "Well, it's very hard to know what the effect of our actions will be on the distant future. But one thing we can know, and that is, if we all go extinct, then we will stay extinct. So that isn't as affected by these cluelessness concerns. Therefore, it is potentially useful to work on preventing human extinction." I think that strategy choice is related to these cluelessness concerns, and I agree with that. I think there is a case for working on existential risk reduction for this reason.
SPENCER: Yeah, it seems especially strong if we're thinking about near-term risk reduction. You might say, "Well, I don't know if I can prevent existential risk occurring a thousand years from now, but we can see some ways in the next twenty or thirty years — which will be in our lifetimes — that we can maybe nudge the probabilities of whether really horrible things happen and maybe even whether humans go extinct." And that seems like something more concrete.
STEFAN: That's a great point, actually, because if we're talking about trying to reduce existential risk hundreds of years from now, that will then in turn also fall prey to these cluelessness concerns. Because who knows what the effect of my actions on the existential risk hundreds of years from now will actually be, whereas, in the shorter timeframe, you can have a little bit better sense of that. That is true.
SPENCER: So going back to the charity effectiveness question, I want to talk just briefly about the paper that you and I co-authored. Do you want to give a little quick summary about that?
STEFAN: Yeah. And please also fill in the details of the genesis of this, because I remember that you ran a startup, but unfortunately I don't remember the details of that. But the question is: what's the difference between the most effective charities and the average charity? That was the question that we asked, and we were looking specifically at global poverty charities. We restricted ourselves to one cause, which is simpler. People might have difficulties even understanding what it means to compare charities working on different causes, but it can be more comprehensible to understand what it means to compare charities working on global poverty. So then we asked what the difference was, like how much more effective the most effective charities were. We asked laypeople, and they thought that the difference was 1.5 to two, if I remember correctly.
SPENCER: Right. So that means if you say to people, "Okay, take an average global poverty charity that's trying to help people in a poor country, and then compare that to what you think the very best (in the whole world) global poverty charity is" — in other words, the one that has the most cost-effective intervention — "how much better do you think that best one is compared to the average," and then they're saying, "Oh, maybe it's 1.5 times better or two times better." They're basically saying it's modestly superior. And from my perspective, that just seems wildly off. I think there's quite a bit of evidence that the very best is probably way, way more effective than the average, which I think you probably agree with as well. Right?
STEFAN: Yeah, exactly. So then we asked a bunch of experts and their immediate estimate was 100. And they didn't agree perfectly, but it's clear that their estimate was that the difference is way greater than laypeople think. So laypeople think that it makes a substantial difference if they find the most effective charity but, if they pick a charity at random, they will still actualize most of the potential of their donation, as it were. Because, on average, they think they will find an average charity, and then they will have 50% or two-thirds of the impact that they could have had. But according to these experts, actually, the impact that you will have if you pick a charity at random will be much, much smaller than you could have had. That means, it's more important to do the research and find the most effective charities than most laypeople think. That could be one reason why people donate ineffectively, because they don't see how important it is to find the most effective charities.
SPENCER: Right. Your view on this question actually dramatically influences the way you should think about charity. Because, if you think the best charity is only a little bit better than the average, you're like, "Eh, maybe it's not even worth investing time trying to figure out the best. Just pick a charity that seems reasonable in that space." If you think the best charity is 100 times better, then rather than doubling the amount you're donating, you should maybe double the amount of time you spend looking for the best charity. You're gonna get a way bigger payoff, and most of your benefit is in finding the right charity. That just seems like a huge difference. I just want to make a side point here, which is that there may be cause areas where (actually) the average charity does zero good, or maybe even slightly negative. And then this question of 'how many times better is the best charity' kind of becomes nonsensical [laughs] because anything divided by zero is infinity. So I'm not even sure that's the best way to think about it — how many times better — but at least in an area where the average charity is still doing some good, then I think it's a sensible way to talk about it.
STEFAN: It's a good point that you're making there also (which we've thought about as well), that a lot of the debate about charity is about how we can get people to donate more. It's about quantity. But increasing the quantity of people's donations doesn't make that huge of a difference. I think Americans now donate 2% of GDP (or something like that) to charity. Suppose that you increase that to 4%. That would probably be unrealistic, but that would still just double their impact, supposing that they would donate to the same charities and the impact would remain the same (maybe a bit unrealistic, but still). But by making people donate to more effective charities, they could increase their impact much more. But still, the whole charity discourse is about quantity, it's not about effectiveness. And most of the research is about how to increase the quantity of donations.
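[The back-of-the-envelope arithmetic in this exchange can be made concrete with a short Python sketch. The numbers are illustrative placeholders, not figures from the paper: the average charity's impact is normalized to 1 unit per dollar, and the best charity is assumed to be 100 times as effective, in line with the experts' rough estimate discussed above.]

```python
# Compare two ways of increasing a donor's impact:
# (a) doubling how much they give to an average charity, vs.
# (b) giving the same amount to a charity ~100x as effective.

donation = 100.0              # dollars given (arbitrary)
avg_impact_per_dollar = 1.0   # average charity's impact, normalized to 1 unit/$
best_multiplier = 100.0       # assumed ratio of best to average effectiveness

baseline = donation * avg_impact_per_dollar                     # give $100 to average charity
doubled_quantity = (2 * donation) * avg_impact_per_dollar       # give $200 to average charity
switched_charity = donation * avg_impact_per_dollar * best_multiplier  # give $100 to best charity

print(baseline)          # 100.0 units of impact
print(doubled_quantity)  # 200.0 units: doubling quantity yields 2x
print(switched_charity)  # 10000.0 units: switching charity yields 100x
```

Under these assumptions, redirecting the same donation is fifty times more valuable than doubling its size, which is the asymmetry Stefan is pointing at.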
SPENCER: Well, also doubling the quantity would be staggering. This number has been pretty consistent for a really long time, as I understand it. I remember when, originally, I was running that study (that you ended up replicating and ended up going in that paper we co-authored), one of the things that I did is I asked people to explain their answers, which I find is a really powerful research tool. If someone's giving you a number, like "Okay, why did you give that number? Why did you think that the best charity is only a bit better than the average?" And one of the things that I recall sticking out to me is, it seemed like a lot of participants never even considered that there might be dramatically different ways of achieving the same end result. So a lot of their focus was like, "Oh, well, maybe this charity can squeeze a little bit of extra efficiency, and that's why they're more cost-effective." It's just more about operational stuff.
SPENCER: But, in practice, a lot of times, what makes a charity so much better is that they're just using a totally different approach. For example, let's say you're trying to prevent malaria (which is a really horrifying disease). Do you give people bed nets? Do you give people medicine to treat malaria after they already have it? Do you try to develop a vaccine for malaria? Do you try to genetically engineer mosquitoes so that they die out? And actually, the difference between those different strategies and effectiveness could just be huge, it could be absolutely massive.
STEFAN: That's absolutely right. One hypothesis that we had (that we discussed in the paper also), is that maybe people are thinking about charities a bit analogously with companies that are selling products to consumers. And there, the prices of products of the same type and quality don't massively vary, right?
SPENCER: Maybe when there's a monopoly, they do, but maybe not.
STEFAN: Yeah, but normally, it's not like you go and buy one kind of toothpaste that's 100 times more expensive than another kind of toothpaste, because the companies that would be selling extremely expensive toothpaste would rapidly go out of business. They would be forced to lower their prices.
SPENCER: Okay, I would bet you that someone sells like [laughs] 100 times more expensive toothpaste that is made with diamond fragments or something.
STEFAN: Yeah, yeah. But then it's not exactly the same, I guess.
SPENCER: Right. It's more like an upscale market.
STEFAN: Yeah, it depends on how you define it, but in a relevant sense, products of the same type usually have the same price.
SPENCER: Basically, I think the intuition is that when you have fierce competition, things tend to get more equalized. You don't tend to find these just massive discrepancies as often.
STEFAN: Exactly. There's an efficient market, more or less, whereas that's not really the case in charity, precisely because donors aren't that effectiveness-driven. So some charities can be less effective than other charities and still go on year after year, because people will continue donating to them. But the average donor won't think much about the structure of the charity market, so they don't consider this, and therefore they fail to see the differences between the for-profit market for consumer goods and the charity market.
SPENCER: Yeah, I think people are very quick to notice the conflicts of interest that companies have. In some cases, a company can make money by causing harm. A classic example would be a manufacturing plant that dumps sewage in the river and makes people sick. And maybe the benefit to society from the manufacturing is even less than the cost to society of dumping sewage in the river. That's clearly a perverse incentive — it's not aligned with society's benefit — and in those cases, you want regulators to say, "No, you can't dump your sewage in the river." Or, if you're a libertarian, you think, "Well, someone should own the river [laughs] and that person who owns the river should be able to sue them," or something like that, depending on your philosophy. But anyway, I think with charities, there's another incentive misalignment that people talk about less, and that incentive misalignment is that the people you're benefiting are not the people you get your money from. So an advantage that companies have is that, usually, the people getting the benefit are the people they're providing the product to, who are also giving them the money. So there's this feedback loop: I provide a product, people like it, they give me money, and I provide more of the product. With charities, you have the donors giving you the money but then the beneficiaries are the ones you're providing the service for, and those can become completely misaligned. Imagine two charities — one spends most of its time investing in making really shiny presentations that impress donors, the other spends most of its time trying to figure out how to really help the beneficiaries — and you wonder which of those is actually going to win. Maybe it's actually the first one that's going to get most of the donations and is going to grow bigger and bigger and bigger, while the second one has trouble raising funds, even though it's really good at helping people.
STEFAN: I agree with that. To some extent, it's also like a pure epistemic problem, if you think about it from the point of view of a consumer and a donor. As a consumer, you know what you like, so you're in a privileged epistemic position to choose consumer products that you actually like and satisfy your preferences, whereas you're in a much, much worse epistemic position when it comes to how to effectively help beneficiaries. I think that's the purely epistemic reason why donations can sometimes be less effective than consumption decisions.
SPENCER: That's a great point. If you're buying a product, usually you can tell, is it helpful to me? Do I like it? Of course, that's not always true. Maybe you go buy some supplement, and it claims it's gonna make you live longer and there's really no way to tell. And I think that's actually a great example where the incentive misalignment between companies and consumers can occur, when the consumer actually can't tell if they're benefiting, and I think there are a lot of kind of scammy products in that way where consumers can't tell. But mostly, you at least have some idea of whether the product is helping you, whereas, if you're a donor to a charity, how on earth are you going to tell if it's really helping the recipients? Groups like GiveWell do a massive amount of research to try to figure this out and it's incredibly thorny to figure out. And you as just a casual donor, you're reading the PowerPoint presentation or seeing the pitch from the charity, it just doesn't seem that likely that you're gonna be able to tell.
STEFAN: Yeah, yeah.
SPENCER: If we think of the philosophy of utilitarianism — we're trying to maximize the benefit to society, or maximize the total happiness or total good — I think you have the perspective that this actually implies we need to think really carefully. Even though utilitarianism as a philosophy doesn't really have anything to say about having good epistemics or thinking really clearly, maybe there's some automatic implication there. I'd love to hear your thoughts on that.
STEFAN: This is something that's been on my mind a bit. When people discuss utilitarianism casually, they often associate it with (for instance) the trolley problem, where there's a trolley coming along some tracks and it's threatening to kill five people, but then you can switch the tracks and then one person would die and, in some sense, you would kill them. Should you do that? Well, some people say that you can't kill anyone, no matter the consequences, whereas utilitarians say that you should switch because, that way, more people get saved. Much of the discussion about utilitarianism is about such abstract thought experiments. It's also about potentially radical demands on our material resources, that we should give up a vast proportion of our resources. But when we're actually thinking about how to apply utilitarianism in the real world, it might be that there are other things that are very important — with the epistemics being maybe the one that I'm thinking about the most — that there are all these different opportunities to do good, and some are vastly more effective than others. They're very hard to find, but they're possible to find. So it becomes extremely important to find them. And to do that, you need to have good epistemics, this truth-seeking attitude, or 'the scout mindset,' as Julia Galef puts it.
SPENCER: Yeah. I think a really interesting thought experiment is to imagine a world where people's values were total utilitarian values, like all they cared about was maximizing the total well-being of all society. So that would mean a world where there's no selfishness, everyone just wants to help as much as possible. But add an additional constraint that people have the same epistemic norms and differences as they do today. What would that world look like? I think it's an interesting thought experiment because you start to wonder: even if everyone was totally altruistic and trying to maximize the benefit, would they even agree with each other at all on how to do that? And would you end up in a situation where there are all these different tribes being like, "This is the one true way to maximize utility. Your way is totally wrong, and you're going to lead to less utility," or, "We need to destroy the world because that's actually the best way to improve utility," or whatever. I'm just curious to hear your reaction to that.
STEFAN: That is interesting. I certainly think that there will be some people who would not be very helpful, or potentially even harmful, even though they were genuinely trying to maximize well-being, impartially considered.
SPENCER: If we had the same epistemic norms, then it could just be this thing where people do a lot of things that seem good on the surface level and try to maximize those, but they might have negative second-order or third-order consequences that are not being considered. So I agree with you that, to try to improve the world in a utilitarian way, you actually have to be able to think very carefully about the world. And this is just an empirical fact that the world is ridiculously complex and the ways to improve it turn out to be really hard to figure out. I recently did a blog post about this, riffing on Julia Galef's idea of 'scout mindset' and 'soldier mindset' (which I've talked about in a previous podcast episode with her). I call it scout altruists and soldier altruists, where a scout altruist is someone who thinks that it's actually really hard to figure out how to improve the world and so, a lot of your time trying to improve the world needs to be in thought and studying the evidence and carefully considering, just to figure out what to do to make things better. Whereas, the soldier altruists think, "Oh, it's actually quite obvious how to make things better. We just need to throw more people, more money at those obvious solutions. And if they're not being done, it's just because people are too selfish, or there's not enough money going to it, or because people are buying into propaganda, but it's obvious how to make the world better."
STEFAN: Yeah, I think that's a good distinction and I certainly agree with the implicit premise here that lots of people are soldier altruists rather than scout altruists. And it's important to change perspective there.
SPENCER: Yeah, I think I'm seven out of ten on the 'scout altruists rather than soldier altruist' side, maybe eight out of ten. I basically think that people way underestimate how hard a problem it is to improve the world. And the world is so complex that, often, what seems at first glance to be good is, on further inspection, actually not that good. Where would you say you fall on that?
STEFAN: It's hard to evaluate. To some extent, I'm very much in favor of this scout altruism. I'm almost thinking that I'm in favor of it in a bit of a soldier way [laughs], at this sort of meta level. One thing I wanted to say also, with this point about utilitarians more generally, is that people speculate about what utilitarians would be like, and they point to abstract thought experiments — and there's also a literary tradition of Dickens and others debating utilitarianism, often criticizing it using fictional examples — rather than looking at what utilitarians actually do in the real world. I think that there are many utilitarians in the effective altruism movement. Effective altruism isn't identical to utilitarianism, but utilitarianism is certainly compatible with effective altruism. And those people in the effective altruism movement, they are empirically celebrating the scout mindset, truth-seeking, and good epistemic norms a lot; that's very much what they focus on. Empirically, those seem to be the things that utilitarians focus on, rather than the weird kinds of instrumental harm for the greater good. And not even extreme self-sacrifice is something that is practiced that much (I think) among utilitarians in the effective altruism community.
SPENCER: That's really interesting, using this idea of looking at what utilitarians do in practice and letting that tell us something about what utilitarianism is really like. But there's another position, which says that, actually, the people in the effective altruism community who call themselves utilitarian are actually less utilitarian than they think. And in fact, they're doing a lot of virtue ethics and they're just in denial about it. I've noticed that a lot of people in the effective altruism community are actually very polite and really against lying, and seem to have an almost impulsive 'you should never lie' kind of reaction that, to me, actually [laughs] smells a lot like virtue ethics. So I'm curious to hear your perspective on that.
STEFAN: Yeah, it becomes a bit fuzzy how to determine this debate in the end. At first pass, I guess the burden of proof is on those who say, "Well, those people who say they're utilitarians, they actually aren't." It seems like the first pass should be that we accept that these people — they are also very reflective people — thought a lot and then came to the conclusion that they're utilitarians, and they're behaving like this in practice.
SPENCER: Well, maybe they're just aspiring utilitarians that are actually virtue ethicists trying to become utilitarian.
STEFAN: The other thing is, well, it seems to be working pretty well. I guess maybe that criticism would be stronger if one could say that, "Well, if you would have dropped these seemingly virtue-ethical actions and acted in some other way, then they would have been more effective." But I'm not sure whether that's true. Personally, I actually think that there is a case for being virtuous from a utilitarian point of view.
SPENCER: There certainly is, I completely agree. That being said, when I see people make arguments about why you should be 'virtue-ethics-y' from a utilitarian perspective, I do wonder, is the cart leading the horse, or is the horse leading the cart? Are you actually considering, from first principles, whether utilitarianism leads to that conclusion? Or do you just really like the idea of truthfulness and now you're kind of backfilling? Which I personally am not opposed to. I don't identify as utilitarian personally so I don't have a problem with that, or at least I try to take an empirical psychological perspective on what's happening in these situations, which is maybe a little bit different than what a lot of utilitarians would take.
STEFAN: I think it's a great point. There certainly could be something like that going on. I think there are actually other historical examples of that, of utilitarians arguing that, "Well, in practice, utilitarianism just says that you should behave like any old decent person would behave anyway," so common sense ethics isn't very threatened by utilitarianism at all. Because, if you do all the calculations in the right way — not in a naive way where you forget about some considerations, that's when you end up with these absurd conclusions that you're engaging in instrumental harm, and so on — but if you do all the calculations in a sophisticated way, then you actually end up with common sense ethics, more or less. Some people have argued that and that could very well be seen as them doing what you're saying, that they want to arrive at that conclusion and that's why they do it. And I think that's not right. I think actually that, even though utilitarianism is converging with common sense ethics in some ways — regarding lying and stealing and stuff like that — there are other ways in which it radically departs from common sense ethics, like that we should help people in the distant future. But also, I think it requires us to be more truth-seeking than common sense ethics requires. Common sense ethics actually doesn't require us to have this very persistent truth-seeking attitude (I would say), whereas I think that utilitarianism does.
SPENCER: If you want to maximize benefit to society, you have to understand a lot of things about the way the world works, whereas common sense ethics is more local and it's more about what you actually end up interacting with, which we automatically tend to learn a lot about. We tend to learn about the people we interact with day-to-day, and we know a lot about our families and how to be nice to them, and all kinds of things like that, right?
STEFAN: Yeah, exactly. So that's one part of it, and another is these amazing opportunities to multiply your impact, and the only way to find them is by having this truth-seeking attitude. But then the third one (which I haven't spoken so much about) is that — and here, I'd be interested in your views — I think that it is possible to improve this truth-seeking attitude and acquire more of a scout mindset, to some extent, not fully, and it depends a bit on who you are. But it is possible to some extent, and maybe the psychological obstacles aren't as severe when it comes to epistemics as when it comes to sacrificing our material resources, or being extremely impartial in general. For instance, our instinct to help our family and prioritize them over strangers, that's arguably something that's very strong, that might be impossible to change. So such things, maybe we should just leave them as they are, whereas our epistemics might be easier to improve on.
SPENCER: Presumably, people 50,000 years ago that valued their family exactly equally to how they valued a random stranger they encountered in the woods would probably not have fared very well, evolutionarily. So you can imagine why we have a very strong instinct to help family members compared to others. They share our genes and a gene that codes for helping family is going to tend to survive, because your family members are also going to have that gene, and they're gonna help their family members who also have the gene and so on.
STEFAN: Right, yeah, exactly.
SPENCER: Also the complete egalitarianism — where you treat all beings as equally important — it also seems very exploitable, just in a game theory sense. If you have a bunch of agents that are like, "Oh, I care about all beings equally, no matter who they are," and then you have other agents that are like, "I care about myself," it does seem like the first group can lose in game theory. I am curious to hear your thoughts on that?
STEFAN: That's interesting. I've never really thought about that so much. I guess I feel a bit that, in these kinds of competitions, actually utilitarianism has an advantage. Let's just put aside the issue of whether utilitarianism is correct or not, and just look at it from a descriptive perspective. So you have one philosophy or ideology which has this maximizing mindset (which is utilitarianism), and then you have all the other ideologies which are not maximizing in the same way, they're not as focused in that way. Then you might expect this more maximizing mindset to take over more and more with time because they will just be more focused. For instance, they will acquire lots of resources, not because they want to spend them on themselves, but because resources are useful in order to spread their ideology. And also they will want to recruit people, so they will do so and they will do it very effectively because they have this maximizing mindset. In that sense, I think that utilitarianism has this inbuilt property that it could grow and spread.
SPENCER: It reminds me of arguments around super-intelligence. If you were gonna build a super-intelligent AI, and program it to do something difficult — like let's say, invent new cancer cures — and it really is that intelligent, it might realize things like, "Oh, wait, if I had access to a lot more money, it would help me invent new cancer cures," and then it might instrumentally go try to get money, not because it cares about money, but because it knows that that will help it with its goal. Or it might decide, "Hey, if I made lots of copies of myself and spread myself all over the world, maybe that would help me achieve my goal of curing cancer because then all the different copies could play a part in this," and so it might start making copies of itself and so on. And so the way you described it, the effective altruism community is like, even if your goal is just maximize benefits to the world, you start getting all these instrumental goals, like, "Oh, we want to grow because that's gonna give us more ability to positively impact the world and we want to have resources because that's gonna give more impact" and so on, all these instrumental goals arise from that. It's like a collective of humans acting as a mini super-intelligence, way smarter than any one human. But there's a tension here because, imagine a pure game theory scenario where you have a simple little world with a hundred people in it, 50 of them are pure altruists — they care about everyone equally — and 50 of them are selfish. My prediction is, the altruists would get crushed in that little world because they'd get exploited. But then imagine, in the real world, you have a community of effective altruists all trying to improve the world, and it does seem like they're able to coordinate really well, and all these instrumental goals arise, and they're able to work together on those and so on. I wonder what's actually protecting a community that's extremely altruistic like that from being exploited?
STEFAN: I think you're right. Sure, they're sacrificing some resources to the outside world (because that's part of the mission), which the selfish agents aren't. But I would say the benefits from coordination and from having this very dedicated mindset are larger, so, actually, in the real world, this group of altruists would out-compete the selfish agents. That would be my hunch.
SPENCER: It seems like coordination there is key, that there has to be a way of all the altruists really working together to prevent exploitation, right?
STEFAN: Yeah, there's just a bunch of benefits from coordination, division of labor, specialization, and dissemination of information and whatnot. This is another thing that I think effective altruists are doing very well. They built this community, and there is good community spirit and people are cooperative within it and there's a virtue of collaboration within the effective altruism community. That's also something that is a consequence of utilitarian and consequentialist thinking. It's not built into it; like truth-seeking or the scout mindset, it's a consequence. If you just look at utilitarianism in a philosophy book and look at some thought experiments, you're not going to think, "Well, utilitarians are going to be very truth-seeking or they're going to be very collaborative," because that's not there in the abstract thought experiments. But when you go to the real world and look at how it goes on empirically, then those things apparently are very important.
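[Editor's note: Spencer's hundred-person thought experiment and Stefan's hunch about coordination can be sketched as a toy simulation. Everything below — the payoff numbers, the matching rule, the function name — is invented purely for illustration, not anything the speakers specified. The model: altruists give away resources; a gift to a fellow cooperator generates a small surplus (cooperation is positive-sum); "coordination" is modeled as altruists preferentially interacting with each other.]

```python
import random

def simulate(assortative, n=100, rounds=10000, seed=0):
    """Toy model: agents 0..49 are altruists who give resources away;
    agents 50..99 are selfish and never give anything."""
    rng = random.Random(seed)
    wealth = [100.0] * n
    is_altruist = [i < n // 2 for i in range(n)]
    for _ in range(rounds):
        a = rng.randrange(n)
        # Coordinated altruists preferentially interact with each other;
        # otherwise, partners are matched at random across the population.
        if assortative and is_altruist[a]:
            b = rng.randrange(n // 2)
        else:
            b = rng.randrange(n)
        if a == b or not is_altruist[a]:
            continue  # selfish agents never transfer anything
        wealth[a] -= 1.0  # the altruist gives one unit away
        # A gift to a fellow cooperator creates a small surplus,
        # capturing the idea that cooperation is positive-sum.
        surplus = 0.5 if is_altruist[b] else 0.0
        wealth[b] += 1.0 + surplus
    half = n // 2
    return sum(wealth[:half]) / half, sum(wealth[half:]) / half
```

Under random mixing, roughly half of every gift leaks to selfish agents who never reciprocate, so the altruists end up poorer than the selfish — Spencer's "crushed" prediction. Under assortative mixing, the gifts (and the surplus) circulate among the altruists, and they come out ahead — matching Stefan's hunch that coordination is what protects an altruistic community from exploitation.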
SPENCER: What do you think about some of the other norms in the effective altruism community? For example, things around openness to strange ideas and that kind of thing?
STEFAN: I guess that is like particular epistemic norms that would fall under this general truth-seeking, scout mindset heading. That would be a particularly important one, for sure, because it just seems that many of the most impactful things you can do are weird and strange to many people. And that's exactly why they are so impactful (or partly why), because most people neglect them and, therefore, there are great opportunities to do good if you focus on them. AI risk would be one example, but more generally, helping people in the distant future is also an example of that.
SPENCER: Maybe you could say, insofar as the best things you can do to improve the world are what people would naturally do anyway, there's maybe a lot less opportunity there. Those things are gonna get filled up anyway. And if we assume there's diminishing marginal returns — the more you put into most things, the less impactful on the margin they'll be — then the remaining really good opportunities that aren't already full (where your dollar still goes far and doesn't hit a lot of diminishing marginal returns) are probably going to be somewhat weird.
STEFAN: I think that's a good norm. That said, it's also the case that we live in a world where that's not how people in general think, and you might need to be a bit careful sometimes, so as not to unnecessarily offend or provoke people. I think it's a bit of a balance; you should be open to new ideas and you should be honestly discussing them. But at the same time, if you overdo it, then people might turn against effective altruism, which might actually even harm this norm of openness.
SPENCER: It's an interesting thing that can happen with any community that spends a lot of time thinking and talking together, which is that their ideas that are more unusual get normalized within the community because a lot of people in the community believe them. And then they stop seeming weird to the community and then the community can build on the weird idea to go even weirder and weirder and can get a misimpression of how strange they come across to everyone else. If you ever watch a documentary on a cult, it's kind of amazing how, ten years into the cult, you can't even understand the people. They're just using a new language that you don't understand. But to them, it all makes sense.
STEFAN: That's absolutely right. I know some people are sometimes a bit critical of the effective altruism community, including people within the effective altruism community, like, "We're talking too much among ourselves and not listening enough to outsiders." I'm sure there are examples of that. But by and large, I think we're quite alert to that and relatively good at listening to critics who actually have good criticism. And sometimes I actually think that there's a bit of a tendency of someone coming with some criticism which actually isn't that on point. But then, people are so keen on being open-minded and having this attitude that you should be open to criticism, that maybe they even take criticism more seriously than it deserves.
SPENCER: Yeah, when you have a community where you gain social points by both showing that you're really intelligent but also critiquing things and showing you're open-minded, it can create a kind of funny norm. I think an interesting example is the rationalist community, and people will critique the rationalist community saying, "Oh, they're not that rational," and they'll find examples where the rationalist community is not that rational. But then if you compare them to other communities, it still seems that they're doing a really good job. It's just that they open themselves up to extra critique by being called the rationalists, you know what I mean? Deviations from rationality are going to be really harshly criticized. And I think maybe the same thing happens with the effective altruism community. Because it's called the effective altruism community, any kind of deviation from effectiveness is going to be super harshly criticized. Even when you're being fair and you compare them to other communities, you're like, "Yeah, they seem like they're doing a pretty darn good job at really being open-minded about how to do the most good and changing their minds," and all these different things.
STEFAN: I think that's a good point and that definitely happens. There are these totally different criteria that are being applied to people who are trying to persistently be effective or rational compared with other people, and that can be an unfair comparison. But then, the other point which I tried to make is that people within the effective altruism community, in particular, are sometimes taking outside criticism more seriously than it deserves. I think it's better that you go too far in that direction. Obviously, like you said, the default is that people don't take outside criticism seriously at all, and that's also very bad. But sometimes, I think it goes too far in the other direction.
SPENCER: Yeah. Well, do you think the norm of internal self-critique actually could be too strong? Or do you think that it should be made even stronger? I can see both sides. Of course, you want a lot of internal self-critique, you want to find the flaws in the system. But you can also see it going overboard. I've heard people express worry about posting on the effective altruism forum because they're like, "I think if I post my thing there, a whole bunch of people are gonna attack me and tell me my thing's not effective enough. Maybe I should just not post?"
STEFAN: That's a good question, and it's hard. Sometimes I feel that it depends a bit on what we're talking about. If it's something that concerns the whole effective altruism community and someone is doing something on behalf of the whole community, then for sure, they should be critiqued if they're doing something wrong. Sometimes there could be a bit of a sense that someone is running a project or just doing something, and then people come with this unsolicited criticism like, "Well, you should do this," and "You should do that." And there, it can feel a bit like some kind of naive consequentialism. Because on a utilitarian view, for instance, it doesn't make any sense to say, "Well, this is my project; you can't come here and say what I should do." From this utilitarian perspective, it's all about maximizing well-being and there's no ownership of projects. You just try to improve things wherever you can. But I think human psychology is obviously not utilitarian, and that includes the psychology of people who call themselves utilitarians. They will, in practice, have the sense of, "Well, this is my project that I have ownership of." So if you come and give all this unsolicited criticism, they will be a bit pissed off, right? Sometimes that could be the price that you should pay. But in some cases, I think maybe it's better if people keep a bit to their own things and we have this standard human division of labor.
SPENCER: There are these two mindsets (I feel like) for evaluating projects. Both can be really useful but are somewhat at odds. One of the mindsets says, "Okay, someone's trying something new." Let's say someone's starting a new startup, and the mindset says, "Well, almost all these new things fail, or they're not as good as the existing things. Maybe 90% of startups fail, so I should be skeptical, I should be cynical, I should be pessimistic about the outcome of it." And similarly, if someone's pitching some new idea in the effective altruism community, what's the chance that it's really better than all the other things out there? And you should have skepticism. On the other hand, there's this opposite mindset, which says, "Yeah, but also, that's where the good stuff comes from, these new ideas. A lot of times, new ideas take time to become honed and improve, but maybe there's something interesting there." Imagine you're a startup investor and every time you get pitched a startup, you just say, "Well, most startups fail so this one's probably going to fail." I mean, you have to try to see the best in the idea and you have to see the future potential that it could have. Even though now it's just a prototype or an idea, maybe in five years, it could be something really interesting. That mindset says that you want to nourish these little buds and try to see their full potential and view them in the most positive light to see how good they could be. And I do wonder sometimes if EA's leaned too far into the first thing and can be a little bit discouraging because they shoot things down so quickly without first trying to see what's the best thing about it, what's the best that could be, before tearing it apart. What do you think about that?
STEFAN: Yeah, this is one of these issues where I think you're much more of an expert than I am, because you've run so many projects and you've thought a lot about this. I think there's certainly merit to both mindsets, because sometimes there is excessive optimism among startup founders, both outside the EA community and within it. Sometimes I do think it's useful to say, "Well, this project falls into this reference class of projects that basically don't make it," so you should consider that. But yeah, I can see that it can go too far. I don't know if you think that there is some way of combining the two. I think I read some posts on the Effective Altruism Forum about supportive skepticism, or something like that, where you would have the skeptical attitude which comes from the first of the mindsets that you listed but then also the supportive attitude of the second mindset. It's a question of how combinable those two things are, but maybe that could be an option.
SPENCER: Yeah, usually, when people come to me with ideas, if they're looking for feedback, I view my role not as telling them that their idea is terrible or telling them it's amazing, but rather as taking what they give me and saying, "Hmm, in what direction do I think this could be nudged to increase its value?" And I feel like that's usually the most useful place to be. Of course, not everyone wants feedback at all. But if they do want feedback, it's like, "Oh, okay, what are the coolest elements about this that actually seem most promising and most beneficial?" And then how could it be nudged in a direction that makes it more so, as opposed to shooting it down and telling them that it's probably going to fail, which, to me, usually seems not that helpful. But there's a role for that, too. Obviously, some things just are bad ideas and are harmful. And I agree with your point that, if it's something that represents a community as a whole, the bar needs to be higher, because it could have a lot of second-order effects that could be really bad. Stefan, thanks so much for coming on. This was really fun.
STEFAN: Yeah, this has been great. Thanks so much, Spencer.