March 17, 2022
How does GiveWell's approach to charity differ from other charitable organizations? Why does GiveWell list such a small number of recommended charities? How does GiveWell handle the fact that different moral frameworks measure causes differently? Why has GiveWell increased its preference for health-related causes over time? How does GiveWell weight QALYs and DALYs? How much does GiveWell rely on a priori moral philosophy versus people's actual moral intuitions? Why does GiveWell have such low levels of confidence in some of its most highly-recommended charities or interventions? What should someone do if they want to be more confident that their giving is actually having a positive impact? Why do expected values usually tend to drop as more information is gathered? How does GiveWell think about second-order effects? How much good does the median charity do? Why is it so hard to determine how impactful charities are? Many charities report on the effectiveness of individual projects, but why don't more of them report on their effectiveness overall as an organization? Venture capitalists often diversify their portfolios as much as possible because they know that, even though most startups will fail, one unicorn can repay their investments many times over; so, in a similar way, why doesn't GiveWell fund as many projects as possible rather than focusing on a few high performers? Why doesn't GiveWell recommend more animal charities? Does quantification sometimes go too far?
Elie Hassenfeld co-founded GiveWell in 2007 and currently serves as its CEO. He is responsible for setting GiveWell's strategic vision and has grown the organization into a leading funder in global health and poverty alleviation, directing over $500 million annually to high-impact giving opportunities. Since 2007, GiveWell has directed more than $1 billion to outstanding charities. Elie co-led the development of GiveWell's research methodology and guides the research team's agenda. He has also worked closely with donors to help them define their giving strategies and invest toward them. Prior to founding GiveWell, Elie worked in the hedge fund industry. He graduated from Columbia University in 2004 with a B.A. in religion.
JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Elie Hassenfeld about assessment strategies for charities, evaluating effective giving outcomes, and why intervention programs fail. Just so you know, we have once again included links in the show notes if you'd like to donate to organizations that are helping the people of Ukraine. Now, here's the conversation between Spencer and Elie.
SPENCER: Elie, welcome.
ELIE: Hey, Spencer, it's great to be here.
SPENCER: I think almost everyone in our audience has heard of GiveWell, which is the organization that you founded that helps figure out which charities are highly effective. I just want to start with a very brief intro to what GiveWell does (we'll blow through that quickly, since most people are familiar with it), and then I want to dig into a number of interesting topics related to how do you evaluate the effectiveness of charities, how do you figure out how to do good in the world, how do you gather evidence that I think our audience will find interesting, even if they've heard all about GiveWell before. With that, why don't you just start off, give us a quick intro, what is GiveWell, what's your mission with it?
ELIE: Yeah, so GiveWell is the organization I co-founded, and we do research on giving opportunities in low and middle income countries — countries that are poorer than the United States and other wealthier countries around the world — and then direct funding to organizations that will, in our opinion, help people as much as possible. When GiveWell started, we functioned more as a website that people came to, using our research to decide where they would give — we still have the website, because there's a lot of information there for anyone who's interested — but we now function largely as a grantmaker, which means that by and large donors are giving us money and asking us to direct it for them, or asking us for advice, and we're giving them recommendations about where they should direct their money in low and middle income countries.
SPENCER: Great. I think one thing that really sets GiveWell apart from other charity evaluators — people might have heard of Charity Navigator, or GuideStar, things like that — is that rather than trying to give a rating to some huge number of charities, you're really trying to focus on a small number that you think are very likely to be extremely impactful, where we can be reasonably confident in that impact, rather than rating everything. Do you want to just comment on that sort of approach? And how does that differ from what else is out there?
ELIE: Yeah, I think there's a couple of ways that GiveWell is very different. The first is what you're describing. To make it concrete, last year, we raised about $500 million, and our mission is to try and determine how to give away that $500 million in the way that will help people as much as possible. Ultimately, that meant that we directed those funds — I don't know the exact number, but to something like 20 organizations rather than to very many. The majority of those funds went to a very small number of organizations working in areas like malaria and other child health programs that we believe will use that money to do a great deal of good. We are not trying to be a database of charities; we're not trying to offer an objective or a fair rating to every charity that exists in the world, or even every charity that works internationally. Instead, we're trying to figure out how to get that money to the places that will do the most. To that end, we're focused on a few criteria: we're focused on organizations that either have or will generate significant evidence demonstrating that their programs are working, and we're assessing the cost-effectiveness of those programs. We're asking, if this charity receives X dollars, how much good will it do in terms of lives saved or improved? And we're very focused on how organizations will use additional money, meaning we're not just asking, on average over the past five years, how much good did this organization do with the money it received, but instead, how much good do we think it will do with a marginal influx of donations?
SPENCER: I remember reading a blog post from someone in the sort of more traditional charity world about how they [laughs] first found GiveWell. It was pretty hilarious, because they were talking about how they discovered you all. They were super excited by your approach, and then they went to look at your list of recommended charities and almost fell off their seat, because they couldn't believe, after years and years of operating, how few charities were listed as your recommended ones (that just blew their mind). What's your thought on that? People might think, you've spent so many years analyzing charities, why aren't there way more that you're finding that are cost-effective?
ELIE: We're trying to find the places to put money that will do the most. For example, malaria is a disease that kills a huge number of people every year (more than 500,000), and there are really cheap, effective commodity solutions — malaria nets, preventive medicine, and now a vaccine — that can prevent a great number of malaria cases and the resulting deaths. But nonetheless, it remains a huge burden of disease because there's insufficient funding. The question we're trying to answer is, "How do we use money to accomplish the most good?" And when we look at malaria, we see a place where additional dollars can do a lot of good — we don't really care [laughs] if there's 20 other things we could do — we're fine recommending funding, or a lot of funding, to malaria, as long as the dollars that will go there are going to save and improve more lives than those dollars would going anywhere else. So to be clear, we don't only recommend malaria, but there's nothing about our model that would require us, or would make us prefer, to have more recommendations for the sake of it. Instead, it's about where charitable dollars will have the highest return on dollars donated.
SPENCER: Wouldn't it still be valuable, though, to have more — if you could find more that are about on par — because that would just increase the capacity for the amount of money you can move? You wouldn't hit these capacity limits.
ELIE: Other things equal, I think there's a lot of reasons that breadth would be better. You're pointing to one, which is one of the things we wrote about at the end of last year: we felt like it was a struggle for us to find enough of what we call room for more funding — basically, programs that can absorb as much money as we would want to direct there. We're running up against some of those capacity limits, and we need to find more things. Breadth could be one way of addressing the capacity constraint; another is to just find a small number of programs that are really large. For example, we would be indifferent between 10 programs, each of which could absorb $500 million, and one program that could absorb $5 billion — those are equivalent to us. There's another reason breadth can be helpful. One of the really challenging questions that we grapple with is something we refer to as moral weights: how do you weigh the good accomplished by increasing someone's income against the good accomplished by averting a death? We take a particular perspective on what the exchange rate is between those two outcomes, as best we can. But if donors have very different opinions, they might want to have more options available to them, because if someone values income much more highly than GiveWell does, then they would significantly prefer some option that we haven't recommended to them. So more breadth could increase capacity, and more breadth could also give donors the opportunity to make their own judgments when they have different opinions based on underlying philosophical values.
SPENCER: Yeah, I'm imagining now kind of a two-axis system (like a chart) where, let's say, on the x axis is the number of lives saved per dollar, and on the y axis is, let's say, the amount of wellbeing for people alive per dollar, something like this. You could imagine there could be this efficient frontier of charities, where on the efficient frontier, you can move to increase the number of lives saved but sacrifice some well-being, or you can go the other way. You could have reasonable people disagree, because it's sort of totally unclear which is better — it's really a philosophical consideration. On the other hand, a lot of charities just don't fall on the efficient frontier at all, right? For a lot of charities, you could actually just get strictly more of what they're doing for the same amount of money — if they're saving a certain number of lives and increasing well-being a certain amount, you could just save more lives and increase well-being more without sacrificing anything. I'm wondering, is this kind of how you think about it?
ELIE: We use a threshold, and the way that we think about our threshold is in terms of multiples of the impact one could have by just giving cash to people with very low incomes, and then we'll talk about the opportunities we're giving to as being some multiple of that threshold. One of the organizations we've looked at a lot over the years is GiveDirectly, a group that delivers cash to very poor people; that would be 1x cash, because you're giving cash. A lot of the malaria recommendations we made, we believe, are around 10x cash — about 10 times as much good accomplished, using the sort of exchange rate between income or consumption increases and deaths averted that we've used in our framework. I think, to your point, if someone were to show up and just say, "I actually value income much more highly," that can lead them to weigh income increases much more highly than we have, and prefer something else relative to a malaria recommendation.
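The multiples-of-cash comparison described here can be sketched in a few lines of Python. Every number below is hypothetical, chosen only to illustrate the arithmetic; the exchange rate, cash-transfer figure, and program cost are not GiveWell's actual estimates.

```python
# Hypothetical sketch: collapsing income gains and deaths averted into a
# single "multiple of cash" figure via a moral-weights exchange rate.

# Assumed exchange rate: averting one death is valued the same as 100
# consumption doublings (a made-up number for illustration).
DEATH_IN_CONSUMPTION_DOUBLINGS = 100

def cash_multiple(doublings_per_dollar_cash,
                  doublings_per_dollar_program,
                  deaths_averted_per_dollar_program):
    """Program value per dollar, expressed as a multiple of direct cash."""
    program_value = (doublings_per_dollar_program
                     + deaths_averted_per_dollar_program * DEATH_IN_CONSUMPTION_DOUBLINGS)
    return program_value / doublings_per_dollar_cash

# Cash transfers: 0.003 consumption doublings per dollar (hypothetical).
# A malaria program: negligible income effect, one death averted per $3,500:
print(round(cash_multiple(0.003, 0.0, 1 / 3500), 1))  # 9.5
```

With these made-up inputs, the program comes out at roughly 10x cash, which is the ballpark mentioned in the conversation; a donor who valued income more (a smaller `DEATH_IN_CONSUMPTION_DOUBLINGS`) would get a smaller multiple.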
SPENCER: Let's go back to these kinds of more intrinsic goods — like saving a life (preventing a death that otherwise would have occurred) versus increasing the well-being of a living person, right? There's a philosophical debate about how valuable each of those is, and it sounded like you were saying that there's some kind of internal exchange rate used between the two to get your final number. Do you want to comment on how you actually navigate that?
ELIE: Like, how do we come up with the exchange rate that we'll use between the two?
SPENCER: Yeah. So, like, giving people cash: there's some chance that it saves people's lives, maybe it saves the lives of their children because they can afford medicine, but it also increases well-being, presumably. So you're getting these two different benefits in different amounts versus, let's say, giving people bed nets, where there's a different amount of life saved and well-being increased. Essentially, you have these two fundamental goods. I was wondering, how do you actually trade them off against each other in practice?
ELIE: Yeah. There's several things that we've done — I think the most important point to make is that this is not a question where we feel like we have the right answer, quote, unquote, or where one even could have the right answer, because it relies on unanswerable questions — but the types of things that we've done are the following. To try and come up with some sense of "how many dollars?" or "how much of an increase in consumption is worth averting a death?", we've looked at academic literature that tries to estimate the value of a statistical life, asking, "What cost of regulation are governments willing to undertake for the sake of saving a life?" You can look at data along those lines; that's one category of information. Another category is that we've worked with external research organizations to survey people living in low income countries — Kenya, Ghana — and ask them, "How would you trade off between these two things? How would you trade off between the potential loss of a child and some increase in income?" These are really hard questions to ask and really hard questions to answer, but it's another one of the inputs into how we're trying to arrive at this exchange rate. A third thing we've done is survey people in our donor community and basically say, "You are deciding where to give your money; let's try to get an aggregate of how you see this question and what you would do." Finally, we're trying to do some of the empirical analysis that you're gesturing at, saying, "If someone were to receive cash, how do we think that would play out in practice? How much would go to children's education or food or medicine? And what would that lead to long term?" Then we're essentially trying to put all of that together into something that is like an exchange rate between the two things.
SPENCER: It's super interesting, because there's [laughs] this fundamental problem in philosophy that's been debated for thousands of years (what is the good, right?), and you have to actually make decisions about it involving hundreds of millions of dollars. I don't envy you that [laughs] part of your job. That just sounds incredibly difficult. It's fascinating that you actually ask people things like, "How much would you be willing to risk your life in order to get this much other benefit?" And essentially, that's almost pushing it into a kind of preference utilitarianism frame: instead of asking, "How much should we value dying versus wellbeing?", you ask people how they value it and base it on that. I'm wondering, is the idea to satisfy people's preferences? Are you thinking of that explicitly? Or do you have a different way of looking at it?
ELIE: I think it's an important input. We're trying to improve the lives of people in low income countries — like you said, we're faced with the choice of, to what extent do we choose to support their incomes and ability to buy the things they want, versus provide health commodities that prevent illness and death? [laughs] It's a really important input: what do they want? What would the people we're trying to help prefer? We want to take that into account. We haven't just taken the literal results of the preference surveys we've done and plugged them all the way through, for a couple of reasons, but one of the most basic is that I'm not even sure the results we got are high quality enough, or would be replicated if we were to try again, for us to want to take the quantitative outputs of those surveys at face value. We've written all about that on our website, and people who are interested can go take a look and dig into it. But I definitely think there's no doubt that the preferences of the people we're trying to help are an incredibly important input into the question of how we should trade off between the different goods.
SPENCER: It sounds like, right now, it's kind of a synthesis: you're using a bunch of different techniques and trying to put them together into something sort of mostly coherent? [laughs] And using that?
ELIE: Yeah, exactly. Right. It is trying to take a look at this problem from a lot of different angles, recognizing that we're not going to get the right answer; there is no right answer. Then, over time, we keep hearing from other people about what we're missing and how we need to update, so we can keep moving in a better direction on how to make this trade-off.
SPENCER: I think if I were you, I would be tempted to assign different buckets of the portfolio to different moral theories, since it's so hard to know which ones to buy into. Like, okay, this part of the portfolio is going to be straight-up hedonic utilitarianism; for another bucket, we're going to use more of a preference utilitarian frame, or whatever. [laughs] Just do it that way. I'm wondering, have you ever considered that approach?
ELIE: We've thought about it a little bit — Open Philanthropy, which is an organization that initially started within GiveWell and is now an independent organization, has talked about its own work on worldview diversification, and it does something that's somewhat like this — but I think for us, there are two big obstacles that have prevented us from going down this path with GiveWell's work today. The first is, I think it's just hard to draw clean lines around these buckets in ways that we would find satisfying. One could say, "Let's draw the line around the health bucket and the income bucket; that's how we framed it," but it's not clear to us where those lines would end, and we didn't love that path. The second reason is that, despite the challenge of answering these questions, we do have a belief. One move that GiveWell has made pretty strongly over the last 10 years is slowly moving towards valuing health more and more over time. The move towards valuing health more has come from pretty much all of the different inputs that I mentioned before. As we move towards valuing health more than we did previously, we really believe we should be directing more money to health-oriented programs, because we believe they are accomplishing more good, and creating the constraint of having to allocate X amount of money to a bucket that's inconsistent with what we think would be the best way to allocate money means we're just leaving utility on the table.
SPENCER: Could you explain a little more about your change in thinking around health? And why do you think health interventions are more effective now than you used to think?
ELIE: Yeah, it really came from all the different inputs I've mentioned. In our earliest days, we realized we had this problem: how do you trade off between income-increasing and death-averting programs? At the time — and we're going way, way, way back into sort of ancient GiveWell history, probably more than 10 years ago now — we just said, I don't know, let's try to come up with our own rough rule of thumb for what that might be. We had some exchange rate (I don't remember what it was), but it was largely based on intuition and maybe a small degree of surveying some academic literature. Over time, we did more surveying of our donor audience, we funded a group to go out and report back to us on the preferences of people in low income countries, we did some more of the empirical work, and we looked more closely at the evidence, and all of that shifted us more in the direction of valuing health more highly relative to income than we had previously.
SPENCER: One framework that people often use in the development world, as I understand it, is the "QALY" and the "DALY". Did you want to just briefly explain what those are? And what is GiveWell's view on those, and how much do they factor into your work?
ELIE: The one that's used most often in global health is the DALY, or the "disability-adjusted life year". The idea is that one wants to first take into account the number of years of life remaining for any health program. The intuition we have that the death of a young person is more tragic than the death of an elderly person may come from the fact that the young person literally has more years of life left to live. The other goal of the framework is to be able to put death-averting programs and health-improving programs into similar terms. One program might avert a case of malaria, for which someone might have flu-like symptoms for a week. Another program might avert the death of a person. How do we put those into similar terms, so we can weigh them against each other? The disability-adjusted life year framework does both of those things: it puts everything in terms of life-year equivalents, where we're looking at life expectancy remaining, and then, for each non-fatal disease, everything gets a disability weight, where that disability weight is some percentage of death. Something extraordinarily severe — practically death — could be 0.9, practically a full year of life lost; something very, very mild could be a very, very small number. But you can then use those to aggregate up and say, "In total, how many years of health improvement are we getting, in expectation, from the program we're funding?"
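The DALY arithmetic described above can be sketched concretely. The life expectancy, disability weight, and case counts below are illustrative placeholders, not figures from the Global Burden of Disease study or from GiveWell.

```python
# Rough sketch of DALY (disability-adjusted life year) aggregation.
# All numbers are hypothetical, chosen only to show the mechanics.

def dalys_averted_death(life_expectancy_remaining):
    """Averting a death averts roughly the years of life remaining."""
    return life_expectancy_remaining

def dalys_averted_illness(disability_weight, duration_years, cases):
    """Averting non-fatal illness averts (weight x duration) per case."""
    return disability_weight * duration_years * cases

# Averting one death of a young child (~60 years of life remaining):
death = dalys_averted_death(60)

# Averting 1,000 week-long flu-like malaria cases (assumed weight 0.05):
illness = dalys_averted_illness(0.05, 7 / 365, 1000)

print(death, round(illness, 2))  # 60 0.96
```

The point of the exercise is the common unit: one averted child death dwarfs a thousand averted mild cases here, but both land on the same life-year scale, so a program mixing the two kinds of benefit can still be summed up and compared.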
SPENCER: How do you think about these in your own work? Do you rely on them? And if not, why not?
ELIE: We rely heavily on DALYs in a lot of ways, but we don't take them without any adjustment. When it comes to applying a disability weight to a condition — so, how bad is it to have anemia? How bad is it to have some other disease? — we tend to just take the disability weight, because I don't think we have a better way of coming up with a [laughs] disability weight than what the people who put the DALYs together have already come up with, so we tend to use them. The main place that we have adjusted is this — and we're going to get into somewhat intense territory here; I don't know if there's a trigger warning for deaths of young children, but I think this is a topic that is hard for a lot of people to think about and talk about. A huge portion of child mortality, deaths of children under the age of five, comes from deaths of children in the first 24 hours of life, the first week of life, or the first month of life in low income countries. I think a lot of people have the intuition that the death of a one-day-old is less tragic than, say, the death of a five-year-old. To some extent, that may be due to pure life expectancy projection — in many places, a one-day-old's life expectancy may not be as high as a five-year-old's, though I'm not 100% sure of that, so one should fact-check me before taking it at face value — but I think beyond that, we have an intuition that there is some difference in the "moral value" of the one-day-old and the five-year-old.
SPENCER: It seems like there's sort of these competing intuitions we have, right? On the one hand, I think people tend to feel that someone who's very, very old, that it's less tragic if they die than someone who's younger, because they sort of have less life ahead of them. But on the other hand, a baby that's just born, they don't have goals, they don't have as much social connection to the world, and maybe that makes it less bad to a lot of people. Do you think that's what's going on partially?
ELIE: I think it's some of that that's going on. There was a paper, whose title I don't remember, that someone from the effective altruism community wrote on this topic, and I think that the intuition you're describing about connection to the world and personal goals is also part of this. But this fact that many people — certainly not all — have the intuition that we should somewhat down-weight the moral weight of the extraordinarily young is another reason we just don't take the DALY figures literally in our work. Instead, we have age weights that we apply for the very young, such that, effectively, the moral weight of a very young person increases rapidly in their first few years.
SPENCER: It's interesting, you're almost doing a kind of empirical philosophy, where you're like, "Well, we don't know the answer. It's a really difficult philosophical question. So let's see what people think about it, and then we'll use people's intuitions on that topic to answer the difficult philosophical question." I don't know if you look at it that way, but it's kind of how I see what you're doing there.
ELIE: It's sort of what we are doing. Even in this conversation, the fear I have is that someone will think that we think we have the answers, or that this is the "right way to do it", and that's not how we feel; we know that we don't know. But what makes GiveWell interesting is that we're faced with this practical challenge: on an ongoing basis, we have money to allocate. We could ignore these questions — we could choose to say, "Let's pretend that we don't have constraints, and we should not try to deal with the trade-offs between the opportunities we have." We could also abdicate and say, "Well, the DALY framework exists, so let's just use that," and ignore the intuition that I have (because I don't even think this intuition is held by everyone at GiveWell), but that, let's say, I and other people have, that we should treat the extraordinarily young somewhat differently than the very young. But we don't want to do either. Instead, on an ongoing basis, we're trying to make the best decisions we can with the information we have, and then update over time. I think what's challenging is that we just have to make those decisions on an ongoing basis. Every year, we are going to be giving away money, directing money to the things that are best, and that's just part of the work we do: this practical exercise of acting based on some of this empirical information, but also on some of the philosophical values that we have.
SPENCER: Does this mean that people who follow that framework end up doing a ton of work that's just, basically, saving the lives of newborns?
ELIE: Relative to what we do, you'd be more likely [laughs] to focus on newborns than on older people.
SPENCER: Okay, but does it change the calculations dramatically, to where it ends up being a big factor?
ELIE: It changes them some, I think. Roughly speaking, 50% of child deaths in low income countries occur before the age of one month — I believe that's correct, even though that number sounds pretty amazing, pretty crazy. So even if you "down-weight" the value of saving a one-day-old heavily, you still might end up putting a lot of your attention towards the very young, because the potential impacts are so large. But these differences in philosophical values could lead people to arrive at very different conclusions than GiveWell has. I think — again, very roughly, and I don't have the numbers in front of me — maybe we currently give a one-month-old 50% of the value of a five-year-old in our framework. That's not so different, to the point that differences in the cost of a program or the burden of the disease you're attacking could easily overwhelm that moral difference. But if someone were to instead say, "I wouldn't apply 50%, I'd apply 5%," that could really change things relative to what people are doing today. Similarly, there was a time several years ago when there was a line of thinking within GiveWell that said, "Adults should be really highly valued. Yes, they have fewer years of life remaining, but, to some of the points you were raising earlier, they have integrated into society, so it is potentially harder to measure the impacts of their dying; therefore, maybe adults should be valued at an even higher level than children." In either of those cases, if one had a view that was extreme relative to our current view, you could easily end up in a different place. So if you said, "Oh, man, I don't want to be giving a lot to save newborns," yeah, you would probably end up doing fairly different things.
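The interaction between an age weight and program cost can be made concrete with a toy comparison. The 50% and 5% weights echo the figures mentioned above, but the per-death program costs are invented purely for illustration; none of these are GiveWell's actual numbers.

```python
# Hypothetical sketch of how an age weight on the very young interacts
# with program cost when ranking two life-saving programs.

def value_per_dollar(deaths_averted_per_dollar, age_weight):
    # Moral value generated per dollar, after age-weighting the death averted.
    return deaths_averted_per_dollar * age_weight

newborn_program = 1 / 2000  # one newborn death averted per $2,000 (made up)
child_program = 1 / 5000    # one five-year-old death averted per $5,000 (made up)

# With a 50% weight on newborns, the cheaper newborn program still ranks higher:
print(value_per_dollar(newborn_program, 0.5) > value_per_dollar(child_program, 1.0))   # True

# With a 5% weight, the ranking flips:
print(value_per_dollar(newborn_program, 0.05) > value_per_dollar(child_program, 1.0))  # False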
SPENCER: Am I remembering correctly that GiveWell once discovered a big mistake in the DALY calculations for deworming interventions that were being widely used?
ELIE: Yeah, there was this calculation in a report called the Disease Control Priorities report, which was put out by the group that had put together these disability-adjusted life years, that said a certain type of deworming program cost roughly $3.41 per DALY averted. And that number had been arrived at based on some erroneous underlying calculations. It was essentially a spreadsheet error, if I remember correctly, and it should have been $341 instead of $3.41. We wrote a long blog post on our website about that error back in 2011.
SPENCER: Let me guess, then they were like “We were wrong, you're totally right.” and then everyone updated immediately.
ELIE: The funny thing is that there were other mistakes that were made that actually made it better, putting it closer to something like $100 (again, I don't remember all the details). It was just sort of a big mess. But at the same time, there was this other set of research going on that was not directly related to this DALY calculation, which was research done by Michael Kremer and Ted Miguel: a randomized trial of the effects of deworming. They followed children over a few years and saw that their school attendance was up, and then 10, 15 — now, gosh, I think almost 20 — years later, have seen very large income effects, meaning people who were in the treatment group and received this deworming treatment back in 1998, '99 are earning much more as adults than people in the control group. Basically, even in the blog post that we wrote ourselves, what you see us say is: there was this giant error, and, wow, this calculation was way off, but we still plan to recommend funding to deworming organizations for these other reasons that make them look attractive to us.
SPENCER: Yeah, it's wild. It was off by a factor of up to 100. Or, you said, maybe it ended up only being a factor of [laughs] 30 or something. It's just kind of amazing that, even with that huge discrepancy, it was still highly effective. What was your reaction when you released this? I mean, do you think a lot of people were making decisions based on these numbers?
ELIE: I think the real question is, and I don't remember this particularly well, I'm not sure how many people were actually making decisions based on these numbers.
SPENCER: Okay. I thought there was like a big program funding these that had done these calculations, but maybe I'm misremembering.
ELIE: You know, there's a lot of attention paid to the numbers, and certainly plenty of high-profile institutions were behind the report. I think that, in my experience, GiveWell is one of the few institutions that's, I don't know, trying to make decisions based on cost-effectiveness analysis and doing that in a consistent and principled way. GiveWell's cost-effectiveness estimates are not the only input into our decisions to fund malaria programs and deworming programs, there are some other factors, but they're certainly 80% plus of the case. I think we're relatively unique in that way. I don't think there are other groups, certainly I can't think of any as I'm sitting here now, that are using numbers in that same way. In some ways, I think that is why we have real value added in the world, because I don't think that explicit cost-effectiveness estimation is the only way to give effectively, but it's certainly a strategy that I think should be employed significantly. I'm glad that we can be the ones to come in and play that role.
SPENCER: Is there something kind of bizarre about this? There are so many groups that are trying to make an impact in the world, right? There's so many foundations, there's so many wealthy individuals giving money away. It's sort of just shocking on some level, that there don't seem to be that many that are really saying, “Okay, well, given that we're trying to help the world, given that we're trying to do something altruistic, we might as well use our dollars as effectively as possible to achieve those goals.” What's going on here?
ELIE: Yeah, it's surprising. There's probably a few different things going on. One is that I think people have, as you know, a lot of motivations for their giving, and maximizing impact is very rarely the main one — that fact alone filters out a lot of people who have other motives, either ones they're conscious of or not conscious of, in their giving. The second big reason is that even doing this in a remotely credible way is just really, really, really hard. Maybe it'll be interesting to dig into a little bit, but I'll give one example, and then it'll lead to the third reason I think this might be happening. A big question we have about the malaria recommendations we're making right now is: to what extent is the money we direct displacing money that other funders would give to malaria? Those other funders would be the biggest global funders of malaria programs — the Global Fund, which is funded by many country governments (the US, the UK, etc.), and the United States government's President's Malaria Initiative, which is another big funder of malaria programs. Our best guess is that, roughly speaking, every dollar that we direct displaces about 40 to 50 cents of another funder's malaria programs. And our cost-effectiveness estimates take that into account in our final numbers — that we're displacing some money, it's going to something else, and that something else is less cost-effective. Just grappling with that and trying to work through it is really hard. I think there's a huge obstacle imposed by the recognition that making progress on explicit cost-effectiveness estimation is very challenging. Finally, even in that example I just gave, and in the moral weights we talked about earlier, there's no right answer.
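The displacement adjustment Elie describes can be sketched in a few lines. The 45-cent displacement rate matches the rough range from the conversation, but the impact-per-dollar figures are illustrative stand-ins, not GiveWell's actual numbers:

```python
# Naive impact estimates ignore that each dollar directed may displace
# money another funder would have spent on the same program. Displaced
# dollars aren't wasted; they flow to that funder's next-best use,
# which is assumed here to be less cost-effective.
def adjusted_impact_per_dollar(direct_impact: float,
                               displacement_rate: float,
                               displaced_funds_impact: float) -> float:
    lost = displacement_rate * direct_impact          # impact the other funder no longer delivers here
    regained = displacement_rate * displaced_funds_impact  # impact of where their money goes instead
    return direct_impact - lost + regained

# Illustrative numbers: the program yields 10 units of impact per
# dollar; ~45 cents of each dollar displaces other funders' money,
# which they redeploy at 4 units per dollar.
naive = 10.0
adjusted = adjusted_impact_per_dollar(10.0, 0.45, 4.0)
print(adjusted)  # 7.3
```

Under these made-up inputs, displacement shaves roughly a quarter off the naive estimate, which is why it has to be modeled rather than ignored.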
I think, very quickly, the best argument that I can make for the other side is that some versions of these cost-effectiveness estimates are great, but someone could make an argument that they're still missing — I don't know — 80% of what really matters. One could argue that we're over-relying on this very blunt instrument, and that other approaches are equally good, whether those other approaches are trying to bet on great people, or trying to invest in innovation — other approaches that I think are not prima facie worse than what GiveWell's doing.
SPENCER: Okay, that's [inaudible]. I want to kind of go through those again; there's three points you make there, and each of them is fascinating and, I think, really, really important to discuss. The first is other motives for giving — that's one reason why people don't use your style of giving money away. One could interpret that as, “Oh, they're really doing it selfishly.” I think some people, when they do altruism, are doing it selfishly — they're really just trying to socially signal or whatever — but there are tons of people who are really truly altruistically motivated. They're not just doing it selfishly, and if you were to pin down most people and say, “Well, given that you're giving $100, would you rather save 0.1 person's life or 0.2 people's lives?” they would say, “Of course, I want to save more lives, not fewer,” and I think they genuinely mean that. So it still seems to me like a little bit of a mystery, because I genuinely think that people do want to do more good for the same amount of money, rather than less good. So what are your thoughts on that?
ELIE: I believe that the decision they're making — sort of their revealed preference — is not what they're saying when they answer the question rationally and say, “I value saving two people instead of one person.” There's probably two things going on. To some extent, everyone at GiveWell has this experience; we had this experience so early in GiveWell. We pitched donors, and the entire time we would talk about what we're doing, it makes perfect sense, they love it, it's great. And we're like, “So, are you going to give?” and they're like, “No, we're going to keep giving to our local thing that we like, where we know everybody.” I think there was a disconnect between some version of rationally understanding what the correct thing to do is, and taking that action — somehow those two ideas were divorced from each other.
SPENCER: Could it be that altruism in the human mind is primarily a kind of community-oriented idea where you know, if you think about the survival of the human species, the benefits to altruism when it comes to survival have to do with your local group, like helping your local group and them helping you as opposed to helping people around the world. And could it be that effective altruism essentially is like a hijacking of our cognitive architecture to do something that's just totally unnatural for humans?
ELIE: Yeah, there could be all sorts of reasons that people prefer giving closer to their own community, as you're describing. That's one. I think the other one is, well, in the end, there's a lot of signaling going on, even when it's not obvious, even when it's not getting your name up on the building at a big university — you still give within your community, and people know that you gave, or people see you giving, you're on some list, and that is giving you some of that local benefit. A third one relates to another point (and so there's only two more things), which is that when you write a check and it goes somewhere overseas, I think people have a little bit of this latent skepticism that anything's really happening. If you see the unhoused person on the street and you offer to buy them a sandwich, you know for sure that you helped that person. That process is also running for people, where they don't really believe that something good is happening when they just write a check overseas. Relatedly, when you buy the person on the street a sandwich, you feel really good. [laughs] You helped someone, you see them, the person is probably grateful, and it makes you feel like you're a good person. I have given money to GiveWell recommendations for many years in a row. I would probably get more warm fuzzies giving food to someone on the street than I do writing a really big check — and I always write a really big check, because I know that that's good — but it doesn't make me feel the same way as just, you know, helping the person in front of me.
SPENCER: Really good points. The second point you made about why people may not give the way that you all do is that it's actually really, really hard to do it well. People really don't get the extent to which this is true. We've talked a bit about some of the incredibly challenging philosophical problems that come up around ethics: how do you balance different ethical considerations, how do you weigh lives saved versus well-being, and things like that. We also talked a little bit about displacing other funders — that's such a complicated thing to consider, and these kinds of complications come up as soon as you start saying, “We want to do this as effectively as possible.” You start introducing all these really challenging things you have to solve. But one of them we haven't talked about yet is evaluating the evidence. From my point of view, evaluating the evidence of what works seems really, really difficult. So I'd love to hear your perspective on that. What are some of the challenges there?
ELIE: Let me just walk through one example. It's going to be a little weedsy, but it will illustrate it really well. One of the programs we recommend is vitamin A supplementation — giving children between the ages of six months and five years a twice-annual vitamin A pill. There were a bunch, like seven or so, randomized trials in the 80s and 90s, I believe, that showed that this program reduced child mortality by 25%. So that's pretty good. That seems like a pretty cost-effective program. Then later, a really large trial was run in India, and it included vitamin A supplementation. That trial found, if I recall correctly, zero effect of the program. Again, I'm quoting this from memory, so some of the facts may be a little bit wrong — it's all on our website — but broadly this is a good illustration of the challenge. I think the sample size in the India trial was about half of the total sample size from the other seven trials, and it also happened 15 years later. The question is, what do you make of this evidence base now? Should we take it all as one big body of evidence and just, I don't know, reduce the effect size by half — say it was 25%, well, now it's probably 12 and a half percent? Should we try to figure out whether there was something different about the underlying context of India relative to the underlying context of the other countries where the effects were bigger? Another thing we can do is dig in and look at the underlying characteristics of the people, the situation, and the program in the India trial that found no effect, versus the other seven that found a large effect. That could be underlying rates of vitamin A deficiency, that could be underlying rates of child mortality; it could even be how effectively the program was implemented.
That was actually a big question raised about the India study: was this program effectively distributed, or were we picking up no effect because there had been program delivery failure rather than program failure? Finally, maybe you just want to say the world has changed. This is a sub-bullet, perhaps, of trying to understand the underlying mechanism, but maybe something was much worse 20 years earlier, in the original studies that had found this bigger effect, and it has since gone away — and now, if we scale the program up today, we should expect a result that looks more like the most recent study. And then, I mean, we could go down into each of those even further, and there's 10 other examples. Those questions are just a quick sampling of what makes it hard in something like vitamin A (which, in the scheme of things, is probably one of the easier programs to understand, because it's been evaluated in a series of eight randomized controlled trials).
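One naive version of the "treat it all as one big body of evidence" option is to pool the trials with each effect weighted by its sample size. The sample sizes below are invented to match the rough proportions described (the later null trial has about half the combined sample of the seven earlier trials), and real meta-analyses typically weight by inverse variance rather than raw N:

```python
# Sample-size-weighted pooled effect across trials. Effects are the
# relative reduction in child mortality; the Ns are hypothetical.
def pooled_effect(trials):
    total_n = sum(n for _, n in trials)
    return sum(effect * n for effect, n in trials) / total_n

early_trials = [(0.25, 10_000)] * 7   # seven trials, ~25% reduction each
india_trial = [(0.0, 35_000)]         # one large later trial, no effect

print(round(pooled_effect(early_trials + india_trial), 3))  # ≈ 0.167
```

Note this lands around a 17% reduction rather than the 12.5% of a simple halving: the answer depends heavily on how you weight the studies, which is exactly why the choice among the options listed above is contested.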
SPENCER: It's such a good and interesting example. And I think this is the norm, as far as I understand it, not like a weird outlier. Would you agree with that, that very often the evidence is very mixed?
ELIE: I think, most often, [laughs] there's insufficient evidence. In this case, it's relatively rare to have so much evidence about a fairly straightforward program. What's more common is to have three studies of different methodological quality, looking at programs that are slightly varied, and it's really hard to use that to inform program delivery. Often, the only solution or a great solution is to fund a program with additional evaluation on top of it, so that we, as the funder can learn from the results of the program we're supporting, and scale it up further from there. Of course, that itself requires a huge degree of investment, time, energy, money to be able to be in a position to evaluate that going forward.
SPENCER: In one of the early episodes of this podcast (when I was [laughs] still figuring out how to do podcasts), I had on Uri Bram, and we talked about GiveWell. We talked about how, with the deworming interventions that you recommend — where essentially you're giving children pills to try to get rid of parasitic worm infections — it's easy for someone to think that GiveWell is absolutely sure that these things work. But what Uri was saying is that if you read carefully the blog posts you put out about this topic, what you find out is that GiveWell actually thinks there's a lot of uncertainty about the effects of these programs, and that the recommendation is really based more on saying, “Well, on average, or in an expected value sense, if we take the probability of different outcomes times how good those outcomes are, we think it's a really good intervention, but there's actually a huge amount of uncertainty.” So I'm just wondering, is that a fair characterization of how you feel about deworming?
ELIE: Completely. There's a huge amount of uncertainty. A colleague, Sean, and I wrote a blog post that was titled “Deworming might have huge impact, but might have close to zero impact,” which maybe we could have titled better. We were trying to really hammer home this point that deworming is not a sure thing. Deworming is a great expected value bet, I think, but I also think it's more likely than not that it has relatively limited impact — and that's perfectly fine. The real juice in deworming is the possibility of huge impact.
SPENCER: Because I think people, when they consider an intervention like this, they're like, “Wow, there are all these children. They've got parasitic worm infections. You give them some cheap pills; it kills the worms. Clearly, that's a huge benefit.” But when you look at the studies, you're like, “Well, what did the studies really show?” They show an impact on income a long time later, when you'd expect that if an intervention really works, you'd see an impact much sooner and more direct, rather than these sort of indirect income impacts. I'm just curious to hear your thoughts on that aspect of it.
ELIE: I think you're putting it exactly right. You look at this deworming program, and the strongest piece of evidence is a single randomized trial that found huge effects many years later. There is some evidence of meaningful short-term effects, most notably with weight gain, but nothing that is clear and decisive, or what I think you would, the way you're putting it, intuitively expect to see. At the same time, we've done a lot of work to try and figure out what is going on with this study. Actually, there used to be a couple of other studies that we would talk about that supported deworming — looking at deworming programs in the American South, when the US eliminated hookworm, and then some additional retrospective studies on test scores in Africa. For a long time, we were putting those into our case for deworming, but as we kept digging in, an economist named David Roodman, I think, largely falsified those studies. But he did a ton of work on this study also, and there's nothing wrong that we can find. You know, it was not a tiny study — about 3,000 children were enrolled, I believe — and no matter the ways in which we poked and prodded and pulled apart the data and reran it, the study's results held up. So, we look at deworming and we say there's a single, particularly robust piece of evidence for this program, the effect is really huge, and there's this somewhat intuitive case that you can treat a lot of children for a parasitic infection for very little money. That's a bet we're going to take, even though my personal best guess is it's more likely than not that the program is not having such a big effect.
SPENCER: It's such an interesting example, because it sort of changes the frame that I think some people inappropriately put on GiveWell. Some people think of GiveWell as, “Okay, we're looking for the things that we can be really confident are really effective,” like, “Sure, there are tens of thousands of possible opportunities out there that you could give to, but we're not confident they're really effective.” But when you look at something like deworming, you're like, “Well, we're not confident it's really effective.” What is the right framing? And I'm also interested in relating that to Open Philanthropy, because one frame on Open Phil is that GiveWell was trying to do the really certain stuff, and Open Phil is trying to do more experimental stuff that's maybe even more scalable, and they're willing to accept higher expected value as a trade-off against more uncertainty. Where do you feel is the correct framing, given these kinds of uncertainties? And then how would you relate that to what Open Phil is doing?
ELIE: Yeah, let me answer the framing one first, and then see if we move on to Open Phil. The correct framing for GiveWell is that we're trying to maximize impact, and we're working within an expected value framework. So we're going to treat a 10% chance of 100 as the same as a 100% chance of 10. We're not aiming at “high confidence giving.” We're aiming for maximizing impact, within some constraints. That said, to be honest, the way that people perceive us today — or, frankly, the way that we have allowed that perception to persist — is problematic. One of the mistakes that I think we have to fix is that it should be much clearer to people, especially people who want high confidence giving, or high confidence, high impact giving, that deworming is not that. That's something that we aim to fix, because I think we're just not doing our job of living up to our value of transparency.
SPENCER: To defend you a little bit, you did title the blog post [laughs] saying that it might do nothing, right?
ELIE: I'm not trying to say we did [laughs] a terrible job. But look, [laughs] we aspire to be really clear, and I think this is a great place where there's a lot of confusion. It's just a good example of somewhere where we know that GiveWell has this “brand of high confidence giving.” That's something we do want to offer donors, because we can, but it's not the complete view of what GiveWell is. Recently, we've made even more grants that go well outside the bounds of high confidence giving, in areas like public health regulation — giving money to support efforts to improve the regulation of lead in low income countries. This is definitely a risky philanthropic recommendation that we're making based on an expected value calculation; it's a very small amount of what we do. You know, when donors give to GiveWell's Maximum Impact Fund, it's going to GiveWell's sort of higher confidence top charities, but this perception that people have is off base, and I think we can do something to fix it.
SPENCER: Let's say someone does want high confidence giving. Which of the GiveWell-recommended organizations are you most confident does good — not maximum expected value, but maximizing confidence that it's actually working?
ELIE: I think for us right now, GiveDirectly is the organization where we have the highest confidence that it's doing a lot of good, but in my opinion, when you give to GiveDirectly, by taking that confidence, you're giving up a lot in terms of expected impact. And then, I guess, Against Malaria Foundation with malaria nets, Malaria Consortium with seasonal malaria chemoprevention (the two malaria organizations), vitamin A supplementation, and cash incentives for childhood vaccines — we see those all as high expected value, high confidence. Maybe GiveDirectly is extremely high confidence but lower expected value, and then deworming is actually not higher expected value than the malaria organizations — I'd say it's kind of in the same ballpark — but lower confidence. If I were pointing someone to just the higher confidence set of organizations on our list, it would be the malaria, vitamin A, and incentives organizations.
SPENCER: Got it. And with GiveDirectly, you say it's extremely high confidence that it actually does good. Is that more based on studies of it, or is it actually more based on a priori arguments that, if you take extremely poor people and you give them money, that's going to benefit them substantially?
ELIE: The starting assumption is that giving poor people money does a lot, but there's also a lot [laughs] of evidence of different types: both studies of GiveDirectly itself, studies of other cash programs, and also, I don't know, qualitative analyses of how people with very low incomes manage their financial lives — all of which confirm, in a good way, the underlying assumption that, by and large, very poor people use money in broadly productive, beneficial ways.
SPENCER: I think sometimes people, when they think about poor people and giving them money, worry they're going to do things like spending on drugs or alcohol or things like that. It seems to me that those perspectives are really misguided, especially when you're looking at something like the global poor, where you have whole villages where everyone's extremely poor, and it's not due at all to any deficit on their own part. It's just due to the fact that they were born in a really poor part of the world.
ELIE: I totally agree. One of the books that was most informative for me here is a book called “Portfolios of the Poor,” which is more of a walkthrough of how people with very low incomes manage their financial lives, and manage those financial lives in a very sophisticated way. I think it might surprise people whose intuitions are based on how they perceive people who have extremely low incomes in, say, high income countries.
SPENCER: For example, I think in low income countries, poor people often do sort of multiple side gigs, as I understand it, where they're like making money in a whole bunch of different ways. Is that accurate?
ELIE: I'm no expert on this but I think that in general, if people have to figure out how to live on extremely low incomes, and balancing food, education expenses, etc, and really just like finding ways to get by, an influx of cash that might amount to doubling one's annual income can make a really big difference to people who are already stretched thin, and being very thoughtful about how they manage their finances.
SPENCER: Okay, so then, if we're not in the realm of trying to maximize confidence, if we're in the realm of expected value maximization, one could ask, why is GiveWell not doing much weirder stuff? If all you cared about is expected value, why aren't you taking more 1% shots at enormous amounts of good — or maybe you would say that you are? I'm curious about your reaction to that.
ELIE: To some extent, this is the difference, or part of the difference, between Open Phil and GiveWell. Open Phil now is working on global health and wellbeing, so there is this overlap on giving to support people who are living today, or in the near future, in low income countries. I think of Open Phil as the group that is taking this explicitly hits-based giving approach, looking for opportunities that could have crazy impact, that might have that 1% chance of really high upside. We're open to that; we're not fundamentally opposed to that line of thinking, and it's sort of consistent with our values and approach, but I think our comparative advantage is being somewhere north of, I don't know, a 25% chance of impact, rather than being in the sort of lower likelihood, really harder to think through opportunities. I should say, if we saw those opportunities, we would consider them. There's nothing where we say, “Oh, this thing is just too weird, GiveWell can't touch it,” because we're not allowed to — we don't feel constrained in that way, so we're open to it. They're just not opportunities that we see, because our underlying framework and experience sort of pull us more towards the less crazy opportunities.
SPENCER: What would you say is really going on here? Is this about methodology diversification — it's like, well, it's good to have one organization that does hits-based giving, that's one strategy, and it's good to have one that's a little more conservative in that way — or is it more about some difference in priors? Maybe you think that a lot of stuff seems really good at first, when it doesn't have a lot of evidence, and you say, “Oh, maybe this has a crazy amount of impact,” but in your experience that almost never pans out, and when you're in that hits-based realm, it's usually not as good as it seems. I'm just wondering where this difference really lies.
ELIE: It's a combination. Temperamentally and culturally, more people at GiveWell have more skeptical priors. I shouldn't say more skeptical priors than anyone, but we tend to be on the skeptical end of the spectrum, and also on the sort of confidence end of the spectrum. We're a little bit less likely to believe in the 1% chance of big impact. I think that's definitely part of what GiveWell is culturally today. If you say, why not change, why not put more effort into increasing the scope of what GiveWell takes on — it has a lot to do with diversification. There's another institution that's extremely values-aligned, in large part, and we're trying to do the same thing; we believe in the same things in terms of what would make the world good, but we're just approaching it in different ways. GiveWell is doing deeper due diligence and focusing more on the empirical case, looking for higher confidence; Open Phil is looking more for the hits, and moving more quickly. If Open Phil didn't exist, it would be great to create it, but I'm really glad it exists and they're doing their thing. I think that diversification of methodology is a strong reason not to go further. All that said, I think one of the open questions for GiveWell going forward is to what extent — I think we are going to move, over time, into taking more lower confidence bets. I'm not sure exactly what those will look like, or what direction that will take, but we are likely to move more in that direction, though more slowly than Open Phil, because it's just not the main thing that we're focused on.
SPENCER: I remember a number of years ago, I believe it was Holden who had a blog post about how you can't just naively apply an expected value calculation, because basically, if you do this, you will essentially be duped into putting money into a whole bunch of things. Imagine you look at 100 opportunities, and there's some noise in your evaluation process, and then you just pick the three that look the best. Well, those are most likely to be the three that have the biggest noise, where your measurement was just bad, and you're just buying the thing that had the positive luck instead of the negative luck — so he talks about this idea that you really need to work your priors into that. Also, over time, I think that you all have seen that your expected value calculations tend to go down as you get more evidence [laughs], not up as you get more evidence. I'm curious, did I characterize that right? Is that how you look at things?
ELIE: Yeah, that's exactly right. The second one especially: in our experience, by and large, the back-of-the-envelope cost-effectiveness estimate we have after looking at a program for a week is much [laughs] higher than the calculation that we make three months later. That experience is just so invariably correct that, for me, if someone says, “I just found this new program, and it's 8x cash” — that it will be eight times as good as the things we're recommending today — I can say with high confidence that number is coming down, and then it does come down. We pretty much never find something that good. That rule is pretty common. I should say we're not perfectly adjusting; it's hard to be well calibrated on how far those estimates will come down in every case, especially as everyone gets more familiar with that dynamic and tries to adjust on their own. But I think that's a part of the skepticism about multiplying a really big number by a small percentage and taking the bet, because we've just seen how the dynamic of a relatively shallow investigation can lead you astray.
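The selection effect behind "estimates come down with more evidence" can be simulated directly. This is a sketch of the winner's-curse argument from the blog post Spencer mentions, with purely illustrative numbers, not GiveWell's actual model:

```python
import random

random.seed(0)

# 100 opportunities, all with the same true cost-effectiveness, but
# each initial estimate carries independent noise. Picking the top 3
# estimates systematically picks the luckiest noise draws, so deeper
# investigation (which strips out the noise) pulls those estimates
# back down toward the true value.
def top_k_gap(true_value=10.0, noise_sd=5.0, n=100, k=3, trials=2000):
    gaps = []
    for _ in range(trials):
        estimates = [true_value + random.gauss(0, noise_sd) for _ in range(n)]
        top = sorted(estimates, reverse=True)[:k]
        gaps.append(sum(top) / k - true_value)
    return sum(gaps) / trials

# On average, the chosen few look far better than they really are.
print(top_k_gap())
```

With these settings the top three picks overstate their true value by roughly +11 on average, i.e. they look about twice as good as they are, purely from selecting on noise.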
SPENCER: It's so interesting that that happens. What are some of the forces that you think lead to that happening? Why do things seem better at first?
ELIE: I think there's basically infinite ways that programs can fail. For some reason, the initial calculation often doesn't take that into account. I'm not sure if it's the right analogy, but the planning fallacy seems like the right analogy, where you think you're taking the median expectation, but really, you're just assuming everything goes perfectly, and all it takes in your plan is for, I don't know, one person to be sick for a week — and there's some chance that happens — and that bumps the whole timeline out. Similarly, with a program, all it takes is some new thing that you learn that you didn't know about before to bump it down, and over time, we continue to learn those things. I'll give one example; it sounds silly because the subject matter seems simple, but it's the type of problem that we learned about. I mentioned earlier that about 50% of child deaths occur in the first month — and I hope that is correct, now that I keep repeating it — and many child health programs, like immunization programs, don't even begin until a child is older than one month. One might do something really simple and say, “Well, the measles vaccine has X percent reduction in mortality, the pneumonia vaccine Y percent reduction,” and apply it to all child mortality. Of course, they couldn't have any of those effects on deaths before the age of one month. You could just make this simple calculation error, where you apply an effect to an outcome that couldn't possibly have a causal relationship, because the intervention happens afterwards. That's just one example, but I think for any particular program, there's like 100 things like that that could pop up, and they tend to be on the negative side, because that initial outline of the quantification of the impact of the program tends to be a fairly optimistic one.
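The calculation error Elie describes is easy to reproduce. The 50% neonatal share comes from the conversation; the death count and the 20% mortality reduction are made up for illustration:

```python
# Suppose a vaccine cuts mortality by 20%, but only for children who
# survive past one month, and roughly 50% of all child deaths occur
# in that first month. Applying the effect to *all* child deaths
# counts deaths the vaccine could never have prevented.
total_child_deaths = 1000
neonatal_share = 0.5              # deaths in the first month of life
vaccine_mortality_reduction = 0.2

naive_deaths_averted = vaccine_mortality_reduction * total_child_deaths
eligible_deaths = total_child_deaths * (1 - neonatal_share)
correct_deaths_averted = vaccine_mortality_reduction * eligible_deaths

print(naive_deaths_averted, correct_deaths_averted)  # 200.0 100.0
```

With these inputs, the naive version overstates the impact by a factor of two, which is exactly the kind of optimistic error that gets discovered, and corrected downward, on closer investigation.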
SPENCER: That's so well said. It really jibes with what I think is true about doing good, which is that doing good is very brittle. You don't do massive amounts of good by accident, unless you're extremely, extremely lucky, and you don't do it with a moderate amount of planning, unless you're really, really lucky. You have to set all these different variables into place just right. I think about this program where they were trying to get people to put — I think it was chlorine — in their drinking water, because the drinking water was contaminated. If you were to pitch this program, you'd say, “Oh, well, look, people are drinking this contaminated water. We know chlorine kills the germs in it. All we have to do is put these chlorine dispensers at the drinking wells, and then people won't get sick.” Boom, slam dunk, right? But then you actually learn about the things they went through to get this program to work. For example, when they installed these things, people weren't using the chlorine, and they kept finding things like this: if the chlorine dispensers ran out of chlorine, people would just stop using them, and then when they were refilled, people wouldn't start using them again — I guess because they had fallen out of the habit, or maybe they didn't know the dispensers had been refilled. It took a crazy amount of effort to make that simple-seeming plan actually work, because in fact, that plan has about 50 hidden steps that you don't think about when you first describe it. I don't know, does that sound like I'm describing the same thing that you did?
ELIE: That's right. Just imagine there's 50 things, all of which have to go right to achieve impact, and when someone does their quick back-of-the-envelope calculation, they just assume that all those 50 happen. You might think, "Oh, maybe I should cut that a little bit," but you have no idea by how much. And all it takes is any one of those 50 yes switches to flip to no, and it demolishes the impact.
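The arithmetic behind this brittleness is worth making explicit. As a toy sketch — the per-step success probabilities here are purely illustrative, not anything from the conversation — even when each of 50 required steps is individually very reliable, the chance that *all* of them go right can be modest:

```python
def p_all_steps_succeed(n_steps: int, p_step: float) -> float:
    """Probability that all n independent steps go right."""
    return p_step ** n_steps

# Highly reliable individual steps still compound into real fragility:
for p in (0.99, 0.98, 0.95):
    chance = p_all_steps_succeed(50, p)
    print(f"per-step success {p}: chance all 50 go right = {chance:.2f}")
```

With 50 steps at 98% reliability each, the whole chain succeeds only about a third of the time, which matches the intuition that one flipped switch demolishes the impact.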
SPENCER: In this case, it could be, well, maybe chlorine doesn't kill the type of bacteria that people are actually getting sick from. You could think of so many weird, wacky ways that this could fail, and each one of those is probably not gonna be the reason it fails, but there's so many of them that they kind of stack up, and then things don't actually work. [laughs]
ELIE: I think the thing that's somewhat — I'm not totally sure I understand it, but it's somewhat surprising — is that you find there are all these things that have to go right to have the impact, but why aren't there other surprises that are positive? Why aren't those there? I think it could be that they don't exist. It also could be that we're missing them. One of the things we sometimes think about in recommendations we make is what we might call "unmodeled upside" — good things that could happen because of this grant that we're not including in the cost-effectiveness analysis. I think it is possible that — if I'm wrong about this, the way I'd be wrong is — we just don't look for upside after the fact; we don't really think about it, it feels like this extra thing we don't care about, and therefore we don't find it. But my guess is that if what you were looking for was upside, you might want to go about your work in a very different way than GiveWell does. You might do something that looks more like what VCs do, investing in lots of really small things that could get really, really huge. In the type of work that GiveWell does, finding one of those 50 things that stands in the way of impact is significantly more likely than some really surprisingly good thing happening that magnifies the impact of the program.
SPENCER: So maybe there's a kind of entropy argument here, where if you're talking about preventing some specific bad outcome or creating some specific positive outcome, you need things aligned just right. It's sort of like clocks don't assemble themselves by chance, just by throwing together a bunch of parts. So maybe the strong default is that no change occurs — in almost all configurations of things, no change occurs — and to get that change, you need things just perfectly aligned. If that were true, it could help explain why you don't get these sorts of unexpected upsides that often, but you often do get the unexpected downside, because the unexpected downside is just that nothing happened, which is sort of the default anyway.
ELIE: I think that's right. Even when I think about our current recommendations — we talked about deworming for a bit — it's not out of the realm of possibility that at some point in the future, we'll learn something that helps us understand the mechanism through which the effects in that original trial took place. I don't know, say we learned that someone went around — and to be clear, this is totally speculative, not something I believe, but it's the type of thing we could learn — someone went around and tutored all the kids in the treatment villages because they really wanted them to succeed. It's hard to imagine something coming out on the other side that made the effect size look even larger than we think it is today.
SPENCER: What do you think about second-order effects? You give children deworming pills, and maybe now they don't have parasitic worms, which maybe means they have all these different potential benefits — maybe they're not as tired, or maybe they have better cognitive processing, or whatever; I actually have no idea what kind of [laughs] effects the worms have on people. But you could imagine there's lots of different things. And then, years later, you find they have higher incomes. But you can imagine second-order effects, or third-order effects, or fourth-order effects: well, if they have higher incomes, maybe that positively affects people in the village that are not even in the treatment group, or maybe it causes their children to be more likely to survive, and so on and so forth.
ELIE: This is a really [laughs] challenging question, and it relates to the discussion we were having about the extent to which the enterprise of trying to estimate cost-effectiveness works — what's knowable, and what's ultimately unknowable. The way that I see it now is that there's just a limit to how far you can take cost-effectiveness analysis. In some ways, you have to just decide how far you're willing to go to model things out to second-order effects, third- or fourth-order effects, etc. And it's really hard, because the motivation to go further is [laughs] very strong — you're like, "I'm missing something that matters to the case for this opportunity. I want to capture everything." On the other hand, practically speaking, the further you go, the more complex your model becomes, the harder it is to understand, and the more likely you are to do something wrong. Ultimately, the approach that we aim to take is to say: we see this model and our estimates effectively as a tool for ordering the potential opportunities that exist in the world, and then we want to fund them from the top down. In using that tool, we want to apply it in the most practical, useful way. Often, that means not going too far, because if we go too far, it will no longer be practically useful, even though we know that we're giving up some "truth" by not taking the model further to those second- or third-order effects. We're using it, in this way, as a tool. What can be exceptionally challenging is when it seems like a second-order effect overwhelms the first-order one, and you really need to take it into account to see the full effect of the program on the world. That's a real challenge for us.
SPENCER: Do you think it's a reasonable heuristic to say that usually, the more degrees out you go, the smaller the effect is — the second-order effects are smaller than the first-order, and the third-order smaller still — so that, usually, it's kind of okay to neglect the higher-order effects?
ELIE: Usually, that's been our experience, most of the time — there are some exceptions. Again, I don't have the number at my fingertips, but roughly speaking, when we recommend malaria organizations, we think about half of the money we're directing is displacing money that would have gone to that intervention anyway and now goes somewhere else. That's sort of a second-order effect, but again, it's a pretty big one — 50%. Similarly, malaria causes severe childhood illness, and there's a pretty substantial body of different types of evidence — deworming is one piece, but there's much else as well, like work on malnutrition in other places — that health and wellness in early childhood supports improved life outcomes. Therefore, about a third, I think, of the impact estimate that we make for malaria programs comes from what we call developmental effects. (You could call that a second-order effect. It's definitely a loose estimate, but it's something that is a pretty big deal.) All that said, there are a lot of other things that we consider — those are the two examples that come to mind that are the biggest. There are a lot of really small ones — reduced burden of disease from malaria, lost economic opportunity, averted treatment costs, etc. — that are benefits of the malaria program, all of which are pretty small. And we do try to explicitly go through these and say, "Do we think this would be a really big effect? If yes, let's look at it. If not, let's at least be explicit in our model about the fact that we don't think it's that big — here's the effect that we added — and then not put a lot of time and energy into trying to specify it precisely."
SPENCER: On a slightly different topic, I'm really curious to hear your thoughts on the average charity. Let's say I were to go to one of these charity recommendation systems like Charity Navigator or GuideStar and literally just pick at random — give $100 each to charities picked at random. What sort of distribution of impact would you expect to see? Would you expect to see, for example, more like a normal distribution, where a lot of things are kind of in the middle, or more like a power law, where a few of those $100 donations go much, much further — help people much, much more — than the others?
ELIE: The charitable universe is so broad that maybe it helps to just home in on organizations trying to help the very poor.
SPENCER: True, let's focus on that, because you also have more experience around that.
ELIE: So what I'm trying to think about is organizations working to help the very poor — as opposed to, let's say, Harvard University. My best guess is that the very best organizations are having this massively outsized effect. Even within GiveWell — and we've been looking for this for many, many years — the very best dollars we recommended in 2021 were probably spent at 50x or so, and there's a very small amount of money that we can spend at that level.
SPENCER: So 50x, referring to 50 times as good as giving cash?
ELIE: 50 times as good as giving cash. And then our last dollar was spent at around 8x. And then there's GiveDirectly at 1x. I was just comparing GiveDirectly to the average US-based charity: I think the median income in Sub-Saharan Africa is about 100 times lower than the median income in the United States. So you might assume that GiveDirectly is 100 times better than the average US-based charity, in the sense that you're just increasing someone's ability to earn money. So when you look at the GiveWell universe, and then expand out and do that quick-and-dirty GiveDirectly-to-other comparison, you can see how many multiples there are separating the average US charity from the top of the GiveWell list.
SPENCER: That's a really interesting way to think about it. If we're thinking about the relative impact of giving $1 to someone, comparing someone with a given income to someone with 100 times that income, how much more beneficial is it for the poorer person? Because I assume it's not just linear — it's not like it actually helps them 100 times more. Or does it? Do you think it is linear?
ELIE: I mean, I think a loose working assumption is that it helps them about 100 times more — I don't know for sure that's where the math comes out exactly, but I think it'd be roughly that — because basically, increasing someone's income by 10% is about the same amount of good in both cases. Giving $100 to someone who has a $100 annual income is doubling their income, while giving $100 to someone who has a $10,000 annual income is only increasing their income by 1%.
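The assumption Elie describes — equal proportional income gains count as equal amounts of good — is essentially logarithmic utility of income. Here's a minimal sketch under that assumption, using the illustrative dollar figures from the conversation; note that pure log utility actually gives a multiple somewhat below 100, since doubling an income is worth ln 2 while a 1% bump is worth ln 1.01:

```python
import math

def log_utility_gain(income: float, transfer: float) -> float:
    """Well-being gain from a cash transfer, assuming log utility of income."""
    return math.log(income + transfer) - math.log(income)

poor = log_utility_gain(100, 100)      # doubling a $100 annual income
rich = log_utility_gain(10_000, 100)   # a 1% bump to a $10,000 annual income
print(f"same $100 does ~{poor / rich:.0f}x more good for the poorer recipient")
```

Under this model the ratio comes out to roughly 70x rather than exactly 100x, which is consistent with Elie's hedge that he doesn't know "where the math comes out exactly."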
SPENCER: That's so interesting. It really highlights why you might get a sort of power-law effect. [laughs] Even just from where in the world you give the money, you get this enormous difference. If we again think about randomly sampling charities in, let's say, the poverty-reduction space — giving to the very poor — and you think about the average charity that's trying to help people in the developing world, do you think the most common thing is that the charity just helps a little bit? Or do you think it's actually really common that they don't help at all — that they essentially have zero effect?
ELIE: I'd say most often they're helping a little bit. I'm sure it's not infrequent that they're not helping at all, or even causing harm, but that would not be my guess about the most common outcome. I don't know what the average effect is of a randomly chosen charity. I'm not even entirely sure how to go about thinking about that, because it's so hard to completely understand what they're doing. The heuristic that I use is: GiveDirectly seems really great overall in terms of the impact they're achieving — there's a lot of reason to believe that giving someone cash is really great — so I would just, by default, assume in a hand-wavy way that the average organization is not accomplishing that amount of good, because that's a pretty high amount of good. But I'm far from sure how to reason through that, or how to estimate whether it's closer to 0.9 or 0.1 or 0.01. I really don't know.
SPENCER: Because I tend to think that doing good is very brittle — that it has all these different ways it could fail, very similar to what you were saying before — it makes me suspect that actually a substantial percentage of charities don't do good at all. It wouldn't shock me if half of charities don't do any good, or do so little good that it's essentially a rounding error that can be ignored, because I think there are so many ways that they can get close to doing good, but then, you know, step 37 doesn't work [laughs] and the whole thing just doesn't go through. So I suspect that the brittleness model implies something like: a lot of them do nothing.
ELIE: Yeah, I have a little bit of a different intuition. I think a lot of the brittleness affects the magnitude of good more than the accomplishment of any meaningful thing. You can go back to the example you brought up about chlorination. Maybe the chlorine doesn't hit the right bacteria, but I don't know, I'm pretty sure that drinking cleaner water is better in some way. Now, is it a rounding error? I'm not sure — and not to lean on that as a specific example; it's hard to think about what counts as a rounding error. I think another thing that's going on is that needs are so great in many parts of the world that some pretty basic programming does some good. So what are some of those things? Digging wells — a lot has been written about the extent to which digging wells is insufficient, may not solve the problems, infrastructure can go into disrepair. But also, if you dig a well near someone's village so they don't have to walk as far, I would count that as meaningful, not a rounding error. Similarly, organizations that donate medical supplies — there are a lot of ways that donating those medical supplies does not do nearly as much good as one might imagine if you think about the best-case scenario, but bringing basic medical supplies to highly resource-constrained settings is doing some good. I'm not sure how much of this difference in intuition is just around what counts as a rounding error versus what counts as meaningful, but my very qualitative intuition is that it's more on the side of meaningful good — far less than could be accomplished if optimized, but not a rounding error.
SPENCER: In the examples you gave, I would agree with you. But I would say those already seem unusually [laughs] impactful from the get-go. If you're already talking about putting chlorine in people's polluted drinking water, or digging wells in areas where people have to walk a long way, those seem like, "Oh okay, that's actually significantly better than average." A lot of charities that I see — and maybe they just wouldn't come across your radar — are more like, "We're gonna retrain these people for a different type of work because they're struggling in their current kind of work," or things happening in wealthier countries, where the impact is much more indirect and there are many more steps to explain how it goes from what they're doing to the world being better, if that makes sense.
ELIE: I'm just realizing, I think one of the things that I'm completely ignorant about — notwithstanding all the time I've spent in this work over many years — is what the average charity does, literally the average, because I just don't spend any attention on what, in a list of 1,000 charities, number 500 does. I think my intuition might be driven more by where the median dollar is spent, since the biggest organizations are capturing a larger share of the total dollars. To be totally transparent, I also don't know exactly how those dollars are spent, but I think some of my intuition about the types of programs being run probably comes from being more familiar with some of those really large institutions that are doing this provision of basic needs.
SPENCER: That makes a lot of sense. So I just went on the Charity Navigator website, and they have this thing called the "10 Most Followed Charities," so this gives us some idea — most followed doesn't mean most donated to, but it's probably pretty well correlated with where people donate. The first one, the number one ranked, is Doctors Without Borders. Do you know about them or how they operate?
ELIE: Yeah, and I would say, in my opinion, they're on the good side — providing basic medical services to people who need it.
SPENCER: All right, great. And okay, you'd probably say, "Compared to GiveDirectly, they're probably not that cost-effective, but they're definitely on the good side, not doing nothing."
ELIE: I don't know, I think they're a hard one. How do they compare to GiveDirectly in terms of cost-effectiveness? I'm not sure. I think that basic health care is really great. I know relatively little about Doctors Without Borders, but I wouldn't be shocked if Doctors Without Borders surpassed that 1x bar — that would not be a shocking outcome of deep analysis.
SPENCER: But I suppose it just might be hard to evaluate.
ELIE: It would be very hard to evaluate.
SPENCER: Yeah, it's very hard to evaluate. So that's why it's not on [laughs] the recommended list. Okay, what about the American Red Cross? That's number two.
ELIE: I really don't know what they do.
SPENCER: I think they're just a massive organization, right? They probably do so many different things.
ELIE: Yeah, I mean, you see them in response to natural disasters, and they also do blood drives and things in the US. I think one of the strange things about the charity world — when we got started, we went through multiple versions of believing we could do the equivalent of looking at a company's financial statements and figuring out what they're doing. You can look at a company and see where the profit is coming from; with a charity, you can't really figure out where the impact is coming from, or what the most important things are that the organization does. One of the most surprising things about the charity world to me, looking back now, is how hard it is to answer the very basic, logical questions you're asking. What does the American Red Cross do? How good do you think it is? And the answer is, I really don't know. "I wouldn't give there, I'd give somewhere else" is the best I can do, rather than some loose guess about how good it is relative to other things.
SPENCER: My understanding is it might literally take hundreds of hours from someone extremely experienced to really wrap their mind around one charity [laughs] — what it does — and even have a reasonable ballpark of its effectiveness.
ELIE: It's not only that the researcher would need to spend the time, but the information isn't publicly available. Therefore, to understand a charity's impact would require a lot of time, and then a lot of participation from the charity — and then you could come up with something, for sure.
SPENCER: Some people might be surprised by that, because they might think, "Well, doesn't the charity track the information that's necessary? Why does it take so much time? They already have a report that they give to their donors." Is it just that the information they track is not the information you need to figure out how effective they are?
ELIE: Very often, information is tracked at a project level, rather than the aggregate level. If you want to know what the American Red Cross does, I would be surprised if there were some report that said, "Here's our overall impact," in anything approaching a technical way. I'm sure they have an annual report every year, and maybe it's a report that talks about impact, but that, frankly, is more marketing than evaluation. Very, very, very rarely will there be an organization that has an aggregate report attempting anything like a technical evaluation of impact. Two exceptions that I remember from GiveWell's very early days are the Carter Center and Population Services International (PSI); one of the reasons we were really attracted to them back in 2007, 2008, 2009 is that they were the only organizations we found that were doing that. Doctors Without Borders is one of the organizations that tries to do the same thing — something approaching a technical, detailed articulation of impact. When organizations have a more detailed, technical articulation of impact, it tends to be project-based. You might have an organization — I won't name one — a big international NGO with a billion-dollar budget, and they might have a lot of semi-technical reports for $5 million projects, but no one is adding that up into an aggregate. Finally, the underlying methodology for the semi-technical report is still insufficient to really know what the impact is, because it doesn't address questions like, what's the counterfactual? Or the report gives outputs, like the number of wells dug, without giving sufficient information to understand the impact of those wells — to what extent did they provide cleaner water, shorter travel time, et cetera? That's all missing.
There have been multiple points at GiveWell where we've done things like ask charities for applications, talk to a lot of charities, and dig through charities' websites — I spent a whole week of my life in 2009 poking through 300 charities' websites. I know from experience how limited the information is that you can find that would help you answer this question.
SPENCER: It suggests that that information doesn't really matter to donors very much, because presumably, if donors were demanding it, charities would have it. It's more like, "Okay, we need to tell a story that donors feel really good about. We need to point to some numbers, but they're not necessarily impact numbers — they're just numbers that make a donor feel like we're doing real work," or something along those lines.
ELIE: Actually, I think there are some underlying dynamics operating here. It is the case that when organizations are funded by many small donors — say a big organization like Save the Children gets a lot of money from people giving 100 bucks a pop — those donors are not looking for or demanding this sort of information, and therefore Save the Children doesn't need to provide it. Another thing that happens is organizations will be supported by really large institutional funders — those could be the US government, or the Bill and Melinda Gates Foundation — and those funders are demanding reporting, but it's largely for the projects they support. So if the Gates Foundation or the US government gives Save the Children money for a project, what they want is reporting on that particular project, and they want reporting that meets the criteria the US government has. Some of their focus is on maximizing impact, and there are also other criteria that matter to them that flow into their reporting, but they're not asking for reporting on the entity as a whole. I think the final challenge — and this is even a challenge for us, even for a very impact-aligned, well-meaning donor — is not wanting to ask organizations for too much. Meaning, it's very helpful and necessary to have some degree of monitoring information and evidence so you can decide whether to keep giving. At the same time, charities tend to be understaffed, and every dollar spent on evaluation means fewer dollars spent on programming.
From my perspective, I think there should be a lot more spent on evaluation, because I think it's really worth it, especially over the long run, to understand how programs are working. But I don't think it's obvious that I'm correct, and there are a lot of people who would say, "These programs are so great, we just need to pour more money into programming." I'd say, "I think they're wrong," but there's certainly a good discussion to be had, and that also plays into the reticence of donors to push for better evaluation materials.
SPENCER: Insofar as the effectiveness of charities tends to be power-law distributed, as opposed to normally distributed, it seems like that would actually push towards the value of collecting evidence more and more, because you can't just hope to get lucky. If you actually get higher-quality evidence, you can find things that are 20x, or 30x, or 100x better, and that would be worth it; whereas if most things were around the same amount of good, then maybe collecting that extra evidence wouldn't be worth it.
ELIE: Yeah, that's exactly right. I think an intuition that is common is that most charities are doing about the same amount of good as each other. Therefore, why collect evidence? It won't make a difference; it's not worth it. The underlying principle of GiveWell is our assumption that there are these large differences, and evidence collection can make a big difference. So I'm repeating what you said — I fully agree.
SPENCER: I'm realizing a kind of interesting contradiction here, which is that in venture capital investing, it's widely accepted that there's a sort of power law of startup returns. Something like 90% of startups are going to go out of business or not make money for their investors, but then occasionally you're gonna have that huge win that's like 100x, or maybe even 1,000x, and that's going to make up for all the losses from all the others. This power-law phenomenon pushes venture capitalists to say, "Well, the thing I really don't want to do is miss the next Google. So it's fine to buy a lot of duds; I want to spread out my portfolio. Sure, I want to eliminate things that are definitely not gonna be the next Google, but the real thing I don't want to do is miss out on that 100x in my portfolio — that's what I've got to achieve." So there's maybe a bit more of a spraying approach. It's an interesting contradiction, because if there really is something like a power law in charity results, you might think, well, maybe that actually means [laughs] we should be spreading out the money and creating a whole portfolio, because all that matters is that we don't miss those 1,000x-effectiveness charities — whereas if we concentrate too much, maybe we'll actually end up missing them. I'm just wondering what's going on there. I'm actually kind of confused about it.
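Spencer's power-law picture can be illustrated with a quick simulation: draw "returns" for 100 investments from a heavy-tailed Pareto distribution and check how much of the total the single best one accounts for. The tail parameter here is an arbitrary illustrative choice, not an empirical estimate of startup returns:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible
alpha = 1.1     # heavy tail: purely illustrative, not calibrated to real VC data

# Hypothetical return multiples for a portfolio of 100 investments:
returns = [random.paretovariate(alpha) for _ in range(100)]
share_of_best = max(returns) / sum(returns)
print(f"best single investment's share of total returns: {share_of_best:.0%}")
```

With a tail this heavy, a handful of draws typically dominate the portfolio total, which is the dynamic behind the "don't miss the next Google" attitude.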
ELIE: The biggest challenge with spreading out funds is that it's difficult to know which programs succeeded. In venture capital, you know which company became Google because it became Google — that company is literally worth a huge amount of money now, and getting that return was your goal as a VC. In charity, your goal is to have impact, and the only way to know whether or not you've had impact is by ultimately conducting some sort of evaluation that demonstrates that this impact occurred in the world. This is honestly one of the big challenges GiveWell faces today. GiveWell raises a lot of money now and has the ability to give away a lot of money; we are not risk-averse, and we're ready to make mistakes. I would love it if we could give a million dollars to 25 things, or more — just let 25 flowers bloom — and then support the ones that succeed the most over time. The key challenge is knowing after the fact which of those succeeded. It's of course hard to predict ex ante, but the challenge here is you can't even tell after the fact which one had the impact without doing a lot of work to assess it.
SPENCER: It's an interesting point, but I still feel like it doesn't fully address the idea of, "Well, okay, maybe you can't know [laughs] how much good you've done, the way a VC can know how much money they've made after the fact. But if you just kind of allocate randomly — as long as you're avoiding areas where it's very, very unlikely to have really, really good effectiveness — so you avoid the things that are definitely not effective, and among the things that could be really effective, you just kind of let 1,000 flowers bloom." If we really are in this power-law world, shouldn't I get pretty good returns, because [laughs] that one that's 1,000 times more effective than average is going to make up for a lot of the duds?
ELIE: We have to work through the math. Say you take 100 things and you give a million dollars to each one, and let's say one of them is 1,000 times more effective than average. In order to really access that impact, you have to keep funding that one in 100, but in our world, we don't know which one of the 100 it is. We have no way of distinguishing between the 100, so we just have to spread equally across all 100.
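The math here can be made concrete with the numbers from this exchange — 100 grants, one of them 1,000x as effective as the rest (effectiveness units are arbitrary):

```python
n = 100
baseline = 1.0     # effectiveness of a typical program
outlier = 1000.0   # the one standout, 1,000x the baseline

# Spreading equally without knowing which program is the standout:
blind_average = ((n - 1) * baseline + outlier) / n
print(f"average effectiveness when spreading blindly: {blind_average:.1f}x")

# If evaluation could identify the standout, every dollar could go there:
print(f"gain from identifying it: {outlier / blind_average:.0f}x better")
```

So blind spreading still beats the baseline by roughly 11x, but almost two orders of magnitude of the outlier's impact is left on the table — which is exactly the case Elie is making for evaluation.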
SPENCER: Right. So surely it would be better if you could do what some VCs do — as some of their startups are taking off, put more and more money into those. I think that's what you're getting at.
ELIE: Well, and in truth, it's actually necessary to keep putting money in, because they can't grow without you putting more money in.
SPENCER: Yeah, it's interesting. I feel like it doesn't fully resolve the contradiction, because there are startup VCs that only invest in seed rounds, right? Every year they're investing in a new batch of startups, and mostly they're failing, but occasionally they get such big returns that the fund is still doing great. I'm not saying this is as effective as what you're doing, but I think it's a kind of fascinating idea that if we really are in a power-law world, maybe it's actually not that dumb to just pick an area where you've weeded out a lot of the really dumb stuff, and then just spread money around. It's not as far from efficient as one might think, because of that sort of extreme power law. That being said, it might also depend on the parameters of the power law. If it wasn't one in ten that succeeded but one in 100, maybe just spreading money around among reasonable startups would actually be a terrible idea. It really depends on the dynamics of how rare, or how hard to find, these 1,000x opportunities are.
ELIE: Yeah — how rare are they, how outsized is the impact, and how likely are they to keep getting funded? One fact about being an early-stage funder in the VC world is that there is someone else who wants to pick up the company and take it forward. In the charity world, aiming to get someone else to take over the funding needs for a program is a goal of many funders. They imagine that either the country's government — say, the government of Nigeria could take over the funding of a program in Nigeria — or a large aid agency, like the US government, could take over funding. In that world, I could see how this strategy would be very effective: you could fund a lot of things, some of which would be very good, and those get picked up. I still think the big challenge is that no one has a mechanism for determining which ones are worth continuing — that seems like the biggest difference between the VC analogy and the charitable sector. We don't know which ones to stop funding. What made Google Google — or what enabled Google to be Google — was the recognition that it was succeeding over time, and then it continued to be able to gain funding because of that success and to continue to grow at scale.
SPENCER: It seems like in charity work it's a lot easier to delude yourself [laughs] for a long time, because you're never forced to deal with the feedback that it's not working.
ELIE: Yeah, I think there's a decided lack of inarguable evidence about the impact you're having or not having.
SPENCER: Okay. So before we wrap up, I want to do a rapid-fire round where I just ask you a bunch of questions and get your quick responses. How does that sound?
ELIE: Yeah, how quick?
SPENCER: [laughs] Well, let's see. Okay, so first question for you: suppose that you were not allowed to give to any GiveWell-recommended charities — how would you decide what to give to? How would you approach it? And your answer is not allowed to be "I'd go spend a year investigating"; you have to make a reasonably quick decision.
ELIE: I'll cheat. I'll just give to Open Phil?
SPENCER: Ah, do they accept money?
ELIE: I don't know. [laughs] I'd come up with something.
SPENCER: [laughs] Okay. It's okay, if you don't have an answer on that, but I'm curious.
ELIE: Is that outside the bounds of what I'm allowed to give as my answer? I mean, I'd think about —
SPENCER: Let's say someone's not going to give to a GiveWell charity, not even to Open Phil — what should they think about in terms of deciding where to give?
ELIE: The biggest things I'd be thinking about are: what do I potentially disagree with GiveWell or Open Phil about? That could be moral values — we talked a little bit about that earlier on — and it could also be strategies that someone can undertake that GiveWell or Open Phil might not. I think the best advantage a regular person has is the people they know, and the small funding needs those people have. GiveWell is not looking at every $10,000 funding opportunity that exists. And when you know someone really well, you may believe in them in a way that you couldn't believe in something that's further away. Obviously, GiveWell got a lot of help in exactly that way: our initial funding came from people who knew Holden really well and were therefore willing to take a bet on us.
SPENCER: One way to think about charitable giving is breaking it into the sector or area you're giving the money to, and from there, the specific charity in that area. It's like: okay, if you're trying to help poor people, that's the area, and then you ask, "Which charity that helps poor people?" I'm wondering, if you had to break down the impact, how much of it comes just from the focus — "we're going to focus on the poorest people in the world" — and how much additional impact do you think you get from choosing the specific charity within that realm?
ELIE: I'll go back to the math we did before. I think a huge amount of the impact comes from choosing to help some of the poorest people in the world — maybe that gets you something like 100 times the impact of helping an average poor person in America. Then, choosing the best thing available there, at the current margin, can maybe add another factor of 10. So, roughly, a factor of about 100 by choosing the poorest people in the world, and an additional factor of 10 by choosing the best things available at the current margin.
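The multiplicative arithmetic behind that answer can be sketched in a few lines. The numbers below are just the ballpark figures from the conversation, not GiveWell's published estimates:

```python
# Illustrative sketch of the rough multipliers Elie describes.
# Both factors are ballpark figures from the conversation, not
# actual GiveWell cost-effectiveness estimates.

def relative_impact(cause_multiplier: float, charity_multiplier: float) -> float:
    """Impact relative to helping an average poor person in America."""
    return cause_multiplier * charity_multiplier

baseline = relative_impact(1, 1)    # help an average poor person in America
best = relative_impact(100, 10)     # ~100x from targeting the world's poorest,
                                    # ~10x more from the best charity at the margin

print(best / baseline)  # → 1000.0
```

The point of the sketch is that the factors multiply: most of the gain comes from the choice of cause area, but picking the best charity within it still adds an order of magnitude.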
SPENCER: That suggests that if people couldn't give to GiveWell charities, one thing they could do is find things in the same ballpark — you're already in the realm of helping the poorest people in the world, something like that — and then just look for other charities, and that could still be pretty good.
ELIE: Yeah, that seems right.
SPENCER: What are the views of you and GiveWell on helping animals?
ELIE: It's easier for me to speak for myself: I think animals deserve significant moral weight. I wrote some things when I was younger, which I disavow now, about not taking animal welfare seriously — I do take it seriously. The reason GiveWell is not looking into animal welfare is that I don't think we have a comparative advantage over what the farm animal welfare team at Open Phil and Animal Charity Evaluators are doing. It's not something we've looked into super deeply, but I would just guess that we don't have a lot of value to add above them, relative to us doing more on international giving.
SPENCER: Do you think that most really good giving opportunities are already fully funded? In other words, if you were to take the set of stuff that's highly effective, is the vast majority of it already funded, so that you have to look really carefully for the underfunded stuff? Or do you think that's actually not the case, and stuff that's really effective is not especially likely to be funded?
ELIE: There's still a lot of stuff that's really excellent that's not funded. But you have to define what you mean by "good" here, and I want to do something quantitative. We've been talking about this — I've given talks where we put a cost-effectiveness threshold on things as multiples of cash transfers, and recently we've been looking a lot for things that are eight times better than cash transfers or more. But if we instead looked for things that are only half as good — four times cash transfers or better — there's a ton of opportunity out there that we haven't looked at, that exists and is underfunded. And I think there probably is also lower-confidence, higher-risk, higher-upside stuff in the world that we and others haven't looked at yet and found. But when you ask, "is the good stuff funded?", it depends heavily on what your threshold for cost-effectiveness is.
SPENCER: So would you say that there's sort of an exponential drop-off, or even a super-exponential drop-off — as you go from 1x to 2x to 3x, you're getting an extremely fast diminishing number of opportunities?
ELIE: That's our best guess, based on very loose analysis. Or maybe put the opposite way: as you go down the cost-effectiveness curve, the available set of opportunities opens up and gets very, very large — it grows like an exponential curve, larger and larger as you go down, all the way down to GiveDirectly, where there's essentially unlimited capacity to give money to people who are much poorer than you and I are. As more and more people's incomes go up, the marginal value goes down, but there's effectively unlimited opportunity to give money to extremely poor people.
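A toy model can illustrate the shape Elie is describing. The power-law functional form, the `alpha` exponent, and the dollar scale below are all assumptions chosen purely for illustration — GiveWell has not published a curve like this:

```python
# Toy model: the pool of funding opportunities grows rapidly as the
# required cost-effectiveness multiple (vs. cash transfers) falls.
# Functional form and parameters are illustrative assumptions only.

def room_for_funding(multiple: float, base: float = 1.0, alpha: float = 3.0) -> float:
    """Hypothetical funding capacity (in $B) at cost-effectiveness >= multiple."""
    return base * multiple ** -alpha

# Capacity explodes as the threshold drops toward 1x (GiveDirectly-level):
for m in [8, 4, 2, 1]:
    print(f"{m}x cash or better: ~${room_for_funding(m):.3f}B available")
```

Under these made-up parameters, relaxing the threshold from 8x to 4x grows the available pool eightfold, and dropping all the way to 1x makes it hundreds of times larger — the qualitative pattern described above, where capacity is effectively unlimited at the bottom of the curve.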
SPENCER: Right, and that's also consistent with a kind of power law, if we're not too precise about it. This is a more personal question: you have, in your life, helped an insanely large number of people by an insanely large amount — if you hadn't been born, there would just be a hell of a lot of people who would be much worse off than they are. I'm wondering, do you feel that on a visceral level? Do you think about that? How do you relate to it?
ELIE: I'm not sure it's true. Holden and I co-founded GiveWell around the same time that Toby and Will were starting Giving What We Can, and Peter Singer was writing a lot about effective altruism, so I'm not sure how to think about the literal marginal contribution that I made. But I say that honestly: I am really proud of the work that I get to do, and feel really lucky that I get to do something every day that is challenging and fun. When I sit back and ask what I'm spending my time on — when my kids ask me, "Why are you working? Why aren't you playing with us?" — I can explain what I'm doing. I feel really proud of the work that I get to do, and that it enables us to try and help people.
SPENCER: Very nice scenario where your kids are like, “Oh, no, if he plays with us, another 10 children are gonna die somewhere.”
ELIE: Yeah, it's not very compelling for them.
SPENCER: [laughs] I'm wondering about cases where GiveWell has changed its mind on charities, and what the most common scenarios are when you change your mind — when you think something looks really good and then later decide it's not. We've already talked about this a bit: is it usually because you discover some new fact about them that you didn't know before? Do you often find that charities actually change what they're doing? Or is it more that they're at capacity now — they used to be really good, but now they just can't use money effectively?
ELIE: God, there are so many reasons. Probably the most common one is a new piece of factual information. We recommended an organization called No Lean Season on the basis of evidence showing that incentivizing migration from rural areas of Bangladesh to urban areas during the lean season — when there was no work in rural areas — led to large increases in income. We funded the creation of that organization and also funded additional evidence generation, and that additional evidence showed that in multiple runs the program was not increasing people's incomes. That caused us to change our mind, and there are a lot of examples like that, of opportunities we've recommended or opportunities we haven't yet recommended, that have caused us to change our mind. Other things lead us to change our view too — we also talked about moral weights. As I said, GiveWell's views of the philosophical value of health relative to income have shifted over time, with us putting more weight on health, and that's shifted the allocation of funding and also the priority we give health programs relative to income-generating programs. Finally, there are just changes to GiveWell's situation. A couple of years ago, we would have said we're only going to fund programs that are above 10x cash — that was sort of a constructed threshold. As GiveWell raises more money, we just think we should go further down the curve. That's not a change of mind so much as a change in circumstance. And there are a million other examples, including organizations that have changed how they operate in order to make us more confident about them. An early example is the Against Malaria Foundation, which we first looked at in 2008.
At the time, there was very limited monitoring after they distributed malaria nets, and we said, "Hey, this is what's really holding up our recommendation." After that, they started doing post-distribution surveys, and then we started recommending them. So we've also had the ability to influence organizations and help them operate in ways we think are better.
SPENCER: Would you consider funding new projects? Because as far as I can tell, that's not something GiveWell has done — you're looking for existing projects. If not, I'm wondering what the thinking is there, because obviously you're not going to have a wide evidence base on new projects, but with new projects you could have a lot of influence, and for a small amount of money cause things to exist that may not have otherwise.
ELIE: We've actually done a fair amount of funding of new projects — it's a little buried on our website.
SPENCER: I didn't realize that.
ELIE: We funded New Incentives, which is now a GiveWell top charity — we gave them $100,000 to start up in 2014. I mentioned No Lean Season; we got them off the ground, and ultimately they shut down. Fortify Health and Charity Science Health — organizations that came out of the EA community, starting programs on iron fortification and SMS reminders for immunization in India — we supported those too. I think we probably will do even more of this over time. The current reason we're not doing more is limited capacity. Our big institutional or strategic challenge is finding great ways to give away a huge amount of money — we think we might be in a position to direct as much as a billion dollars by 2025. Small organizations tend to have very small budgets and don't get us as close, so in the very near term we'll ultimately have more impact by getting money to more established organizations that are able to help more people more effectively, more quickly. We've also had some success hiring recently — that's a big thing. We're aiming to hire more senior researchers who we hope will be intellectual leaders on our team, and as we do, we'll be able to do more. One of the things we most want to do more of is funding early-stage programs, and early-stage research that can lead to programs that can absorb more down the line.
SPENCER: One thing that's come up when people think about giving to GiveWell charities is counterfactual impact. Some very, very large donors may not be able to fully fund GiveDirectly, because it has such large capacity, but for some of the other recommendations, if the really, really large donors wanted to, they might be able to fill essentially the full funding need — as I understand it, or maybe I'm wrong about that. So this is a concern or confusion some people have: "If GiveWell has tight-knit relationships with these really large donors who could fund it fully, why should we fund it?" What's the right way to think about that?
ELIE: There are a couple of answers to this question. First off, the biggest donor that we work closely with is Open Philanthropy, and Open Philanthropy sets its budget by making its own decisions about how much to give to those recommendations every year — it has its own research staff. We can make an argument to Open Philanthropy about why we think it should give more, but we're certainly not in a position to decide for Open Philanthropy what it should be doing. In the very short term — in 2022 — we expect to be in a position to raise a little bit more money than we're able to give away; we think it will amount to about 10% of our annual giving. So we have what we call rollover funds: out of about $750 million that we aim to direct this year, we think about $75 million is going to be rolled over to 2023. But in the very near future, 2023–2024, we expect the overall needs to surpass the capacity of donors to give. It's possible we'll get Open Phil to give even more, but if Open Phil decides it wants to hold out for a higher cost-effectiveness threshold, I think we'll be aiming to get donors to give to things that are still really great — maybe things that are five times as cost-effective as cash transfers — and that, I think, is a really good deal that people should be excited about.
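As a quick sanity check on those figures, using only the numbers quoted in the conversation:

```python
# Checking the rollover arithmetic Elie quotes: of roughly $750M GiveWell
# aimed to direct in 2022, about $75M was expected to roll over to 2023.

aimed_to_direct = 750_000_000   # ~$750M aimed to be directed in 2022
rolled_over = 75_000_000        # ~$75M expected to roll over to 2023

rollover_share = rolled_over / aimed_to_direct
print(f"{rollover_share:.0%}")  # → 10%
```

So the "$75 million of $750 million" figure is exactly the "about 10% of our annual giving" he mentions.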
SPENCER: Some people have made the argument that if you give to a GiveWell charity that Open Phil could have filled to capacity, then in some theoretical sense you're putting money into Open Phil's coffers — which Open Phil will then try to use to help the world, so it's not a bad thing per se, but the counterfactual impact may not be what you think. Do you think that's flawed reasoning?
ELIE: I understand where that reasoning is coming from; it's a very reasonable question to have. Open Phil made a change this year where it committed large amounts of funding to GiveWell recommendations over the next few years — $300 million in 2021 and $500 million in 2022, I believe — and that was intended to be clear about the maximum amount that Open Phil would be willing to give in the short term. That said, I think a donor could still be concerned and say, "Well, isn't it possible that as GiveWell finds more opportunities, Open Phil will continue to increase its giving, and therefore the true counterfactual effect of a gift to GiveWell today is adding to what Open Phil would otherwise spend in future years?" I think that's actually a reasonable concern to have as of February 16 (the day we're recording this, February 16, 2022). The reason that I am giving to GiveWell, and still would recommend that donors do, is that we have grown our room for more funding — meaning the size of the opportunities we have — extremely quickly over the last few years. What we're aiming to do is surpass Open Phil's capacity to give, and my opinion — obviously biased — is that we're going to surpass that level within a few years, so this won't be an issue; our trajectory of finding more opportunities is, I think, good evidence that we'll be able to keep it up. To be brutally honest, I don't think it would be crazy or wrong for a donor to say, "Right now I'm a little concerned that giving to GiveWell is subsidizing Open Phil; I'd rather hold on to the money myself, trust myself to give in two or three years, and just give then." I know donors who are planning to do that.
It's not the thing I think is optimal, but I don't think it's highly problematic, as long as they trust themselves to actually give in the future.
SPENCER: So, final question for you. Sometimes people think that over-quantification creates lots of problems. I've seen this myself: people want to model something out, so they put numbers in a spreadsheet, but if you were to really carefully track the uncertainties, you'd realize the actual range of estimates spans three orders of magnitude or something — at which point, what's the point of the model at all? Or you realize there are assumptions baked into some of those numbers where, if you'd made a different, equally reasonable assumption, you would have gotten totally different results. So I'm wondering: does quantification sometimes go too far? And how do you think about guarding against that?
ELIE: [Laughs] It definitely goes too far sometimes. It's a big battle that I feel we're fighting, because on one hand you want to quantify, and on the other hand over-reliance on some number in a spreadsheet is really problematic. We're trying to fight against that in two ways. The first is always trying to think about the simple case for a grant, independent of the model itself — that's not in any way sufficient on its own, but I think it's really helpful as a gut check on the number the model spits out. Second, trying really hard to simplify models down. The website FiveThirtyEight — a few years ago, I really liked it when they had the deluxe version of their political predictions, then, I don't know, the classic, and then the really basic one. It's really helpful to have the super-deluxe version of the model, because it's fully quantified and tells you something, but then also to give yourself a sort of overly simplified gut check on what the complicated model shows, as a way of triangulating your view and not being overly reliant on just one particular set of numbers.
SPENCER: I like that approach a lot — really trying to home in on: what are the few factors driving this, and does that make intuitive sense, or is this thing outputting nonsense?
ELIE: Another question we try to ask is: let's say the numbers didn't exist — would we still make the grant? If not, why not? If yes, why? That's certainly not to say we're making grants purely qualitatively, but we're always trying to ask that qualitative question as a way of thinking more critically about the decisions we're making and not being overly reliant on some number.
SPENCER: Essentially, these two different methods are a qualitative evaluation of the impact and a quantitative one — they both have flaws, but they're sort of not the same flaws [laughs], so together they kind of complement each other.
ELIE: Yeah, totally.
SPENCER: This was awesome. Thank you so much for coming on.
ELIE: My pleasure. Thank you for having me.
JOSH: Before we wrap up the episode, though, here is this week's listener question. A listener wrote in to ask: "How do you choose guests to invite on the podcast?"
SPENCER: When it comes to who to invite on the podcast, I think about a few different things. One is: does this person have valuable ideas to share — ideas that will help people understand the world better or improve their lives? Second, I think about whether these ideas are novel to the listener; there are some great ideas that everyone's already heard many times, so it's not that useful to bring those on. The third is a kind of eloquence of speaking, because someone can have brilliant ideas but just not be very good at expressing them in real time, and so it wouldn't work very well for a podcast.