CLEARER THINKING

with Spencer Greenberg
the podcast about ideas that matter

Episode 305: What beats intuition when it comes to doing good? (with Marcus Davis)


March 28, 2026


Can radically different forms of good really be compared? What makes two charitable outcomes commensurable? When does cost effectiveness become a moral argument rather than just an economic one? Is helping the global poor often cheaper for reasons that are ethically relevant? How should we weigh temporary enrichment against preventing severe suffering? At what point does refusing comparison become morally evasive? Are some value systems too implausible to treat as equally serious? How much should location matter when the same intervention works in multiple places? Does the ability to compare causes require a single theory of value? What do we lose by pretending all forms of good are incomparable?

Marcus A. Davis is the co-founder and CEO of Rethink Priorities, a think-and-do tank that uses rigorous empirical research to help philanthropists, policymakers, and cause-focused organizations direct resources where they'll do the most good. Marcus writes about effective charity, EA culture, and arguments around doing good at his Substack, Charity for All, and you can also follow him on Bluesky: amarcusdavis.bsky.social.


SPENCER: Marcus, welcome to the Clearer Thinking Podcast.

MARCUS: Thanks for having me, Spencer.

SPENCER: When people think about doing good, some devote their efforts to improving educational outcomes. Some try to save the lives of children living in poverty. Some try to reduce suffering on factory farms. Many people think these are totally incomparable: you can't really say which charity is better than another, because they're in such different domains, doing such different things. But it seems to me that you think you actually can start to compare charities. So how do we resolve that?

MARCUS: Yeah, there are a couple of ways of thinking about this. You can start off by considering pretty narrowly similar cases. For example, I'm from Chicago. There is a major lead pipe problem in Chicago; in fact, it might be the city with the highest proportion of lead pipes in the world, for idiosyncratic historical reasons. I actually had lead poisoning as a child, and I'm fine. But you might think, "Okay, suppose one wants to do something about lead. What should they do?" Chicago is a relatively rich city in a very rich country on a global scale. Not surprisingly, if you try to address that problem, it's actually a lot cheaper to help people in low-income countries facing the same lead problem. So if your only goal is to reduce lead exposure and lead poisoning, you can compare low-income people and high-income people and conclude that it's cheaper to help low-income people.

SPENCER: Can you unpack that a little though? Why is it so much cheaper to help poorer people with lead problems than wealthier people?

MARCUS: Oh, because it literally just costs a lot more to do anything in rich countries. To a first approximation, if you have an average income in the United States, you are maybe in the 80th or 90th, sometimes 95th percentile of the global income distribution. If you're a relatively poor person in the United States, you may live on $25,000 to $30,000 after taxes, something like that. But if you're among the global poor, you may live on $3 a day. Services tend to just cost less in those places, even though the infrastructure involved has some fixed costs to build and so on. It still ends up being so much cheaper to help people in poor countries. This applies not just to lead but to many, many problems; for most issues, you run into this same dynamic.

SPENCER: You could imagine that part of that is the cost of labor: if you're trying to get something done, it's much cheaper to hire people to do it in a poorer country. And part of it could be diminishing marginal returns from having money. If you have quite a bit of money, you can already spend it in a lot of ways to help yourself; if you have very little, there may be many things you can't buy. For example, people in the US who live in an area with lead contamination could afford to buy a lead filter or something like this, whereas in a poorer area maybe they couldn't, even though it would be very cheap for someone to provide it. Is that the kind of issue we're talking about?

MARCUS: Yes, and I'd say two things there. One is, my prior apartment, not far from the University of Chicago, was not at all in a poor area. The building was over 100 years old and did, in fact, have lead pipes. I bought a lead filter, and I never drank the tap water once in more than two and a half years of living there. But it cost 50 to 60 bucks a month to have safe water for my wife and me, and that's just not something someone in a poor country could afford if they encountered the same problem. Stepping back from that very personal level, in relatively rich countries the government also steps in and does a lot of things for you. For all the problems Chicago currently has, the local and federal governments realize this is a problem and are taking steps to fix it over time; they've actually promised to eliminate the problem and replace all the relevant pipes over the next few years. That type of thing just would not happen in a poor country, because before the government gets down the list to something like this, there are a ton of more pressing problems to address first, and sometimes they might not even realize it is a problem. I mentioned pipes, but there's everything from lead paint to lead in the soil, and not every country or local government realizes these things are a major concern. As a result, they don't address them, even when such problems might belong near the top of the list. If you live in a relatively rich place, government officials have a lot more time to think about this type of thing and the money to actually do something about it.

SPENCER: So this tells us that if there's a particular intervention you really care about, whether it's reducing lead or something else, there's a good chance that even doing that exact same thing, you could do it much more cheaply and therefore much more cost-effectively, getting a lot more bang for your buck by maybe switching where you do it, for instance, doing it in a much poorer place. But that also seems like an easy situation. That's the easy case where we're saying, do the exact same thing, but do it more cost-effectively.

MARCUS: Yeah, this is definitely the easy case. You might imagine different circumstances where you're comparing two genuinely different things. I like to begin with a very simple example, but I think the point stands nonetheless. Someone might say, "These are very different things. How can you compare them?" On the one hand, consider putting on a fancy art performance for relatively rich people in, say, Chicago or New York or London. These are things real charities do; it's the type of thing that can happen if you donate to a museum or a symphony. It's something people will enjoy, but something relatively rich people would enjoy, and it would be a momentary pleasure, something that passes in a couple of hours, even if it's pretty good, right? Call that the high-end art bucket of charitable activity. On the other hand, you might imagine a charity with a really high chance of literally saving the lives of children. And this isn't hypothetical: a good example here is the Against Malaria Foundation (AMF), which provides insecticide-treated bed nets to families. This intervention is one of the most evidence-based things you can possibly imagine; there are dozens and dozens of RCTs on it. It's very clear it works, and it's very cheap. Relatively speaking, it doesn't cost that much, so we're talking maybe a couple of thousand dollars to save a life. These are totally different interventions, and someone might say, "Well, they're so different. How can you compare them?" I say in response, "Really? Does anyone really hold the opinion that you can't compare these at all?" I think this type of refusal to consider the merits is very implausible.

SPENCER: I do know people who will reject that. They'll make a couple of types of arguments. One is that these are just incomparable, that they're different values. You might say that most people value the reduction of suffering or the saving of lives more than an enjoyable art performance, but you could imagine someone with a value system where they don't. So maybe that's the first counter I'd raise. What would you say to that?

MARCUS: It depends exactly on how they press the point, but I would say not all value systems are equally plausible. You could extend the example further: instead of a high-end art performance, make it a small chance of a high-end art performance, and only for the richest of the rich in the world. So this is just for Jeff Bezos and Elon Musk and so on, and they have a 1% chance of getting some slightly above-average art. On the other side, press harder and say there's a 99% or greater chance of saving a bunch of lives. At that point, someone could still say, "I deny that these are comparable things; they really can't be weighed against each other at all." But I think a lot of people drop their guard at that point and go, "Okay, in that extreme circumstance, maybe these things aren't comparable in some traditional sense, but obviously I would prefer to do one of them." So that's one response. If they keep pressing on the other side, I would say that philosophical views about values aren't all equally plausible. Just because someone can say, "That's just my belief, that these things are incomparable," doesn't mean I have to take it seriously. Someone could make this type of claim about anything. Someone could say, "Those are just my values: I don't value people who are left-handed; they don't get any moral weight." You can make that claim, but that doesn't make it plausible. It doesn't make it remotely believable that I should take it seriously.

SPENCER: So on the point that you could push this to the extreme, you could say, well, what if it's some tiny chance of making the richest people in the world feel slightly more pleasure? I think you will eventually get to the point where people will say, "Yeah, okay, clearly saving children is more valuable. We can all agree on that," but they might argue it doesn't necessarily make the point for more realistic artistic endeavors that are not just about making a billionaire happy for a second. They might say, "Well, it becomes a harder comparison." Or do you feel that just admitting that there's some case where it doesn't work kind of gets you more traction?

MARCUS: I think once you admit there's some case where it doesn't work, then we're arguing about why that is. You can say, "Okay, in this circumstance, saving people is more important." Then: how much more important? If you ask, "Would you prefer saving 100 children or improving one person's life one time for five minutes?" they'll go, "Okay, I definitely prefer saving 100 children." Then we're just negotiating about where the line is. Once you're in that domain, I think the pretense that it's completely implausible to compare these has gone away; we're just talking about which of these trade-offs is more plausible. Basically, on any plausible ethical theory that a professional philosopher or ethicist would endorse, you're going to end up a lot closer to, "Actually, you can make certain types of trade-offs." Now, of course, philosophers and ethicists disagree about where exactly those lines are, but even the classic disagreement is about how demanding morality can be: how much effort it can demand of you. I think the answer, for most views, is a lot closer to "you should do some good things," particularly when the question isn't about some major sacrifice. You've already decided to spend your money on this thing; you should want to do some good with it. And if you can do more or less good, the view usually says, "Do more good." Of course, there are exceptions, but I think that's the general principle that holds.

SPENCER: Another counterargument I've heard is a rejection of thinking on the margin. It says, "Look, if what you're saying is true and we should always be doing marginal thinking, we'll end up in a world with no art, for example, because all the money will go into treating malaria and other immediate suffering reduction, and that would actually be a really bad world. So there's something wrong with the on-the-margin approach."

MARCUS: This is actually a good example of turning the dial all the way up. Sure, I think it would be worse for humans if there were no art at all. But that's not actually the trade-off we're talking about. This reminds me of a famous thought experiment where someone walking by a pond sees a drowning child. They realize they're wearing a particularly fancy outfit, and if they dive in to save the child, they'll ruin it. They're down a couple hundred bucks or something, but the child's alive, right? So common sense typically says, or so the thought experiment goes, "You should save the child." People have objected to this because if you take it seriously, then in reality, for all practical purposes, there are thousands, millions of drowning children; there are thousands of people out there you could be helping at any given moment. The idea is that if morality is maximally demanding, if you have to do everything you can to help everyone all the time, then you end up in a place where no one would ever do anything nice for themselves; everyone would be living at the margin. But most theories just don't demand that. Common sense wouldn't demand that. Instead, it demands you make some effort at least some of the time. Think about the income you have: as I mentioned, if you live in the United States, you're probably relatively well off, maybe in the 90th percentile or higher of the global income distribution. So you might think you should spend some portion of your money on these types of things. This isn't to say you must give to the point where you're as badly off as the people you're helping. It might say, "Hey, 2%, 5%, or 10% of your income," money you aren't really getting that much out of, should go to improving the lives of others. I think that's a much more plausible place to land. And this is even putting aside that I don't actually think you should go all in on one worldview. The idea that you should do at least some good for people, or animals, who are much worse off than you is pretty widely held across a wide range of philosophical worldviews, and I think that's because it's very feasible.

SPENCER: There's a variant on the drowning child thought experiment, which you might call the river of drowning children. Imagine you're walking along the bank of a river and there's a drowning child. You go in, save them, drag them out, and then there's another drowning child a few feet further on, and another and another. It turns out there are a million miles of river and a million drowning children; you literally could spend the rest of your life saving drowning children, and at any given moment when you choose not to, you're letting a child die. That seems to imply you should do literally nothing else for the rest of your life except get food and sustenance and save drowning children. Most people would say, "Well, that's above and beyond. That's too demanding. People shouldn't have to do that, at least." What does that say about the original drowning child thought experiment, if anything?

MARCUS: Yeah, philosophical thought experiments have their limits. I would say it's possible you shouldn't spend literally all your life doing that. I think we're kind of shadowboxing with maxed-out consequentialism or utilitarianism, particular philosophical theories that say you should always do the thing with the best consequences. Even those theories might say that for your own psychological health, you should only spend so much time on this type of thing. Other philosophical views would not be so demanding of your time. They might say, "In many circumstances, you should try to help," but they wouldn't say these are the only things that matter. Your own children, and other things that make your life go well, do have some value and importance: not just your own happiness, but the happiness of your friends and family, your country doing well, living in a democracy, gaining knowledge, and so on. How exactly these theories hash this out varies wildly, but many of them do not end up saying, "Actually, the thing to do is to devote 100% of your resources all the time to saving the ten-millionth drowning child." I should caveat that I run a charity that tries to help people who are considering helping the drowning child do so effectively. But even with that caveat, it's not like I spend literally 100% of my day thinking about this. I take breaks. I have hobbies and other things I do. It's just not the case that once you commit to "it's a good thing to do X," you must only ever do X, even when it's important. Though, of course, I think it's worthwhile, and something I have pursued with as much vigor as I can over the last decade or so.

SPENCER: One thing you mentioned before is that you consider the plausibility of the moral theory, saying, "Well, it's more plausible that the right moral theory is one that says you should reduce suffering a lot than one that says you should give small amounts of joy to billionaires or something like this." How do you think about the plausibility of moral theories?

MARCUS: Starting with the easy questions, I see. This is hard. There are systematic things you could try to do. You could say, "Okay, here are all these moral theories, here are the premises they rest on, and here are their results. I want to investigate the logical arguments, assess the premises, and see how I feel about each of these things." This is very tricky because for every philosophical theory, there's probably some variation that proposes something different. Nevertheless, there's probably something general here about whether there are good arguments for a position and whether its axioms seem plausible. I am not a professional philosopher, but I've spent a lot of time reading professional philosophers, engaging with them, and actually employing them to help work on these types of problems. It's just really tricky to come to some definitive take comparing consequentialism and deontology at a super macro level; there isn't a formula you can use to determine exactly how plausible each should be. You use some balancing act of looking at the arguments for them and assessing whether their implications are plausible. A phrase for this is reflective equilibrium, where you balance how good an argument is against how you feel about its conclusions after thinking about them. Personally, because this topic is difficult, because morality is hard, I think you should do at least some deferring to experts, people who think about these things all the time. And if you do that, both informal and systematic polling of philosophers shows they sharply disagree; it might be something like 30-30-30 across virtue ethics, deontology, and consequentialism. As a result, as a normal person, even someone relatively informed, it's really hard to insist that the correct answer is this particular version of consequentialism or whatever. People should just be humble. The evidence in this space is relatively weak compared to other domains, so I don't think you should say, "Actually, I'm 95% in on this particular version of this thing I read." For one thing, you haven't read all the things. I've been thinking about this topic for a decade, largely as part of my job, and I still encounter new variations and new theories and new ideas. I just don't think it's the type of space where you're really going to come to definitive conclusions.

SPENCER: How important to you in terms of the work you do is whether there is objective moral truth at all? If there is no objective moral truth, and it turns out that the whole idea is kind of nonsense, then does all this sort of thinking not matter? Or does it change how you think about it? Or do you treat the possibility of no objective moral truth as its own moral theory? And then there's something you "should" do even in that case?

MARCUS: This is a good question. I would say a couple of things. One, I take that principle about philosophical evidence being weak quite seriously. I don't think you should be that certain one way or another about a question this abstract, like whether there is objective moral truth, where what that even means is itself contested. Two, I spend a lot of time thinking about evidence in general. We mentioned there's a lot of evidence, randomized controlled trials, for AMF, the charity that does bed net distribution. That type of evidence is pretty strong. A randomized controlled trial is a great way to isolate the effect of something; before RCTs existed, it was really hard to know whether any intervention actually worked, and a lot of modern medicine is now based on them. Nevertheless, a single randomized controlled trial isn't that strong a piece of evidence on its own. Given all the problems scientific publishing has, like p-hacking, people finagling the results to get the answer they want, even a published study, a single RCT, isn't that strong. I bring all this up because a single RCT is way stronger evidence than what's available in most areas of philosophy. Maybe if you see a single RCT, you can be reasonably confident a thing works. But the types of things people bring up in philosophy are: I have this intuition; you have a counterintuition; here's an example that demonstrates my point. Often it's not even systematic, like saying, "Okay, here are all the relevant possible examples." As a result, I just don't take strong positions on many philosophical questions. There are things that are definitely better or worse, and there are things that are terrible ideas or logically impossible. But on the high-level questions of morality, like whether morality is objective or subjective, the evidence just is not definitive, so I don't press too hard on this question; given the nature of the possible evidence, you're not going to get a definitive answer. And if it turns out there's no objective moral truth, the classic response is: there's a theory that says none of this matters, and there's a theory that says it matters a lot. You should probably pay attention to the theory that says it matters a lot.

SPENCER: And can you still make decisions in a world where you don't even know what the good is, where you're trying to do the good, but you don't know what the good is, and there are a lot of different opinions on it? How do you approach that? Do you put it in baskets? You say, "Okay, this is my virtue ethics basket, and here's my utilitarian basket." Or do you try to do things that are sort of robust across different views? Or do you take a different approach?

MARCUS: Yeah, this is a great question. The answer you're hinting at is that there's a relatively infant field in philosophy about aggregating across different moral views. The question is: you have these moral views you're not certain of, and you want to combine them. There has been, for decades, a related field, social welfare theory, doing something similar. Similar questions arise when combining the views of the public in a democracy: what voting method should you use? How should you weigh the intensity of preferences against the number of people who hold them? The philosophical field is young; I call it an infant field because it started roughly around the year 2000, and there have been a number of proposals for how to deal with this. Given what I said about the weakness of evidence in philosophy, and the fact that the field is really young and philosophy moves slowly, with decades going by before someone comes up with a counterexample, I just think people shouldn't be that confident about whatever the answer is. But I do think you can do this. In fact, we at Rethink Priorities have spent some time trying to refine how exactly you would approach this type of question under moral uncertainty. You say, "Okay, I give 50% credence to utilitarianism, 25% to this version of deontology, 25% to this version of virtue ethics or contractualism or whatever. The first view would do X, the second would do Y, the third would do Z." There are different ways of combining them. One way might be to take weighted averages. Another might be to split the budget: if utilitarianism gets 50% of your credence, it gets 50% of the budget and does whatever it wants with it. There are other ways of going about this, including some hilariously complicated methods, but I think that type of approach is totally viable. In fact, if you take seriously the idea that you shouldn't be super certain about what you should do, you should do this. Not as an individual deciding where to give $100 or even a few thousand dollars, but if you're a foundation spending millions of dollars, you should be taking this seriously, because it's very difficult to assess how these things interact, and you shouldn't be confident you have the correct answer. One last thing to note: these aggregation methods disagree strongly about what ends up happening. One might give all the money to one particular area, while another, like proportional splitting, may end up in a totally different spot, with money going to very different things. I'm pretty sympathetic to the proportional view, in part because you're uncertain, but again, I'm not 100% confident it's the right thing to do. I feel better about it for both practical and theoretical reasons. It is very complicated, but I do think it's possible.
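
To make this concrete, here is a minimal sketch in Python of the two aggregation methods Marcus mentions: a weighted average of how each moral theory scores each option, versus splitting the budget in proportion to your credence in each theory. All credences, options, and scores are hypothetical placeholders, not Rethink Priorities' actual models.

```python
# Hypothetical credences in three moral theories (they sum to 1).
credences = {"utilitarianism": 0.50, "deontology": 0.25, "virtue_ethics": 0.25}

# Hypothetical scores: how choiceworthy each theory finds each option on a
# common 0-100 scale. (The hard problem of intertheoretic comparison, whether
# such a common scale even exists, is assumed away for illustration.)
scores = {
    "utilitarianism": {"bed_nets": 95, "museum": 5,  "animal_welfare": 80},
    "deontology":     {"bed_nets": 70, "museum": 40, "animal_welfare": 50},
    "virtue_ethics":  {"bed_nets": 60, "museum": 65, "animal_welfare": 45},
}
options = ["bed_nets", "museum", "animal_welfare"]

# Method 1: weighted average ("maximize expected choiceworthiness").
# All the money goes to whichever option scores best on average.
avg = {o: sum(credences[t] * scores[t][o] for t in credences) for o in options}
print("weighted average picks:", max(avg, key=avg.get))

# Method 2: proportional budget split. Each theory controls a slice of the
# budget equal to your credence in it and spends it on its own favorite option.
budget = 1_000_000  # hypothetical dollars
allocation = {o: 0.0 for o in options}
for theory, credence in credences.items():
    favorite = max(options, key=lambda o: scores[theory][o])
    allocation[favorite] += credence * budget
print("budget split allocates:", allocation)
```

With these made-up numbers the two methods diverge in exactly the way Marcus describes: the weighted average concentrates everything on bed nets, while the proportional split also sends a quarter of the budget to the museum, because that is virtue ethics' favorite option.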

SPENCER: I find it amusing just how different this is from the way almost anybody thinks. How many groups actually think about the moral philosophy behind their giving? It's so much more work than just saying, "Hey, this thing seems good to me. I'm going to promote it." The most effectiveness-focused organizations will at least try to be effective within their domain, but almost nobody thinks so broadly as to go back to the moral theories underlying it.

MARCUS: Yeah, I'd say a couple of things to that. One is, I've actually spent a lot of time recently thinking about this topic of aggregating views. In a paper I read, the author noted that a reviewer had objected to these elaborate procedures, asking who actually holds views about this obscure thing, who's actually going to make a decision this way? I thought to myself: I'll make a decision this way. I'll take it seriously. The author's response was similar: this is important, you should take it seriously, and something being complicated isn't necessarily a reason not to do it. A couple of things come to mind here. Last year, my wife and I had a baby, which was exciting and great.

SPENCER: Congratulations.

MARCUS: Thank you. But this presented some challenges. We have a sedan, and it turned out to be super inconvenient to get the car seat in and out of. It was so annoying. So we decided we should get a new car, but which car would be good? In this circumstance, the stakes aren't that high; getting a new car isn't the biggest decision in the world compared to spending millions of dollars. Nevertheless, I took it seriously and looked at a lot of options. It happened that the rebate for electric vehicles was expiring in a couple of weeks, and I was comparing hybrids to electric vehicles. There were costs for installing a home charger, different maintenance costs for the different types of cars, and different dealerships had different deals going on. I thought, "There's just no way I can hold all this in my head and come out with the answer." So I built a spreadsheet model that tried to account for all of it: this car has these features, it costs this much, this is the cost over the course of several years. That's how we picked which car to get. In some sense, the stakes here are trivial; having a marginally better car just doesn't matter that much. But if you're trying to do good with your money, if you're a foundation moving millions of dollars, the least you could do is this type of thinking, where you take the considerations seriously. You should think about at least three different domains. There's the empirical stuff: what happens when I give money to this thing? There are normative uncertainties: what matters, and how much does it matter? Then, as we were just discussing, there are meta-normative questions: how should I decide under uncertainty? How should I aggregate different views that have different opinions about what to do? How should I integrate views that say you should be completely risk-neutral with views that say you should give some weight to avoiding the worst outcomes? Rethink Priorities has spent time building tools that do exactly these things, because the stakes are very high and you shouldn't be that confident about your views. In this circumstance, you should build models to help you think through what to do.
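
As a concrete illustration, the spreadsheet Marcus describes boils down to a few lines of arithmetic. Here is a minimal sketch of a multi-year total-cost-of-ownership comparison between a hybrid and an EV; every figure is invented for illustration and is not from his actual model.

```python
# Toy total-cost-of-ownership comparison, in the spirit of the spreadsheet
# described above. All numbers are invented for illustration.
YEARS = 5
MILES_PER_YEAR = 8_000

cars = {
    "hybrid": {"price": 32_000, "rebate": 0,     "charger": 0,
               "cost_per_mile": 0.09, "maintenance_per_year": 600},
    "ev":     {"price": 38_000, "rebate": 7_500, "charger": 1_500,
               "cost_per_mile": 0.04, "maintenance_per_year": 350},
}

def total_cost(c):
    """Upfront costs plus fuel/electricity and maintenance over the window."""
    upfront = c["price"] - c["rebate"] + c["charger"]
    yearly = MILES_PER_YEAR * c["cost_per_mile"] + c["maintenance_per_year"]
    return upfront + YEARS * yearly

for name, c in cars.items():
    print(f"{name}: ${total_cost(c):,.0f} over {YEARS} years")
```

The point is not these particular numbers but that once rebates, charger installation, fuel, and maintenance all interact, an explicit model is a far more reliable way to see which option wins than holding it all in your head.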

SPENCER: Let's apply that to the car example to illustrate the point. So you've got empirical facts, maybe things about the fuel efficiency of different cars and the maintenance costs, etc. That's sort of the first bucket, and then the second bucket, as I understood it, was basically how much you care about these different things. Is that right?

MARCUS: Yeah, what matters and how much it matters. Like what we were talking about earlier: how much does it matter to save the life of a child versus putting on some momentary art show?

SPENCER: So for the car, it might be how much you care about how many miles you can drive without having to refill it. Some people might care a lot if they're going on long-distance travel; it might be inconvenient to have to refill it. Others don't care at all because they're only using it for daily stuff, and it's easy to get gas. So you could have all of these different categories where you care a little, you care a lot, or you don't care at all. And then the last one is sort of more meta about how you put it all together. How do you go from how much you care about the different factors and the empirical facts to your final conclusion? Is that right?

MARCUS: Yeah, though the car example is easier in one way, because I am the determiner of how much I care about safety versus fuel efficiency over long distances. We never drive long distances, so the question of whether we needed an electric vehicle with a 300-mile range just went off the table; the answer was no. But in the case of morality, it's much more complicated. There are experts who disagree about what you should do and how you put these considerations together. One way I like to think about this: imagine a very complicated puzzle where there's more than one way to put the pieces together. These different approaches are different ways of assembling the puzzle, and they produce different images, sometimes very different images. Philosophers vehemently disagree about this, and the field is pretty young. I've interacted with or know many of the relevant parties, and they strongly disagree with each other about what to do. If you're a normal person in that circumstance, think of it like looking at a field such as biology where the experts strongly disagree, with maybe ten different theories. And they don't just disagree about the outcome; they disagree about how you should even come to a decision, about which facts matter and what the facts logically imply, even when they agree on the facts. As an outsider who's not a biologist, my conclusion would be, "I should be really uncertain about what to do. I should take all of their views seriously and try to account for them." That isn't to say you should go into nihilism, saying no one knows anything so I'm just going to wing it. In that circumstance, you're just implicitly picking a side; you're going to pick a view whether you know it or not. It's like the old Keynes line about practical men who believe themselves uninfluenced by any ideas when they're actually heavily influenced by the ideas in the water around them. When you don't make your model explicit, you often end up making implicit decisions that are more difficult to investigate, trickier to pin down when you ask, "Why did I actually make this decision?"

SPENCER: How do you think an individual should deal with these things? They obviously don't have the time or resources, or usually the knowledge, to consider all of this. Should they just pick a domain they think is important and then try to work within it? Let's say they're going to give away a thousand dollars.

MARCUS: If they're giving away a thousand dollars, I think they should mostly ignore this. The meta-normative concerns are for decision makers making much bigger donations. However, the empirical uncertainty and the normative uncertainty do matter, particularly the empirical uncertainty. Almost every view is going to care about efficiency, and it turns out that some charities, even charities doing the same thing, as I mentioned with lead, can be way more efficient than others. The difference between charities working on something you think works might be 10x or 100x, so you should care a lot about what actually happens. If I were a person giving away a thousand dollars, the easiest answer is usually to just give the money to GiveWell, a charity evaluator that considers evidence very rigorously and decides what to do with the money. They have identified really high-impact charities, like AMF, which provides bed nets; that's a good example, but they recommend other charities working on malaria and other things as well. These charities are highly evidence-based, so you can be pretty confident you're doing something efficient, certainly far more efficient than spending the money randomly or picking a charity you happen to know nearby. You're way more likely to save a life, and efficiency is good on basically every view. Very few views say, "Actually, you shouldn't do this." Even if a given view holds that something else is optimal, this is at least going to be pretty good on almost every view.

SPENCER: Unless you had a moral view that says you really should help people close to you rather than far away, or something like that. Otherwise, if you can get more bang for your buck and help more people per dollar, in that sense it's probably going to be better.

MARCUS: Yeah, though for someone who feels compelled to help people near them, again, there are more and less plausible ethical views here. Obviously, if you walk down the street and see someone suffering, it's important to take that suffering seriously and try to help. But if you're planning some portion of your yearly giving, allocating some fraction of your money, you might think, "This is exactly the time when I should try to optimize." And to someone with a really strong view that they should only help people close to them, there's a famous thought experiment from the philosopher Derek Parfit about "within-a-mile altruism": a theory on which the only people who matter are those within a mile of you, and everyone beyond a mile doesn't matter at all. I think this is obviously silly, and he thought it was obviously silly too. But the trick is: okay, that's silly, but how is your view different? If you really pin it down to just nearby, what about soft borders around your city, or around your state? It's very tricky to pin down what distinguishes these. You might pivot from distance to something about connection or interrelatedness, but even those are hard to pin down. This isn't to say these considerations get no weight, but it's hard to defend a view that highly prioritizes people just because they're close to you.

SPENCER: Yeah, and yet it does seem to be a very natural inclination that people feel more obligation to those close to them than to those far away, although maybe it's mediated more through affiliation than distance. Imagine someone living right at the border of another country: they might view themselves as less obligated to help people across the border, even though they're very close, than people within their own country. I think a lot of people feel the most obligation to their family, then their friends, then their neighbors, with some kind of decaying obligation as they move outward in affiliation.

MARCUS: Yeah, I think that's right as a description of how people feel. A couple of things, though. First, I should back up and say, at a high level, this whole problem is genuinely difficult, and any attempt at rigor quickly runs into empirical and philosophical uncertainty. To the extent I'm saying you should care about all this machinery, I'm thinking particularly about large-scale actors rather than individuals. The border case is also complicated, because if someone lives really close to a border, what's going on is partly an ethical sense that people across it matter less, but also a practical sense that you can control much more of what happens inside your own borders. Democratically speaking, I have a lot more influence within my own country. Living in Chicago, I'm geographically much closer to Canada than to California, but because of the way democracy works, I can affect people in my country and my state far more than people who are physically closer. I'm certainly closer to Canadians than I am to someone in Hawaii, yet I have more influence over what happens in Hawaii. Because of those practical, empirical facts, you often do have more control over the lives of people in your own polity, and it's hard to disentangle that from the moral question itself. Again, it's hard to justify pure proximity. You can tell stories about why you should care about things you can control, or about some unity of purpose or mission. But if it doesn't cash out in some individual's life somewhere, it's going to be hard to pin down. And if it does cash out in someone's life, then you're back to the challenge: am I really going to value this person four or five times more than someone else just because they were born on the wrong side of a border? That's obviously very difficult to justify, and in most circumstances it isn't justified. Just because someone shares your nationality is not a good reason to favor them over someone else.

SPENCER: Do you think we should essentially have two modes of thinking? One for when we're doing more universal altruism, like giving away money as part of an annual donation, versus our everyday lives, where you might say, "Of course I'm going to value my child more than five times as much as a random stranger across the world."

MARCUS: First, I would say that almost no worldview holds that you shouldn't care about your family, shouldn't do anything for them, help them out, or support them. I think it's very plausible that when you're making big structural decisions, about your career, about your donations, or even, practically, about things like voting and how much you can help the world, it's really important to think carefully and try to optimize across the best worldviews you can. Day-to-day life is different. I run a think tank; we think about these things all the time. But I also watch TV, I watch sports. I am not constantly optimizing my life this way, even if I tilted toward it when buying a car. That's just because a car is a big, multi-thousand-dollar decision, and these other things aren't. Even there, in my personal life, the same kinds of considerations may not apply in the same way. I don't want to suggest that the optimal thing is to spend 100% of your resources on whatever theory you think is best, partly for the same uncertainty reasons. It's not obvious that that's what you should do at all times.

SPENCER: Changing topics a little bit. When you think about trying to make big decisions, whether it's about where to give money or starting a project, some people are of the view that you should explicitly try to model out all the factors. You should make a giant spreadsheet with every consideration and try to turn it into a calculation of some kind. Other people think that's silly, that essentially, with these kinds of really major decisions, you're never going to be able to model every factor, and attempting to do so might actually make it worse than if you didn't at all, if you just thought about it and felt it out and considered multiple sides but didn't try to explicitly model it. Where do you land on that?

MARCUS: For big decisions, I'm definitely more on the side of making your models explicit. One of the primary reasons is my response to people who say, "Well, the explicit model is going to get something wrong." That's definitely true; explicit models, even complicated ones, get things wrong all the time. But my counter is: if a complicated explicit model can't capture the situation, why do you think your intuition will? Your implicit model is vague and not transparent; you can't dig into its choices. At a minimum, the explicit model makes it transparent where you're making decisions. With an implicit model, or no model at all, just your gut, you're still making some representation of how the thing you do will lead to an outcome. So the question stands: if a sophisticated model can't capture this, why think you can just intuit the right answer? In most circumstances, refusing to model is letting the perfect be the enemy of the good, and I don't think that's a good idea.

SPENCER: It seems like there's a major failure mode where people trust explicit models too much, because they think, "I did this calculation. It's not just me making stuff up. I put in a bunch of numbers and got a number out." I think there are two ways they can over-trust that. One, they can think that because the output came from an explicit process, it's probably reasonable. The other, which I think happens a lot, is that if the model doesn't represent uncertainty, they can really underestimate the uncertainty in the output. The output might actually be a thousand plus or minus 950, right? It might be so uncertain that treating it as a point estimate of a thousand is very misleading.

MARCUS: Yeah, I agree entirely. Those are good, reasonable, and common failure points of explicit models. A very common mistake is treating the point estimate as though it's the only thing that matters. Suppose an intervention saves a thousand lives in expectation; someone says that's all that matters. But two interventions can look the same in expectation and be very different. One saves a thousand lives in 99% of cases and zero in 1% of cases. Another saves no lives in the vast majority of circumstances but, in 0.01% of cases, saves a very large number of lives. Those interventions are very different, and if your explicit model just uses the average, you lose a sense of what matters. This is something I've thought about a lot: you want to capture how different views treat risk differently, and you don't want to just go with your best guess, particularly when you shouldn't be confident about what your best guess is. If you build the model, you can catch something like this and be honest with yourself about how confident you should be. In some circumstances you should be super confident; in others you should not. Similarly, there's the failure of thinking, "This model says so, therefore it must be true," without accounting for model uncertainty: the chance that you did something wrong, that the inputs are wrong, and so on. It's so easy to trick yourself. But again, is it easier to trick yourself there than it would be without a model, just guessing? As the situation gets more complex, it becomes harder and harder to hold all the factors and their interactions in your head. You might care a lot about helping people, but there are diminishing marginal returns; now you have to model diminishing returns on one view, and on a second view you care more or less about them, and the things you care about have different risk profiles across two interventions, in all these combinations. As soon as it gets complicated, it's easy to fool yourself into thinking your intuition is tracking the right answer. Often there's no good reason to believe your implicit model will do better than an explicit one. At least with an explicit model, you can be transparent with yourself about what's going on, and then you can say, "Okay, this premise can be challenged; maybe I should make the model better." Of course, it's also possible the decision just isn't that important and you don't need any of this. But at a minimum, it's unclear to me that staying implicit actually helps you.
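
Here is a minimal sketch of that point about point estimates, with invented numbers: two interventions with (nearly) the same expected lives saved but very different risk profiles, scored both risk-neutrally and with a simple risk-averse weighting. The square-root utility function is purely an illustrative assumption, not a recommendation.

```python
import math

# Two hypothetical interventions with the same expected value (~990 lives)
# but very different risk profiles, as in the example above.
# Each is a list of (probability, lives_saved) outcomes; numbers are invented.
reliable = [(0.99, 1_000), (0.01, 0)]            # almost always works
longshot = [(0.9999, 0), (0.0001, 9_900_000)]    # almost never works

def expected_value(lottery):
    """Risk-neutral score: the plain probability-weighted average."""
    return sum(p * x for p, x in lottery)

def expected_utility(lottery, u=math.sqrt):
    """Risk-averse score: a concave utility discounts huge-but-unlikely payoffs."""
    return sum(p * u(x) for p, x in lottery)

for name, lottery in [("reliable", reliable), ("longshot", longshot)]:
    print(f"{name}: expected lives = {expected_value(lottery):.0f}, "
          f"risk-averse score = {expected_utility(lottery):.1f}")
```

A pure point estimate (the expected value) calls these two interventions equivalent, while the risk-averse weighting sharply prefers the reliable one, which is exactly the distinction that collapsing a model to a single number hides.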

SPENCER: I think we would agree, if it's not an important topic, why are you spending the time? It's a big time investment to try to explicitly model it, so we're already limited to the realm of important decisions. A counterargument that people sometimes give is that trying to explicitly model everything ends up leading to lots of time spent focusing on minor details, whereas you can use your time more wisely by really thinking about what are the one or two or three factors that this turns on that will really make a difference, and spending time focusing there rather than on, say, "Okay, I need to get all 100 inputs into this right, even though most of them are not going to move it at all."

MARCUS: I have to say a couple of things here. One is, it's hard to know which factors everything hinges on unless you actually go through the process and do some type of modeling. An example: suppose you hold a theory and say, "I'm not certain this is right; maybe I'm only 90% sure it's true," with the other 10% spread over some other theories. In that circumstance, there are many different things you could do. You can hedge across theories: for example, give 90% of resources to the dominant theory and 10% to the alternatives. You can hedge within the theory, say by avoiding the 10% most extreme outcomes on the dominant theory, whatever exactly that means; that's complicated. You can simply act on the dominant theory and call 90% good enough, though of course if your credence drops to 50% or less, that gets harder to defend. Or you might do some type of informal negotiation among the views, where the resulting choices are not intuitively obvious. Importantly, several of these moves are themselves very complicated. I mentioned GiveWell already; they're an excellent organization, and if you visit their website and want to see how they make decisions about cost-effectiveness, they publish spreadsheets with dozens of inputs. I don't think the researchers could have told you in advance which inputs the decisions most hinge on, because they can't hold all the multiplication, division, and exponential decay functions in their heads. So how would they know in advance that they'd captured the most important things? It's very difficult, and presumptuous, to think you just know in advance which three things matter most and therefore don't have to look at the details. Definitely don't spend your time on things that don't matter once you know they don't matter. But unless you build a model, how would you even know that?

SPENCER: Maybe it depends, to some extent, on what you're trying to model. If I think about real-life situations where someone has shown me an explicit, very complex spreadsheet, what I often find is that when I look through it, I disagree with some assumptions they made, and then I throw away the whole model. Unless I model for myself how the conclusion hinges on those assumptions, I no longer believe it. My biggest concern is brittleness, right? These models rest on a lot of assumptions and can be brittle with respect to them. But maybe you're going to say you should be mindful of those assumptions and look at how the result changes across them. Is that how you'd respond?

MARCUS: This is fun, because I actually agree. I should state upfront: one of the first things I did when I started my organization was try to estimate the cost-effectiveness of researching and developing vaccines, and this involved gathering a bunch of data and a bunch of modeling. One really hard lesson was that it's so easy to make a mistake, even setting aside the uncertainty about whether you believe a given assumption. On top of that, for some assumptions there's no data, no actual answer, and you just have to assume something, and those assumptions can be important. In those circumstances, make those assumptions variables; don't use a point estimate. Make it a distribution where possible, or a slider where you can change the value. I think you're right to be skeptical of people building models, particularly a complicated model someone built by themselves that's never been checked and hasn't gone through peer review. Be skeptical in that circumstance, particularly if there's a controversial assumption. But one of the fun parts is that when you realize you disagree with a conclusion because you disagree with an assumption, that is itself valuable information. Instead of throwing the model away, you can sometimes copy the spreadsheet and say, "Okay, I've changed the assumption; here's the input I would have used instead." If the results don't change, then even though you intuitively disagreed, you end up in the same place they did. Or the results do change, and you go, "Okay, I should behave differently; here's what I should do." A very easy mistake for people in this space is to overestimate the chances that all of their assumptions are reasonable. The classic line is: all models are wrong; some are useful. I think that's still true in this domain.

SPENCER: Would you advocate that any parameter with a reasonable amount of uncertainty be modeled as uncertain, for example using Monte Carlo simulation? There are now some nice tools that make that kind of thing a lot easier.

MARCUS: Basically, yes, with a couple of complications. Again, how important is the decision? Are there other ways of capturing the thing you want? But to a first approximation, I do advocate for this, and my organization has in fact done this type of thing many times when building a model. A lot of parameters are uncertain, so instead of giving a single input, you use Monte Carlo simulation or some other approach like that, where you take many samples across the distribution and then you get an answer. The uncertainty doesn't always have to be explicit in the model itself; sometimes you draw the answer from a distribution and handle it on the back end, where you can test the result and the inputs afterward. But the general principle is that you should not just use point estimates when things are really uncertain. I completely agree with that.
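A minimal sketch of what "use a distribution instead of a point estimate" can look like in practice, using plain-numpy Monte Carlo. The distributions and their parameters are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # number of Monte Carlo samples

# Instead of point estimates, draw each uncertain parameter from a
# distribution. The ranges here are invented for illustration.
cost_per_unit = rng.lognormal(mean=np.log(5.0), sigma=0.3, size=n)
effect_per_unit = rng.beta(2, 8, size=n)  # mean ~0.2, wide uncertainty

cost_per_outcome = cost_per_unit / effect_per_unit

# Report a distribution of answers, not a single number.
lo, mid, hi = np.percentile(cost_per_outcome, [5, 50, 95])
print(f"cost per outcome: median ${mid:.0f}, 90% interval ${lo:.0f}-${hi:.0f}")
```

Tools like Guesstimate or Squiggle wrap this same idea in a spreadsheet-like interface, which is what Spencer alludes to next.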

SPENCER: Yeah. Tools like Guesstimate really help simplify that kind of process. It seems to me that if you think about what sorts of decisions complex spreadsheets or explicit models are better for, they tend to be better when you have a clearer sense of what the input values are. There are situations where there's so much uncertainty about so many inputs that if you were to actually model it explicitly and do it properly, the output at the end is just such a wide distribution that you realize you don't know anything. I wonder if, in that case, intuition actually outperforms trying to model it.

MARCUS: I'm sure this definitely happens and does describe some situations. But again, the classic question is: how do you know you're in that situation in advance? I think some people would look at even something as complicated as what GiveWell does and say it's totally impossible, you can't model this at all, too many assumptions, too many inputs, you'll never get there. I think that would be empirically incorrect about what you can learn from this type of thing. I'd also back up and say: I'm not a scientist, but tons of times scientists have some challenging question they want to tackle, and they build a very complicated model. That model may be difficult to explain, and it may have a lot of uncertainties, but at minimum it might narrow the space of options. Okay, we were uncertain about options one through ten; after building this model, even though we're still uncertain, it's actually only two through seven, or something like that. That is itself a valuable piece of information. And thinking about doing good, I think this is still the case. One of the things about building models is that it makes your disagreements more explicit. Sometimes I think it's fine to say, "Well, actually, after doing all this analysis, we are still uncertain; a wider set of options is on the table." That seems totally fine to me. I don't think the answer has to resolve to giving all your money to one particular group with no other options; I don't actually think that's how things work out when you account for all these uncertainties. One reason to account for the uncertainties is that the underlying views disagree, and if the views disagree and you're using some method to combine them, you're often going to end up with a number of possible outputs. Knowing that all of these things are live options is itself a useful piece of information. Going forward, instead of trying to narrow things down even further, you might say, "Actually, I know each of these six things is reasonable," or, "This is the variable I'm most uncertain about, and it's susceptible to further research, so this is what I'm going to focus on, and as a result I can narrow things down even more." I think that's the upside case for this type of thinking.
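One way to make the "narrowing options one through ten down to two through seven" point concrete: even when each option's estimate is a wide interval, you can still discard options whose best case falls below another option's worst case. A hedged sketch with invented intervals:

```python
# Sketch: prune options whose interval's upper bound falls below the
# best option's lower bound. All intervals are invented for illustration.

options = {
    # option: (5th percentile, 95th percentile) of modeled value
    "option_1": (1, 4),
    "option_2": (3, 12),
    "option_3": (6, 20),
    "option_4": (2, 5),
}

best_lower_bound = max(lo for lo, hi in options.values())

survivors = {name: (lo, hi) for name, (lo, hi) in options.items()
             if hi >= best_lower_bound}
print(survivors)
# option_1 (upper bound 4) and option_4 (upper bound 5) drop out:
# even their best case is below option_3's worst case (6). We remain
# uncertain between option_2 and option_3, but the set has narrowed.
```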

SPENCER: I think where I net out on all this is a little bit different from where you do, and I'm curious to hear what you think of what I'd say, which is: when it's really important, it can be valuable to try to model it out and put it in a spreadsheet, but you should pay really close attention to where it disagrees with your intuition and run a kind of feedback loop. Say, "Okay, my intuition was that x would be better than y. The model says the reverse, that y is better than x. Let me inspect why that's happening. What are the factors driving that?" When I look at those factors, does my intuition say, "Oh yeah, actually, I was misunderstanding it"? Or does my intuition still push back and say, "That doesn't seem right"? Try to reach an equilibrium between your intuition and the spreadsheet, rather than thinking of the spreadsheet beating your intuition.

MARCUS: Yeah, a couple of things there. In some sense, I actually agree with you that this is a reasonable way of thinking about it. In another sense, going back to where we started the modeling conversation: why would you think your intuitions are particularly refined in the circumstances where you'd build some complicated model? I'm quite sympathetic to the idea in some domains. I watch a ton of movies, and by now I have a great sense: I've read the two-sentence description of this movie, I know who's directing it, I know roughly what the genre is. Will I like this movie? I'm pretty good at this. I don't need any more complex analysis; my intuition is very refined about that. That's an area where intuition is useful and reliable. Will I like this thing? Sure. But what should I think about the way my moral views combine, given some complicated aggregation method? That's not a domain where I think human intuition is going to be that useful. The more you refine the question to a smaller subset of the problem, the better I feel your intuition is going to do. You might say, "Yeah, I care about risk. This model says if you apply this risk filter, this option comes out ahead of that one. Does that roughly sound right?" In that type of domain, maybe your intuition is going to do better. But the more complex it gets, the farther you get from the reliability of human intuition. So I'd say, yes, you should always compare the model to your intuition and think about what it says. Also, in general, and I can't believe I haven't said this yet: you should still use your judgment. I'm not saying blindly follow the model wherever it goes. You should still ask, "Do I believe this? What's missing?" One of the most important things to do with any model is to spell out its limitations. What's missing from the model? How would those things change my opinion? Then make a decision. Nonetheless, I do think that in a lot of circumstances, your intuition is just not going to be that helpful.

SPENCER: Another point I wonder whether we disagree on: I think you should try to start with the simplest model that captures the phenomenon. In other words, rather than saying, "Let me gather every single factor that matters," you start with, "Okay, what's the crux of this?" Try to model just that first, and then you can add on all the bells and whistles later. For example, if I were thinking about trying to reduce malaria with bed nets, I'd be thinking, "Okay, what are the key variables? What's the cost of getting a bed net to someone? What's the chance they use the bed net? What's the chance that if they use it, they won't get malaria? What's the chance they would have gotten malaria if they hadn't used it? What's the chance the malaria would have killed them?" Okay, I've got five or six variables that seem like the key things, and I would try to model just that before I get into complicated second-order effects and diminishing marginal returns, et cetera. The reason is that simple models have one huge advantage: they're much easier to introspect on, and because of that, it's a lot easier to find issues with them. You can keep the whole thing in your head, and if something doesn't make sense, it stands out. You can be like, "Oh, that's the thing that doesn't make sense. Maybe I have the wrong number there."
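Spencer's five-or-six-variable bed net model is easy to make explicit. A minimal sketch with placeholder numbers; these are illustrative values, not GiveWell's actual figures:

```python
# Sketch: the simplest bed net cost-effectiveness model, using the
# variables Spencer lists. All numbers are placeholders, not real data.

cost_per_net_delivered = 5.00   # dollars to get one net to a person
p_net_is_used = 0.70            # chance the recipient uses the net
p_malaria_without_net = 0.20    # chance they'd get malaria without it
p_net_prevents_malaria = 0.50   # chance the net prevents that case
p_death_given_malaria = 0.005   # chance a case would have been fatal

# Expected deaths averted per net delivered.
deaths_averted_per_net = (
    p_net_is_used
    * p_malaria_without_net
    * p_net_prevents_malaria
    * p_death_given_malaria
)

cost_per_death_averted = cost_per_net_delivered / deaths_averted_per_net
print(f"~${cost_per_death_averted:,.0f} per death averted")
# With these placeholders: 5 / (0.7 * 0.2 * 0.5 * 0.005) = ~$14,286
```

The point of starting this simple is exactly what Spencer says: every multiplication is visible, so a wrong input or an implausible output is easy to spot before any second-order effects get layered on.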

MARCUS: Oh, actually, this is a pattern at this point, I'm just going to agree with you. I think, in this circumstance, if you're modeling bed nets, start simple. What are the key factors that matter? Then attach additional things to it. One of the reasons I'm advocating for complicated models is that the thing we're modeling isn't just bed nets. It's bed nets versus giving cash to people directly, versus helping chickens on factory farms, versus thinking about people in the far future, versus climate change.

SPENCER: It's an incredibly hard modeling problem to deal with, right?

MARCUS: Yes, it's an incredibly hard modeling problem to deal with. The approach we're taking is: what's the simplest model that still tries to capture all of those things, getting in all the factors that matter, and then building out from there. A good example: in an ideal model, you might capture things like interaction effects between grant recipients, not just interaction effects between donors. One donor might do something that impacts what another donor does. You might also ask: if I give money to this group, how will this group respond? That type of dynamic definitely happens in reality, but modeling it is a nightmare. How would you even go about it? You might say you should ignore all that, and I agree that, at least initially, you should. On the other hand, suppose you are, I don't know, I am not Bill or Melinda Gates, but if you are in the Gates Foundation and you're giving away literally billions of dollars, you have time, you have resources, you have the ability to think about what you do. You can start with the simple model and build it out over time. And you don't have to be the Gates Foundation for the stakes to matter here. If a foundation is giving away a couple million dollars a year, that could be significant; a small improvement can be worth several hundred thousand dollars, so you might think it's worth spending a lot of time trying to refine this. That's kind of my approach. As the situation gets simpler, you should probably start with simpler models; as the stakes get higher, it's worth investing more and more time. Maybe I'm advocating for this complicated approach mostly because the stakes in the situations I'm thinking about are so high. There are many foundations we interact with who are, in fact, giving away millions of dollars and do not take this approach at all, or don't take it seriously. I would push back against them. I wouldn't push back against you, certainly. If you're thinking about bed nets, that can be a simpler situation. You don't have to start with GiveWell's decade-plus-developed spreadsheet. They didn't start like that; they've been making recommendations based on their findings for more than a decade, and it wasn't because they guessed everything up front. It just got more sophisticated over time as the stakes got higher and the money they moved got bigger.

SPENCER: Do you think people should be doing this kind of approach for important but everyday life decisions, like, "Hey, should I buy a house? Should I get married? Who should I marry? Should I have a baby?" Or are we hitting domains where maybe the house is okay, because it's more concrete, but whether to have a baby is beyond what you can model with this kind of thing? What do you think?

MARCUS: This is fun. Obviously, in some sense, running a think tank puts me in the upper percentile of people who think about building models. And I can tell you, in certain situations in my actual life, I use models. Cars are actually a really good example, where your options are in some sense very similar but differentiated, whereas I think a house can sometimes be hard to pin down. Nevertheless, there have been times in my life when I've thought through what I care about in an apartment. The thing I must have, for the record, is a dishwasher; it saves a lot of time. So you might model that type of thing, but even there, it doesn't get that complicated or complex. I think in many circumstances it can be useful, and in some circumstances it is not. I did not have a spreadsheet about whether I should have a child.

SPENCER: But why not? Isn't that a big decision? I am teasing a little, but I'm curious.

MARCUS: I did, in fact, think about it very carefully for a long time. It's a long-term decision, right? It's something where I had to feel the same way about it for a very long period of time to be comfortable with having a child. And that's exactly the thing about getting married, having a child, or buying a house: these are decisions you want to be careful about because the stakes are high, and you want to feel confident, going in, that you'll feel the same way a long time afterward. Buying a house or deciding to rent an apartment is the type of decision that's pretty straightforward. The variables are obvious: what will the rent be, what are the repairs and those types of costs, where is it in the city, what's the transportation like. Those are easy to plug into a spreadsheet. The more complicated things about your personal life, like how much you'll enjoy something you've never done and have no experience doing, are harder. Weirdly enough, that type of question can be more complicated than something like how many people are helped by an intervention. That's a much more straightforward thing to plug in than what your experience of something will be five years from now, or a situation with a dynamic feedback loop. A good contrast: what deontology, or some other philosophical worldview, says about the world is largely independent of the in-the-weeds facts. It's not like a political model, where someone is trying to model voter behavior, the voters respond to the model, and you get weird feedback loops. Those feedback loops actually do happen in your personal life. I wouldn't say that in those circumstances you should try to model it all out. I would say: do your best. Think about it very seriously, take it seriously if it's important to you, and do your best. But it's not necessarily easy to put into a spreadsheet.

SPENCER: We're actually planning a study where we're going to ask people making a big life decision to let us randomize them to different decision-making techniques. One of the things we want to test is whether giving people really simple decision processes, like making a pro-con list, leads them to underperform or maybe outperform a more complex one, like a weighted factor model, where you list all the factors, score every option on each factor, and weight every factor by how much you care about it. I'm really intrigued to see whether the more complex model, where you actually get into the nitty-gritty of scoring lots of things, actually wins over simple methods.
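For reference, the weighted factor model Spencer describes is simple to write down. A sketch with hypothetical factors, weights, and scores:

```python
# Sketch: a weighted factor model for a big decision.
# Factors, weights, and scores below are all hypothetical.

weights = {          # how much you care about each factor (0-10)
    "cost": 8,
    "commute": 5,
    "space": 6,
}

options = {          # score each option on each factor (0-10)
    "apartment_a": {"cost": 9, "commute": 4, "space": 5},
    "apartment_b": {"cost": 5, "commute": 8, "space": 8},
}

def weighted_score(scores, weights):
    """Sum of (factor weight x option's score on that factor)."""
    return sum(weights[f] * scores[f] for f in weights)

for name, scores in options.items():
    print(name, weighted_score(scores, weights))
# apartment_a: 8*9 + 5*4 + 6*5 = 122
# apartment_b: 8*5 + 5*8 + 6*8 = 128
```

A pro-con list is effectively the degenerate case of this model where every weight is plus or minus one, which is part of what makes the proposed comparison interesting.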

MARCUS: Yeah, this would be pretty fascinating. I think this is one of those things where it kind of depends on what the set of decisions is, because you might have really good intuitions about certain things. Another good example: you don't have to try every flavor of ice cream to have some sense of the types of ice cream you like. You can look at a flavor and ask, "Do I like flavors like this? Have I had things similar to this?" You can imagine it even if you haven't actually experienced it. I think a lot of things in life fit closer to that category than to the kind of thing where another method would be super beneficial, where the context or the factors are well outside your normal, everyday experience. But in the charitable case, I actually think that is not true: a lot of things are well outside your experience and too complex to hold in your head. So I think this type of study would be pretty interesting. The tricky part is that in your personal life, you can imagine more easily tracking who was right; in morality, how would you tell? Ultimately, people made different decisions, but who was correct? That might be the problem.

SPENCER: Yeah, it's a really tough one. It's even tough with decisions because suppose you randomize people to do A rather than B, and you find people who did A are happier with the decision. It doesn't necessarily mean the decision was better; it just means they were happier with it. So it actually is pretty tricky.

MARCUS: Yeah, and there's probably a lot of bias here too: having committed to the thing will bias you. "Of course I made the right decision about that, because it worked out." And I've learned the lesson that you can't run the counterfactual in your head of what would have happened in the other scenario.

SPENCER: Before we wrap up, I'd be curious to know, why is it that you've decided to devote your life, your career, and your money to trying to help others effectively?

MARCUS: Yes, this is a great question. Very basically, it's possible to make the world better, and I think I owe so much of what I have to the people who came before me and tried to make the world better for me. I think a lot about the fact that for hundreds or thousands of years, all across the world, people fought, scratched, and sometimes died trying to make the world a better place for other people. And I stand here today, or sit here for this podcast today, able to do the things I can because of those people. So I think about giving back to others what I've been given. A couple hundred years ago, most of the world lived in extreme poverty, and people fought really hard for that not to be the case. I know that if I were in their position, I would want people to fight for me to have a reasonable life. The Gates Foundation has kind of a motto that everyone in the world deserves to live a decent life, and that type of thing really rings true to me. It's possible to improve the world, and as a result we should try; we should do things that help other people. I'm in a position where I think I'm particularly suited to do the type of work that analyzes how we can do that better, so I feel very inclined to do it. I think, in expectation, this is a much better way of spending my life than doing something that merely affects myself or makes me marginally happier. And I love my job. My job's not a grind at all, and on top of that, I get to work with some of the smartest and most caring people I've ever interacted with, people who try to help others every day. I find that really inspiring, and I don't take it for granted, not even one bit.

SPENCER: Would you say that you found a way to work on helping others that is not a big sacrifice for yourself because you enjoy the work, you enjoy your colleagues, et cetera, or do you feel like you actually are making a big sacrifice?

MARCUS: Personally, I don't feel like I'm making a sacrifice. The things that I think about are things I think about for free. The reason I ended up in this position is because I thought about these topics. I thought about how I should improve the world, how we should do good, and how can you know you're actually having an effect? Who should you help and why? These are considerations that I had been thinking about for a long time before I started Rethink Priorities or even before I found out about the idea that people were doing this more systematically, not just in their personal life, not just going to book clubs where they live and debating these topics. So, yeah, I don't think I'm making a sacrifice at all in some generic sense. I'm sure it's the case that if I was being extremely selfish, I would maximize my happiness. But I just don't think that's super relevant to making a decision.

SPENCER: Some people get discouraged because they feel like helping the world effectively is really difficult. You kind of bump your head against the world, and then you're like, "Ah, screw it." Do you think that it's really hard to help the world?

MARCUS: Relative to doing something that would immediately make me happy, it is definitely hard. It's definitely harder to help the world than to find a lunch I like. But I don't think I ever feel discouraged by how difficult it is. In a lot of charity work, it's hard to figure out what works because of all these complications we've discussed. It's hard to know what's good, and it can be difficult to pin down whether you're actually doing the right thing. But that makes it more important to actually do the work. In some sense, the work we do is not obvious; otherwise, someone would have done it already. But in another sense, I'm not doing rocket science here. I don't think what we're doing is so complex and incomprehensible that it couldn't be done or that we can't make progress. I feel quite confident that a lot of the work we've done has directly improved tens of millions of dollars in grants, sometimes possibly hundreds of millions, where people were spending money in ways they hadn't thought through or could have done better. I don't think helping the world is so hard or so impossible that we should give up. Far from it: over the last few months, particularly as I've been thinking a lot about how to help some large decision-makers make better decisions, I've never felt better about trying, because I think we're well positioned to do it. I am personally well positioned to do it, and the decisions are very likely to go better as a result.

SPENCER: In the for-profit world, there are these feedback loops where if something's making money, it can use that to expand its business, and if it makes no money for long enough and doesn't have some funding source, it goes out of business. So there's a sort of self-correcting mechanism, if you will. Even if making money is not necessarily helping the world. In charity, this doesn't necessarily happen. You could have a charity that doesn't do good for the world, but it's good at marketing itself to donors and continues to exist. Do you think this means that there are actually more opportunities or more inefficiencies in charity, which means that it may be even easier to help than to try to make money, where it's kind of more cutthroat and has a lot more feedback loops?

MARCUS: I think this is right. As you suggest, in a for-profit market, there's a clear sense of dollars in, dollars out. Is the number going up? Did the number go down? There's sometimes a stock price if you're a for-profit company. In those circumstances, it can be pretty straightforward to tell whether you're being efficient or effective. The nonprofit world is not like that, and that's an obvious downside: tracking impact is not that easy. On the other hand, because there often aren't super large financial rewards in the charitable sector, there aren't as many people trying, so an individual who cares and wants to do better in this space can make a significant difference. To be honest, this doesn't just apply to nonprofits. I guess this same logic is one of the reasons people can do a lot of good working in local, state, or federal government in various countries. Where there are big financial incentives, the best people are competing for them; where there aren't, fewer are. I definitely think that in some sense Rethink Priorities and I benefit from this fact, because you can start off taking, as we suggested, a simple approach, and then work on it and improve it over time. And because even simple things haven't already been tried, you can do a lot of good right away.

SPENCER: Final question for you, Marcus, what do you want to leave the listener with?

MARCUS: Progress is possible, even on hard questions in charity and in thinking about how to make the world better. You can do better than your intuition; you can do better than just guessing. If we think carefully and reason carefully, we can improve the world more than we otherwise would. Rethink Priorities, my organization, exists to help decision-makers improve their decisions and do better at improving the world. I think we've done a lot of good this way, and I think bringing more rigor and analysis to these topics is highly worthwhile.

SPENCER: And where can people find more about your work?

MARCUS: Obviously, Rethink Priorities; you can find my organization at rethinkpriorities.org. I also have a Substack, Charity for All, and we'll link it in the show notes.

SPENCER: Fantastic. Marcus, thanks so much for coming on the Clearer Thinking Podcast. Great to have you.

MARCUS: Thank you, Spencer.
