Clearer Thinking with Spencer Greenberg
the podcast about ideas that matter

Episode 042: Utilitarianism and Its Flavors (with Nick Beckstead)

May 17, 2021

What is utilitarianism? And what are the different flavors of utilitarianism? What are some alternatives to utilitarianism for people who find it generally plausible but who can't stomach some of its counterintuitive conclusions? For the times when people do use utilitarianism to make moral decisions, when is it appropriate to perform actual calculations (as opposed to making estimations or even just going with one's "gut")? And what is "utility" anyway?

Nick Beckstead is a Program Officer for the Open Philanthropy Project, which he joined in 2014. He works on global catastrophic risk reduction. Previously, he led the creation of Open Phil's grantmaking program in scientific research. Prior to that, he was a research fellow at the Future of Humanity Institute at Oxford University. He received a Ph.D. in Philosophy from Rutgers University, where he wrote a dissertation on the importance of shaping the distant future. You can find out more about him on his website.

JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you joined us today. In this episode, Spencer speaks with Nick Beckstead about the advantages and limitations of utilitarianism, social agreement on the values of utility, and approaches to decision theory and population ethics.

SPENCER: Nick, welcome. It's really good to have you here.

NICK: Thanks for inviting me. I'm excited.

SPENCER: So part of my inspiration for having you on is one day at a party I pinned you down and I said, "Nick, explain utilitarianism to me. I want to know all the things." You spent I think 90 minutes breaking down the complex aspects of utilitarianism that I've never even heard about before. And so, I thought that was super interesting. I'd love to dig into that topic with you today, as well as other topics. So first, do you want to tell us a little bit about what utilitarianism is?

NICK: Yeah, I think utilitarianism is a moral theory that was most famously introduced by Jeremy Bentham. And basically, the idea in the shortest version is that doing good is about doing the greatest amount of good for the greatest number of people. So it has this kind of flavor where morality reduces to doing what does the most good for individual people, or more broadly sentient beings. And I think there have been three main moving parts to utilitarianism. One part is consequentialism, just the idea that doing what's right reduces to doing what does the most good. The second is the utilitarian theory of value: an account of what's good that says, basically, goodness reduces to what's good for individual sentient beings. And then, there's a third thing that's basically a theory of wellbeing. It answers the question, "What does the goodness of an individual being's life consist of?" And there's a hedonist tradition that says it's about feeling good. There's a preference version that says it's about getting what you want. And there's an objective list or flourishing version that has a more substantive account and says, "A good life doesn't necessarily just reduce to getting what you want or feeling good." But I think of those as the main moving parts of utilitarianism.

SPENCER: That's a great way to explain it. I'm really interested in this topic, partly because of my own personal journey. When I was about 17 or 18, I was reading Jeremy Bentham for a class. And I was just really struck by some things he said which had a profound effect on me. And then later, eventually, I started identifying as utilitarian in my moral philosophy when I was a young adult. I then actually stopped identifying as utilitarian. So I've always been interested for a really long time in utilitarianism, and I'd never quite understood until talking to you — that time I mentioned — just how many flavors of it there are. I knew there were a bunch of versions, but I just didn't get the full complexity. And so, when you break it down into those three parts you mentioned, are there different philosophers that actually take different views on each of those different parts?

NICK: Yeah, there are. On the consequentialism part, there's what we call act consequentialism, and there's rule consequentialism. And there are other versions besides those; for example, Toby Ord has his own version he calls global consequentialism. But basically, act consequentialism is the theory that when you're deciding what to do, you do the thing that actually does the most good right now, on the single occasion, even if it violates a rule that is generally useful for people in society to follow. And rule consequentialism says, instead, you should first work out what the best rules would be if everyone followed them. And there's some variation in how exactly you pin that down. And you say what you ought to do is follow the rules that would have the best consequences in general.

SPENCER: Even if in that particular instance, that rule didn't lead to the best outcome, you want to follow the rules that, on average if followed, would produce the best outcome. Is that right?

NICK: Yeah, that'd be the idea. I also maybe just want to react to the other thing you just said in terms of my relationship to utilitarianism. I think when I first encountered utilitarianism, I was an undergraduate, and maybe the first time I really thought about it a lot was when I had this class in the history of moral philosophy. We read a bunch of Hobbes, Mill, and Kant, and maybe some Hume also, and we talked about the different theories that people had. And I remember at the time feeling like, "Wow, this utilitarian one really clicks with me a lot more than these others." It's this simple thing, and it really gives you crisp answers to a lot of questions in a way that feels like it makes sense.

SPENCER: It seems to say you can just churn through these calculations (and maybe they're hard to do) but, at least in theory, you can figure out what's good and you can figure out what's bad. I mean, there's something really appealing about that.

NICK: I guess something's appealing about that. And I think for me, I had this very Christian upbringing, and there's a part of Christianity that I still like, which is the golden rule. I don't want to give Christianity too much credit in particular. But just "Treat other people the way you would like to be treated," or "Love your neighbor as yourself," and who's your neighbor? Basically everyone. And there's something about the utilitarian perspective where the idea is to treat everyone else's wellbeing with the same seriousness and care as I would my own wellbeing or the wellbeing of people I love. And there's something intuitive about that; it seems like a noble way to live, but also a surprisingly productive framework. It's amenable to calculation. It really feels like you can derive a lot of things from the framework. Whereas with some of these other ones, it's like, "Well, what would Kant say you should do about this situation?" I think it's tough to even get people to agree on exactly what the right justification is for why murder is wrong, and how it's exactly different from killing people in a war. And I just feel like it's a lot less of a thing you could turn a crank through. A way I would think about it now, professionally, is: if you said, "We're gonna run a charitable foundation, and it's going to do the most Kantian thing all the time," I'm just like, "I don't really know what that would mean." Whereas I think the utilitarian framework is really interestingly productive.

SPENCER: Oh, yeah. That's super fascinating to hear about. Okay, so you talked about that first topic, and some of the different options. You mentioned that Toby Ord has a third one, which is global consequentialism. What's that one?

NICK: This was his dissertation. And it's been a while since I looked at it. But I think the basic idea is that you can think of these other two things, rule consequentialism and act consequentialism, as (I think he uses this language) evaluative focal points: "Okay, so what kind of thing are you evaluating?" If you want to know what the best action is, it's the action that has the best consequences. If you want to know what the best rules are for society to follow, they're the rules that, if followed, would have the best consequences. If you want to know what the best life policy for Spencer Greenberg is, you can answer all of those questions with the consequentialist framework. I think the big move is saying, we're not going to privilege individual actions as the evaluative focal point. You tell me what question you want answered, and I'll give you the consequentialist answer for it.

SPENCER: There's a broader kind of framework for asking these questions. Cool. Okay, so let's go on to the second big branch point. Do you want to remind us what that is?

NICK: Oh, sure. So the utilitarian theory of value. This is the idea that doing what's good reduces to doing what's good for individual sentient beings. You could contrast that with what it would not be. For example, you could have a king who believes in the divine right of kings, and in God's will, and the things that God wants are the things that are good. Or you have some idea about how society should be organized that is independent (or maybe not totally independent, but somewhat independent) of the notion of how well things are going for all the people in society. The utilitarian stance is in contrast to that. When you're talking about how good a government is, the utilitarian framework is going to be focused on how well that government is succeeding at doing what's best for its people, or for the world at large, something like that.

SPENCER: You could have an environmentalist who says that they want to preserve the natural environment, the environment itself, without making a claim that the environment is conscious or even has interests per se. That would also be in contrast to the utilitarian view, correct?

NICK: Yeah. And it'd be a more plausible version; I think I was just trying to give the contrast. There are a bunch of different theories of how you would add things up. Once you say all we care about is the wellbeing of individuals, there are still a bunch of different versions you could choose. The classical one is basically: assign a level of wellbeing to every individual, and then add them up, and that's how good the world is. So take the whole history of the whole universe, everything that ever happens in it, or maybe just the horizon that you could possibly affect, and you assign a wellbeing level to all the sentient beings. And then you add it up, and you say that's how good it is. But there are a bunch of different variants of that. You could have versions that assign higher weight to people whose welfare is lower, because maybe you want to give more priority to them. Or you could have versions with some notion that, as we get more and more people, additional people count for less.
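The aggregation rules Nick mentions can be contrasted with toy numbers. This is a minimal sketch; the wellbeing levels and the square-root weighting are illustrative assumptions, not anything from the conversation.

```python
import math

# Toy wellbeing levels for a three-person world (illustrative numbers only).
wellbeings = [10.0, 50.0, 90.0]

def total_view(ws):
    # Classical "add them up" utilitarianism: the world's value is the sum.
    return sum(ws)

def prioritarian(ws):
    # Give extra weight to the worse off by running each wellbeing level
    # through a concave function (square root is one illustrative choice).
    return sum(math.sqrt(w) for w in ws)

# Transfer 10 units of wellbeing from the best-off to the worst-off person.
transferred = [20.0, 50.0, 80.0]

# The total view is indifferent; the prioritarian view prefers the transfer.
assert total_view(transferred) == total_view(wellbeings)
assert prioritarian(transferred) > prioritarian(wellbeings)
```

The same scaffolding extends to the other variant Nick mentions, where extra people count for less, by making the aggregation concave in population size rather than in individual wellbeing.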

SPENCER: So you don't have weird incentives to just create more people that are slightly happy?

NICK: Yes. Or you could have versions where you take account of all the people that exist now and say we're trying to maximize their wellbeing. And so, there are a bunch of in-house debates, basically, about which of these accounts of value we want to use.

SPENCER: Can you give a few of the considerations there? Because imagine, for example, that we're only counting the welfare of people today, then you could get weird situations. You're like, "Well, someone's gonna create a bomb but it's not gonna blow up for like 1000 years. So who cares that it blows a bunch of people up, or hurts a bunch of people in the future? Because they're not people today, therefore, their welfare doesn't count." So I'm curious just to hear some of your thoughts on that.

NICK: I know Derek Parfit has a classic example where you're walking through the woods, you find this broken piece of glass, and you're deciding whether you want to pick it up and take it home, or just leave it there. And suppose it turns out that if you take it home, it's gonna be a little inconvenient to carry, but then you get rid of it, and it's fine. And if you don't, then in 100 years there's going to be some kid running through the forest who steps on this piece of glass and gets hurt. And the thought is: you should obviously pick up this piece of glass. It doesn't really matter that this is happening in the distant future, or even that this kid isn't alive right now. What really matters here is just how stepping on this piece of glass affects the wellbeing of this child. So that's a kind of intuition pump in favor of the more timeless perspective, in favor of not just counting the people that are alive right now. But on the other side of that, you could have these stories where, if option one is to help all the people in poverty in the world today and lift them out of poverty, and option two is to create a new planet that just has zillions of people who are even better off than the people in poverty, the most obvious add-them-up account of utilitarianism would say that option two is actually better. But that doesn't exactly feel right, or at least there's something to poke and inquire about there. And so that's the type of thing. We're getting really into population ethics now, which is the subfield of moral philosophy that's all about trying to map from the wellbeing levels of different possible distributions of people to an overall account of how good the history of the world is, and people have all these cases. And the game is to try to have a theory that has no counterintuitive counterexamples.
And unfortunately, there's good reason to think that that is not a winnable game, because there are these nice impossibility theorems that the Swedish philosopher Gustaf Arrhenius has proven. He has like six of them, of the form: here are some conditions that everybody (or at least most people) intuitively feels they want to hold, but they're not all jointly satisfiable. And so, one of the things I think is that the game needs to be reformulated a little bit. Moral philosophers, or people trying to do population ethics, need to think of it a little bit more like, "How do we have a theory that respects most of our intuitions?" and not try to get every single one.

SPENCER: Because, just going back to the problem with just saying "sum it up," you also have things like the repugnant conclusion as well, right?

NICK: Yeah. And just to walk through that really quickly, this is a thought experiment from Derek Parfit, in his book Reasons and Persons. Option A is a world that has 10 billion people in it, all of whom have a really high quality of life, better than what we enjoy in the world today. And option Z is some world that has an enormous number of people in it (pick a really giant number, like 10 to the 80th or something, so I guess it's a really big world, much bigger than the accessible universe), but they all have a level of wellbeing that is just somewhat above zero. In Parfit's example, their lives are positive; they have some simple pleasures, but it's all muzak and potatoes, nothing really good. And there's this intuition I think a lot of people have that's like, "Well, world A is better than world Z." Whereas if you're just doing the add-them-up utilitarianism thing, then it's like, "Well, how many people are in this world, and exactly what is their level of wellbeing? Because if I multiply those two numbers together and make them large enough, the product is going to exceed any finite bound, so it must be better than this original world." But that seems hard to believe.

SPENCER: So if we don't count future people, we get one kind of weird outcome that feels intuitively wrong. If we do count future people, then we get this other kind of weird outcome that feels intuitively wrong. And it's hard to navigate that. I think similar things happen around that other issue of counting everyone's utility equally. Because let's say, for some weird reason, there was one person who could just absorb unlimited utility. Do we really want a world where that person has all the utility and everyone else is at zero, even if that world has the same total amount of utility as one where it's distributed? I think most people would say, "Oh, I'd much rather have it be distributed more evenly." But then that starts to suggest that maybe there are some other principles in play. If you actually prefer a more even distribution of utility, is it really true that you only prefer it when the total utility is exactly the same? Would you not be willing to give up a tiny, tiny epsilon of total utility to spread it out more evenly? Because if you are willing to give that up, then you actually care about something other than just the sum, right?

NICK: Yeah, yeah. And these are exactly the types of considerations that I think people get into a lot when they're trying to make sense of this whole system. And I think the ones we're already talking about are actually enough to get a quick intuitive grasp on one of the types of paradoxes people talk about here. So, for example, let's take this A to Z thing. You could construct a spectrum of cases from A to Z. I'll just do a simpler version than the one Parfit does in Reasons and Persons. Say world A has 10 billion people who have this really high quality of life. What about a world B, where people have just slightly less quality of life, but there are twice as many of them? If you give any weight to there being more people, then it seems like that would be better than the first world, at least if the decrease is small enough. But then if you buy that, you can just iterate the thought: "Okay, well, let's have world C be slightly lower than that; maybe we'll lower everyone's wellbeing by 1% and double the population." If you say that's better every time you do it, and you do it enough times, then you're gonna get to a world-Z-like situation eventually. And if making things bigger in this way always keeps making them better, and this transitivity property holds across the chain, then you're gonna get that weird conclusion.
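The spectrum Nick describes can be run with made-up numbers to see why the total view marches from A to Z. A minimal sketch; the starting population, wellbeing level, and step count are all assumptions chosen for illustration.

```python
# World A: 10 billion people at a high wellbeing level (arbitrary units).
population = 10_000_000_000
wellbeing = 100.0

totals = []  # total-view value (population * average wellbeing) at each step
for step in range(500):
    totals.append(population * wellbeing)
    population *= 2      # "we'll double the population"
    wellbeing *= 0.99    # "we'll lower everyone's wellbeing by 1%"

# Each step multiplies the total by 2 * 0.99 = 1.98, so the total view calls
# every step an improvement, even though per-person wellbeing collapses.
assert all(later > earlier for earlier, later in zip(totals, totals[1:]))
print(f"per-person wellbeing after 500 steps: {wellbeing:.2f}")  # under 1
```

Rejecting the conclusion means giving up one of the moving parts: that each doubling-with-a-1%-cut is an improvement, or that "better than" carries transitively across the whole chain.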

SPENCER: And this ends up basically saying that the world where you have enormous numbers of very slightly happy people is better than the world of, like, a million extremely happy people. And a lot of people think that's really weird. Another question I have about this branch point is that it seems like it doesn't take into account things like, let's say, death. Imagine there's a hermit living in the woods, and someone could just assassinate them instantly, and they knew that it wouldn't cause any pain. According to this view, is there anything wrong with that? I suppose if you're just summing utility, it would say, "Well, you lose that person's future utility." But let's say you add another stipulation: the hermit is at zero utility. They're just neutral; their life is neither good nor bad. Now, is taking them out of the world bad according to this view? I guess not. But that seems also very counterintuitive.

NICK: Yeah, that's right. There's this hermit who maybe doesn't want to die, and they're just out in the wilderness. Let's say they're at zero or quite close to zero utility, and then maybe somebody kills them in a way that involves no suffering. Intuitively, this is a big deal. But it's a little hard to see why it's obviously a big deal on this utilitarian framework. So I think maybe that's a good place to get into some of the alternatives to utilitarianism, or ways that you might naturally want to depart from utilitarianism. Maybe it's also a good place for me to say, like I mentioned in the earlier part of this conversation, that I think there was a point in my life when you could see me as a diehard utilitarian, sort of like, "This is the way things should be done." And I think over time, I've backed off of that a little bit and have a more circumscribed claim: I can articulate some conditions under which I think a type of utilitarian reasoning is roughly right for a certain purpose. If I was gonna put a quick gloss on it, it would be: use it for actions that are conventionally regarded as acceptable and that you're happy to do. So I think for people who are into effective altruism, or like me in my career, there are some parts of your life where, basically, what you're trying to do is help people and help them impartially. (And whenever I say "people" in this conversation, I want to broaden that to include all sentient beings.) You're trying to do good and you're trying to do it impartially. And there are a lot of times when there's no temptation to do anything sketchy with that. You're acting fully within your rights on any common sense conception of how things are, and you're happy to do it. It's a sacrifice that you are glad to make, say, if it involves giving up some money or spending time.
And I think maybe a first cut would be utilitarianism is your go-to answer for how to handle those types of problems, which is distinct from saying that utilitarianism is the master theory of morality that works for all situations, no matter what.

SPENCER: That's a great clarification, but I just wanted to check on something there. So are you including in this sort of stipulation the idea that you're not violating people's conventionally conceived rights?

NICK: Yeah. I would include that in the thing that I'm signing up for and endorsing. You're not violating the things that people would conventionally think of as rights, and that's gonna get a little squishy. If you start saying, "Well, is it convention exactly that matters?" I'd be like, "No, it's not convention exactly that matters." And then if you start saying, "Well, what is it?" I'm gonna have a little bit of a hard time pinning that down. But I would say convention is a good first cut. And I want to make the further claim that you really can do a lot with this. If somebody's mission in life is to do as much good as possible, I think most of the good ways of doing that don't require a lot of lying, or breaking promises, or violently coercing people to do things.

SPENCER: I'm sort of taking from this that utilitarianism is this really powerful moral framework, but you're relegating it to a limited domain. You're saying, "It's best to use it in situations where, essentially, it's not coming into conflict with other systems, and you're in this role of choosing to help improve the world."

NICK: Yeah, basically, and maybe I would broaden it a little beyond that. For myself, that feels like the way I use it, and that's a way I would feel comfortable with a lot of people using it. But I think utilitarian ideas would be powerful in a lot of other contexts as well. Utilitarianism, I think, really does better than the other options that are available in answering a lot of why questions. If you say some action is wrong, you can keep asking why, what's wrong with it? With utilitarianism, that process always ends with, "Well, because if we did that, it would be worse for these sentient beings by a greater amount than this other thing." By construction, you're always ending in something you're basically guaranteed to care about. And there are some other theories where you end at some point with, "Well, we shouldn't do this, because it violates this person's right." Sometimes I'll be totally sympathetic to that, if it's their right to free speech or something. But if it's the divine right of kings or something, and suppose we're having a debate about which of these rights we should have, which ones are worth protecting: when we're grounding rights of free expression in something, I feel like there's value in the marketplace of ideas. There are a lot of true ideas out there, and people don't know which ones are true in advance, and there are major consequences for society having access to the truth. And this is why we should protect freedom of expression. That kind of debate feels more satisfying and compelling to me than things that just end at "that's just a right we have" or "that's just how things should be done." Maybe this isn't the most charitable way of putting it, but I think a lot of these other theories end in places that feel more like that than like these utilitarian or consequentialist stories about why.

SPENCER: I think one of the reasons that I find utilitarianism useful — again, sort of as a partial theory that you use some of the time and not as an overarching theory you use all the time — is that almost everyone agrees that suffering is bad. And that it's not just bad for them, but it's bad for other people to suffer too. To a lesser extent, people agree that happiness is good. I don't think everyone thinks that happiness is good, but most people do. And so it's often almost like a Schelling point of a moral theory, where we can go and say, "Hey, look, we all think suffering is bad. Most of us agree happiness is good, so we can agree that it's good to reduce suffering and increase happiness." And if we're trying to help the world, we can also agree to be impartial; we don't really care about who's being benefited by this, and everyone should be counted equally. And now utilitarianism is this thing we can pretty much agree on working with. However, I do see limits to that beyond just the ones that you mentioned. And one of them is: where are the probabilities and utilities going into this calculus coming from? That's the one thing we haven't really talked about. When you're talking about doing the greatest good, in some sense you're talking about doing math, about doing an expected value calculation, an expected utility calculation. So, for each action, look at the probabilities that it produces different outcomes, multiply each probability by the utility of that outcome, and then aggregate across all of that. And I start to get really nervous when the probabilities don't look that much like probabilities, and when the utilities look like things that people made up. Then I start to wonder, "Is this helpful? Or is it just adding math to something, which gives it an aura of precision, essentially?"
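The expected-utility procedure Spencer describes can be written out directly. A minimal sketch: the two action names and every probability and utility below are invented numbers, purely to show the mechanics he's questioning.

```python
# Each action maps to a list of (probability, utility) pairs, one per outcome.
actions = {
    "fund_bednets": [(0.9, 100.0), (0.1, 10.0)],
    "fund_research": [(0.05, 5000.0), (0.95, 0.0)],
}

def expected_utility(outcomes):
    # Sanity check: the outcome probabilities should sum to 1.
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    # Multiply each probability by the utility of that outcome, then sum.
    return sum(p * u for p, u in outcomes)

# Pick the action with the highest expected utility:
# fund_bednets -> 0.9*100 + 0.1*10 = 91; fund_research -> 0.05*5000 = 250.
best = max(actions, key=lambda name: expected_utility(actions[name]))
print("choose:", best)  # fund_research
```

Spencer's worry lands on the inputs: the procedure is only as meaningful as the 0.05 and the 5000, and made-up numbers can lend an aura of precision that the underlying judgment doesn't have.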

NICK: I think that makes sense. And I guess I want to distinguish different things here a bit. We've left some vagueness and openness in the question of what exactly is the right theory of value to use if you're playing this utility game. Maybe just for the sake of concreteness, I'll say that for most practical purposes, I'm in favor of the add-it-all-up, total type of view. So we're talking about a view where the goal is to maximize this quantity, which is the total wellbeing over the history of the universe, or the part of the universe we could realistically hope to affect. And, obviously, there's a massive amount of uncertainty in that, and no one's going to be able to know it for sure. And even worse, you might have hoped that we could all come up with our best guesses for the probabilities of different outcomes if we choose different actions, and have helpful summaries of them and use those. But that's going to be a mess too. So, I want to distinguish between two claims someone could be making: one would be something like a success condition to aim for, and the other would be something a little bit more like a calculation guide. The thing I'm trying to more strongly endorse is the success condition. So, if we could know the action that an ideally rational version of ourselves, at the end of deliberation, would judge to have the highest expected value, then for this limited domain we've been talking about, we should aim to do that. That's one kind of claim you can make, and I'm more attracted to that one. There's another kind of claim you could make, which would be that the thing to do is always to try to estimate these quantities when you're acting. That's obviously going too far, because there are gonna be a lot of situations where you don't have time to crunch all the numbers but you still need to do something, and that's going to be true most of the time.
And then there's other kinds of claims of this kind that you could make where sometimes, I think on any account, the times for calculation are going to be times when there's very high values at stake. And it seems like you have some productive way of thinking about it. And you can use the thinking over and over again. So, if you're deciding whether your life's work is going to be helping make the lives of chickens better, or helping make the lives of pigs better, you totally want to run the numbers on how many pigs there are, and how many chickens there are, and what their lives are like. But I think most of the interesting action is going to be around when is this type of calculation appropriate and helpful, and when is it not?

SPENCER: I think that's well said. We can agree, for example, that the laws of physics govern this universe, but we shouldn't be trying to constantly calculate using the laws of physics to decide what to do every moment, right? That's insane. So, a further debate is when this calculation is actually the right thing to try to do. And I think there's a whole huge can of worms there, especially when you get into realms where you don't really have probabilities, where you're using subjective probabilities and people largely disagree about what they should be. Maybe the utilities get really, really huge; they span many orders of magnitude. And now we get into really hairy stuff, like how to even conduct the calculations, whether calculating is worthwhile, and whether it's actually producing better outcomes or not. That's where I think it gets most contentious. And I totally agree with you: obviously, there are so many times where it's not worth it, or you don't have time to do it, etc.

NICK: Right. I think there are also some interesting cases where you do have time, it's high stakes, and it's repeatable, but running these numbers is not exactly the best thing to do. An example would be: if I were in charge of the NIH, say, and there was this area doing basic research in the biology of aging, I would think that for a lot of those people doing basic research, the thing to do when you're deciding which grants to fund is not a cost-effectiveness calculation for all of them. It would be more like asking who's doing the thing that feels the most revolutionary and interesting and a big deal to the taste of really good scientists, and setting up something like that for that setting. There'd be other cases where you do want to do those calculations, like if you're saying, "Well, how much money are we going to put into malaria versus how much money are we going to put into measles?" Then I think you absolutely do want to be running these calculations. And that's going to be a lot of the game.

SPENCER: The key thing there is that for certain kinds of decisions, even if you're trying to just maximize societal wellbeing, other heuristics will actually just do better at leading to good outcomes than trying to do the utilitarian calculations (like a heuristic of finding the truly revolutionary scientists who seem to be working on important things or something like that).

NICK: That's right. I think there are cases where other heuristics are more relevant than your utility calculations. And I would include that kind of case. I think a lot of things involving discourse and making the discourse better would be like this. So, if you're running the New York Times and you're setting its editorial policies, I feel like that's a really important thing and it is happening over and over again. But I wouldn't think that expected utility calculation is the tool to use in order to figure out how to do it right.

SPENCER: Yeah, that makes sense to me.


SPENCER: Sometimes things just are so difficult to quantify. Like the other day actually, someone was asking me for advice about whether they should go forward with this project. And this is someone who really wants to do as much good as they can in the world. And they were basically asking me, "How do I do the utility calculus on this problem?" And I'm like, "Look, you just can't." And they were shocked that I said that. And I started walking them through why: there are so many different weird second-order effects that it's hard to make any sort of estimate about. And I was saying, "I just don't think that's the way to answer the problem. Maybe the way to think about it is just whether this project dominates the other projects you think you'll be able to come up with in the next three months. Maybe that's a better decision heuristic for this point in your life."

NICK: I agree with that and that makes sense. I think this comment about heuristics is actually a good segue to something else I did want to bring up, which was another thing about the limitations of utilitarianism, or spelling out a bit more the conditions of the thing I said about conventional morality. It's like: treat utilitarianism as your success condition when you're trying to do good and you're happy to do it, and you're not doing anything that's wrong, conventionally speaking. I could say a little bit more about those categories, and some examples of them that I think are helpful. So this is drawing from stuff from Sam Scheffler and Thomas Nagel, who are two philosophers. They have these three categories: constraints, options, and special obligations. These are all conditions in which they are saying — and I would be inclined to agree — that common sense morality diverges from what the strict utilitarian calculus would say. So constraints are these cases where what you're doing might seem or even be best on the utilitarian calculus, but it still seems wrong for some reason. So there could be a case where a mob (this is a famous kind of philosophy example) needs to be satisfied. And maybe some government officials feel like if we say that an innocent man was the person who committed the crime that made the mob so angry, then the mob will disperse. And so they punish this innocent man, but prevent riots that would do more damage. And so, you could debate what the utilitarian calculus really says to do in this kind of case. But there's a prima facie tension there. Or another kind of case would be maybe there's somebody who's raising money for a good cause that will, in fact, do a lot of good. And maybe if they use shady business practices, or they just lie to people in order to raise the money, they'll raise enough money to do a lot more good.
And there's some kind of prima facie tension there with the utilitarian calculus. Some of these cases may not be the things you're most worried about, but I think a lot of these things have value as heuristics in ensuring that you don't go off the deep end. So the worst kinds of things might be like a state that's silencing opposition using its courts of power, where the leaders of the state, the story they're telling themselves in their heads is, "Well, I'm just punishing this one person. This one person has to pay, and maybe it is kind of wrong, and my hands feel dirty, but it's for a good cause. And the state matters more, so it's justified." These are all things that I call constraint violations. There's something that maybe you think is going to have the best consequences, and maybe it even would if you're right, but according to conventional morality, you shouldn't do it.

SPENCER: Right, so we can talk about a utilitarianism that adds these constraints in, which is what you were getting at before? It makes utilitarianism more robust, in some sense.

NICK: Yeah, I'm saying let's have a constraint-bound utilitarianism. And you can have an interesting debate about the times when you could violate the constraints because it would be worth it. And I think there are some cases where you're lying, and it is worth it. I think sometimes it is worth it to be a spy and try to take down an unjust regime or something, and you're violating constraints. But I do feel that when you're steering clear of the constraints, then I feel a lot better about just applying utilitarian frameworks. So that's the first category. The other two categories are called options and special obligations. One thing that's really striking about utilitarianism, if you're just following it, you're saying, "I always do the utilitarian thing, and I act wrongly whenever I don't." It's an incredibly demanding view. And so sometimes people talk about this in terms of how much you have to sacrifice. If you're a rich person, you really probably should be giving away tons and tons of your money. Even if you're just a regular person in the United States, if you're trying to follow the utilitarian calculus, you're giving your money away up to the point where the next dollar does more good with you than it does with other people. And that's quite demanding. But I think something that's even more striking than this is, in any situation, the thing you've thought of that does the most good is just one action out of this massive universe of things you could be doing or ways you could be living your life. And so the strictly utilitarian-compliant lives that are available to you are going to be very small in number: maybe it's just one, uniquely fixed thing, or maybe just a handful where you can't tell which is better and you have to pick one. And that could be asking for a lot.

SPENCER: So you're saying there's this one optimal series of actions, where every moment you're maximizing expected utility or something?

NICK: It's a very constraining existence. And the more conventional point of view — and I'm not necessarily saying the conventional point of view is right — but the more conventional point of view would be that there are tons of things you could do with your life that are completely compatible with living an ethical life. There's hundreds of different professions you could enter, and it would be a perfectly reasonable way to live. And so, that's a big gap. And strict utilitarianism is saying, "No, you have to pick the top option." The idea with options is there are a bunch of things that are fully within your rights to do, and you can live a good life doing them even though they're not the things that have the best consequences. But notably, you don't have options to do horrible harm to others. You have options to live your own life without harming other people or violating the constraints.

SPENCER: So is the idea there that it's sort of a relaxation, to say, "Okay, you don't have to do the single best thing," opening up the doors to a wider range of possibilities beyond that single optimum?

NICK: Yeah, I think I'm not really taking a stand on how wide the range of options is. But I am trying to say that there are these cases where you have decided that you want to do the best thing, where the point of what you're doing right now is helping others. And then I'm like, "Use utilitarianism. That should be your goal." And then there's a third category that's special obligations. So, you could think of these intuitively as the things you owe your family members, or there are people that you have special relationships with through your profession. Maybe you're a doctor, and there are professional norms that you signed up for. Maybe you're a lawyer, and you have obligations about the privacy of your clients' information and that kind of thing.

SPENCER: Does keeping promises fall under that too?

NICK: Yes, things that you've promised to do I think would fall into that too. I think, for many people, it's quite possible to live up fully to these special obligations without sacrificing a lot of utilitarian value. I think that's not true for everyone. Maybe your family situation means that you need to take care of some people that you love and you owe this to them, and beyond that, you're not going to have enough time to make a ton of money and donate it, or start a new career path that's going to do a ton of good. But I think there are a ton of people for whom keeping all these special obligations is totally consistent with living a very successful utilitarian life. And I think this is important for people who are trying to live a really good utilitarian life: even from just a simple consequentialist perspective and a reputational perspective, it's better for the collection of such people if we all take these things very seriously.

SPENCER: Is there a name for utilitarianism, but with these constraints and extra options added with the special obligations taken into account?

NICK: I don't think there's really an official name for that. I think a lot of people would stop even calling it utilitarianism at this point. I think it would be more like: I'm an ethical pluralist. I think utilitarianism matters a lot, and I think these other things matter a lot. And I guess the thing that I'm pointing out is that I think it's helpful to draw some space between the idea of trying to do as much good as possible in some kind of bounded way and strict utilitarianism. I think a lot of people might look at effective altruism and say, "Well, isn't this basically strict utilitarianism, and isn't strict utilitarianism kind of crazy?" And I think these are reasonable questions. But I do feel like there's a very common-sensical notion of these constraints, options, and special obligations. Maybe they have a utilitarian reduction, maybe they don't; I'm more married to them than I am to the idea of utilitarianism per se. Let's respect these things in practice, but I'm still really into the idea of doing as much good as possible for a big part of my life. And I think utilitarianism is the most productive framework for that. I mean, I wouldn't even sign up to always do the utilitarian thing in that setting. But I would sign up for, when you're trying to figure out what to do in that setting, trying a hand at the utilitarian calculus and seeing where it gets you. And let's have that be our first cut. I think sometimes things could go in too crazy of a direction, and maybe you're not going to really endorse it. But I just find it helpful to have some language for all of this and carve out a space that's not so totalizing, and that also provides an intelligible justification for a kind of lifestyle that I'm quite interested in, and some other people are interested in. And maybe it's helpful to have some of the foundations be talked about a bit more.

SPENCER: You mentioned the effective altruism community, and I think some people have the impression that EAs are utilitarians. I had this funny experience where it certainly seems that way, but then whenever I pin down specific effective altruists, they don't see themselves as utilitarians, or they're unsure. They're like, "I don't really know." So maybe a better description is that utilitarianism is something that a lot of effective altruists lean on for a bunch of their considerations, but actually, most EAs are not pure utilitarians. What do you think about that?

NICK: I think that's right. And that's how I would describe myself. I would say I'm not a pure utilitarian; I would say I use utilitarianism a lot. It's generating a lot of my insight. If you'd never heard of utilitarianism and you were trying to understand what I was trying to do by looking at my life, I think you would have a hard time. But at the same time, I don't think it's good or healthy to go all in on it. And so, I'm trying to carve out some space. It would be nice to have a better name for it. I've heard Tyler Cowen say he's two-thirds utilitarian; I liked that one. But I don't know exactly what to call it. I'm trying to say something like, "Well, here's a domain, roughly gestured at, where some principles perform quite well as a first pass." And that's more the way I think about it too.

SPENCER: One thing we swept under the rug so far is what utility is, and I think this might be the third branch point that we never got to. Do you want to jump into that a bit?

NICK: So, the classical view is that utility is happiness or pleasure. The terminology I like better for this is feeling good versus feeling bad. We're trying to have people feel good or enjoy their lives. Language like that, I think, makes it a little bit more compelling. That's one of the views. Another kind of view is focused on preferences. It's a bit broader. And I feel like in my life over time, there's been something of a broadening. I think my initial view was more sympathetic to hedonism. Sorry, hedonism is the name for this feelings-focused view.

SPENCER: It needs better branding.

NICK: It does need better branding. But you know, it's interesting, I feel like it's almost like a stoic notion where living with virtue is an important part of living a good life, where it's not just about how satisfied I think I'm going to be with my life. When I die, I think that I'm going to care a lot more about what I've accomplished, and the decisions I've made, and how they've affected other people, than about the sort of integral of my pleasure over my life so far. And I think there's something that feels really right about that. I think it feels right to a lot of creative people. Maybe you're writing a book or something, and maybe it's not even for utilitarian aims. It's like your life has gone well if you've created this great novel, even if it doesn't have a lot of utilitarian significance. I feel like people do care about these things. It seems not irrational to care about them. And I feel like they matter to me independent of their contributions to the aggregate welfare. And so, I see some temptation to bake those into a conception of a good life. And that's one of the things the preferences view tries to incorporate beyond the hedonistic take.

SPENCER: This relates to the idea in the psychological literature of life satisfaction, which is, if you're asked, "Overall, how satisfied are you with your life?" you give one answer, versus the balance of positive and negative feelings that you feel throughout the day, where you ping people at random times and say, "How do you feel right now? How good do you feel right now?" And you find that in many ways they're related to each other. They do correlate positively, but they can diverge in various circumstances. For example, the relationship each of them has with income is somewhat different, at least according to some research. So, to some extent, this mirrors preferences versus hedonic utilitarianism. Are there other ways to go that are neither of those two?

NICK: Yeah, there are. And part of the motivation you could have for thinking that there might be other things would be this: I think John Rawls has this thought experiment where you can imagine a person whose one real want in life is to cut the grass in their yard with a pair of scissors, and they just do it.

SPENCER: That's the primary want they have.

NICK: And you're like, "How good is this for this person, really? How much should we care about this?" And intuitively, not very much.

SPENCER: That seems like a weird thought experiment. But you really do have situations where someone has brain damage, or maybe just some kind of mental health challenge, where it seems like the thing they want is really bad for them. And yet they keep seeking it out over and over again. And if asked in a cool emotional state, "Is it good for you that you want that thing?" they'd be like, "No, I wish I didn't want it."

NICK: Or maybe an example more people will be aware of: it's a thought experiment in artificial intelligence and existential risk. So, you have this AI system, and let's say it's a sentient AI system, and it's conscious. Maybe it's programmed to maximize the number of paperclips that it makes, and by god, it turns everything into paperclips. And it's really gotten what it wants. Maybe it was constructed deliberately with this utility function that's linear in the number of paperclips it gets. I want to say, even though I could include an AI system as a moral agent whose welfare I care about, that it's not overwhelmingly good that this AI system has turned everything into paperclips. And so, I think that there's a third category that's called objective list theories. On this account, contrasting with the subjective preference view, there are only certain types of things that matter. I don't love that name, because it sounds objective, and I have more of an anti-realist bent in my moral philosophy. But I think the way I would put a gloss on it is that there's something inherently subjective about the type of ethics that humans do, and that I do, and it's based on our evolutionary heritage and our cultural heritage, perhaps to some degree. And there's certain things that we're inclined to care about, like loving relationships, accomplishments, feeling good, not suffering, and knowledge. And I don't know exactly what it is, and I can't really articulate the whole thing. But these objective list theories tend to say that living a good life involves some measure of these things. I like to think of this in terms of the emotional reactions that humans have to things, where you could ask what kinds of things inspire sympathy in very aware beings, or what kinds of things inspire compassion in a human that's fully aware of what's going on. And I would want to gloss it in that way.
So, it's not really explanatory; it sort of takes as given the reactions that we have. And that would actually be the theory of wellbeing that I'm more inclined towards.

SPENCER: Objective list theory, just to make sure I understand, is this idea that the things on that list, whatever you put on it, are the good things, and things that are not on the list don't matter. So, you just want to increase the amount of the things on that list. Let's say loving relationships are on that list; then you want to increase the amount of loving relationships and so on, right?

NICK: Right, though no one has ever constructed one of these that is fully satisfying, that says: here's how you quantify (call them) knowledge units, and here's how many you have at different stages, and likewise for loving relationships, and here's exactly how you integrate them to get a wellbeing score. How do you even compare them?

SPENCER: How do you compare units of loving relationships to units of increased knowledge or something?

NICK: Right. So, I think maybe it's more like a subjective list theory that's evaluator-relative. And I'm sort of explicitly acknowledging that I get a different one than some AI system you might create. But I'm saying, "Here I am, and I think we should acknowledge mine." And I'm cooperative with other beings, and so I put some weight on theirs. But there's some complexity around which ones really arouse my sympathy (paperclips don't, and writing great novels does), so there's something to be worked out there. I think in practice, it doesn't matter a huge amount which of these you have. If you're deciding which of GiveWell's charities to donate to, I don't think you need to settle which of these theories of wellbeing you like. I do think eventually, if you're deciding what we're going to do with the universe, you would need to figure this out and make some decisions. But in the meantime, there is enough concordance between what people want, what makes them feel good, and what gives them truly meaningful goods in their life, or subjectively meaningful goods in their life. I would say it's a close enough approximation to give people what they would reflectively want, and I think we don't have to pin it down.

SPENCER: Totally. I think there's at least one other branch point in utilitarianism that we haven't talked about, which is, which beings count in the sense of do you care about making slugs feel better? Or do you only care about making humans feel better? What about dogs and all these things? Do you want to say some words about that?

NICK: Sure. Sometimes people would talk about this in terms of moral patienthood. And they'd say which beings are moral patients. The way I prefer to think about this is a little bit closer to saying, "Well, you fix your theory of wellbeing." And then you just say, what's good is beings having that kind of wellbeing.

SPENCER: Regardless of what sort of being it is, basically, as long as it can have that, right?

NICK: So if it's feeling good that matters, then we can ask which kinds of things are conscious and can feel good. And if it's this subjective list that matters, we can figure that out. Or if it's preferences, we can say which kinds of things prefer things and which don't. And then those become empirical and conceptual questions that you could settle in different areas.

SPENCER: I like it, because it separates what you're trying to optimize for from the difficulty of figuring it out, because I think many people agree that it's very hard to figure out how conscious this animal is versus that animal versus this insect. Those are very difficult empirical questions. But you're saying, "Well, as long as we agree on what we're trying to maximize, whether it's actually preferences, or hedonic utility, or whatever," then it just becomes an empirical question of which beings have those experiences, right?

NICK: Yeah, definitely.


SPENCER: One more branch point, which you touched on briefly but maybe we should say some more about, is moral nihilism versus objective moral truth, etc., and how we're interpreting utilitarianism to begin with. Do you want to say some things about that?

NICK: Yeah, sure. The words I would use for this: there's normative ethics, which is fundamentally concerned with the different choice points you could be in. What's the mapping of choice points to right action? Or what's the mapping from different actions in different choice points to how good they were, or how right they were, or the moral evaluations of those actions? And I think of metaethics as the part of philosophy that's concerned with what's going on when we talk about this. Are these things truths that could be known in some way? Are they really true at all? Is it all nonsense? Or what's going on linguistically when we say these things? Are these propositions expressed some way? Or are we somehow more expressing our attitudes towards the world, or our plans, or our commitments? And there are different names for these positions; you could be a realist in different senses that correspond to this. That is not an area that I've thought about as deeply or worked on in my career, but I do incline towards anti-realist views. According to me, it's not exactly a well-posed question to say, "What is the right thing to do?" That question only makes sense in some sort of context of people who have certain concerns or want certain things.

SPENCER: And if you say, "What should you do?" you're really asking, "What would you do in order to achieve some particular kind of thing?" Is that it?

NICK: Yeah, it's like that. I do not predict that if we ever encounter an alien species, they will have done a discipline called moral philosophy that looks exactly the same as ours. And I don't think that implies that either of us is deeply mistaken about what is going on. And to some degree, I feel more inclined to reframe a lot of this as proposals. I guess if you were going to put me in a box, maybe the box would be expressivist: someone who says that moral talk is really about expressing commitment to a set of norms or to some kind of moral standards or something like that, and not something that has a truth value. And I'm sympathetic to something like that, though I'm not that attached to it as a theory of natural language; what I just said is a theory of how natural language works. What I'm interested in, in normative ethics, is having proposals for how to live our lives that we reflectively endorse after we've been talking about them, and that we're going to feel are meaningful, where we're engaged in this practice of working out proposals for how to live and how to live together. And I'm interested in doing that. Somehow I value the process of coming up with these things and being like, "Wait, no, that's not really what I care about, is it? It's a little bit more like this." And I find it useful to come up with these frameworks. I don't think that I'm uncovering the final truth about the structure of the universe by doing it.

SPENCER: It makes sense. We've got all these different branch points in utilitarianism that specify the theory we're actually talking about. The first one is the rule versus act distinction. Does that have a name?

NICK: I don't think that has a name. I think it's just what type of consequentialism you have.

SPENCER: So that's like, "Are you trying to find the rules that maximize wellbeing? Or are you trying to actually take the actions that maximize wellbeing?" And then we have aggregation rules, like, "Are you trying to sum up the utility?" Or do you take the average? Or you can imagine other aggregation rules. You could have an aggregation rule that says, "Well, we don't want one person to have all the utility, and so, if you have less utility, we want to count you more."

NICK: Yeah, aggregation rule is a good word for it.
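The aggregation rules described here can be sketched in a few lines of Python. This is only an illustrative sketch: the square-root weighting in the third rule is one hypothetical choice of concave function for the "count you more if you have less" idea (often called prioritarianism), not anything specified in the conversation.

```python
# Toy comparison of aggregation rules over a list of individual utilities.
import math

def total(utilities):
    # Classical (total) utilitarianism: sum everyone's utility.
    return sum(utilities)

def average(utilities):
    # Average utilitarianism: divide the total by the population size.
    return sum(utilities) / len(utilities)

def prioritarian(utilities):
    # Prioritarian rule: apply a concave function (square root here, a
    # hypothetical choice) so gains to the worse-off count for more.
    return sum(math.sqrt(u) for u in utilities)

equal = [4, 4]      # two people with equal utility
unequal = [7, 1]    # same total utility, concentrated in one person

print(total(equal), total(unequal))        # 8 8: the total view is indifferent
print(average(equal), average(unequal))    # 4.0 4.0: so is the average view
print(prioritarian(equal) > prioritarian(unequal))  # True: this rule prefers equality
```

Under the total and average rules the two distributions tie; only the concave weighting distinguishes them, which is the sense in which such a rule counts a person for more when they have less utility.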

SPENCER: Next, we have population ethics considerations. Do you only care about beings that currently exist? Do you care about future beings? What about potential beings that may or may not come to existence, etc?

NICK: I would put that under aggregation, but it's very important. It's a very important subset.

SPENCER: Got it, as distinct from whether we're summing or averaging. And then we have this whole set of considerations of whether we want to limit the utilitarianism in some way. And I think now we're talking about really making it something that's not truly utilitarianism. Do we want to add constraints to it and say, "Well, only use utilitarianism if you're not violating standard constraints, like you're respecting people's rights, you're not lying to them, etc."? We also have this idea of expanding it to say you have more options than strict utilitarianism allows. You don't just have to take the single best path that maximizes utility; you can be free to choose more options beyond that. And then we have this idea of special obligations: try to use utilitarianism, but you should care for your children, whom you have a special obligation to, and if you're a doctor, you have special obligations, and so on. After that, we then have this preference versus hedonic choice. Do we want to try to maximize the satisfaction of people's preferences? Or do we want to try to just maximize how good people feel, which is the hedonic version? And then there's this objective list view where there's just a bunch of things on a list, and those are the good things, and we're gonna try to maximize those, right?

NICK: Yeah, I like to call that last one theory of wellbeing.

SPENCER: Theory of wellbeing, great. And then we also have this question of which beings count. Do we count animals, insects, etc.? And I think you have a nice way of putting that, where maybe that just ends up being a consequence of the theory of wellbeing rather than an additional choice, right?

NICK: Yeah, that's how I'd like to think about it. But I think some people could debate that.

SPENCER: Finally, we have this metaethical consideration, which is just: how are we interpreting all of this? Do we believe that we're trying to figure out the objective truth about morality that's true for all beings at all times? Or do we view this as a useful framework? Or do we view this as a linguistic thing, or an expression of our values, or whatever, right?

NICK: All right. I would add to that our decision theory, I think I would add it as another key moving part in this thing.

SPENCER: Tell us about that.

NICK: We've touched a little bit on it. So obviously, since you don't know what really is going to have the best consequences, you need an account of what you do when you don't know what's going to have the best consequences, or what the criterion of successful acting is. I like to think of expected utility theory as providing that foundation. I think you gave a fine introduction to that; maybe I'd add a little bit, which is just that there's this idea that you should have probabilities (they're subjective) over all the different things that might happen, and you have utilities that you assign to different outcomes. And then, instead of doing the thing that has the highest utility, which you're really not in a position to do (unless you have this magic crystal ball that can hold all the information in the universe), you do the thing that has the highest expected value. And there are some quite nice arguments for that. The ones I like the best are the classic representation theorem arguments: you satisfy a few conditions, say, your preferences are transitive (so if you prefer A to B, and B to C, you prefer A to C); they're complete (so that for any two things, you either prefer one over the other, or you count them as equal); and they satisfy this thing called the sure-thing principle, which basically says if you have A and you have B, and A might be better, and it couldn't be worse, then you should prefer A to B. And then there's a continuity axiom as well, which is a little more technical and not as necessary; I don't want to get into it as much.

SPENCER: Is that the Von Neumann-Morgenstern theorem you're talking about?

NICK: Yeah, Von Neumann-Morgenstern. I think the first version was done by Frank Ramsey, and he gets a little less credit for it. But yeah, basically, there are these proofs that if your preferences satisfy those requirements, then they have to be representable as you trying to maximize expected utility relative to some utility function and some subjective probability function, which assigns a probability to all the possible outcomes. And I think it's important to distinguish the two things we talked about here, which is treating this as a criterion of successful action versus treating it as a prescription to go through the calculations every time you do things. And the thing that I'm into is the former, not the latter, although there are some cases where doing the latter is quite useful. Just not all cases. I think it's quite an interesting debate about when it is good and when it isn't.
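The criterion being described (score each action by probability-weighted utility rather than by whichever outcome actually occurs) is a short computation once the subjective probabilities and utilities are in hand. The two actions and all their numbers below are invented purely for illustration:

```python
# Expected-utility maximization: score each action by the sum of
# probability * utility over its possible outcomes, then pick the best.
# All numbers are hypothetical.

def expected_utility(outcomes):
    # outcomes: list of (subjective_probability, utility) pairs
    return sum(p * u for p, u in outcomes)

actions = {
    "safe bet":  [(1.0, 50)],              # a certain, moderate outcome
    "long shot": [(0.1, 600), (0.9, 0)],   # a small chance of a big payoff
}

best = max(actions, key=lambda name: expected_utility(actions[name]))
for name, outcomes in actions.items():
    print(name, expected_utility(outcomes))
print("choose:", best)  # choose: long shot (expected utility 60 beats 50)
```

The long shot maximizes expected utility even though nine times out of ten it delivers nothing, which is exactly why treating this as the criterion of successful action is a separate question from using it as a moment-to-moment decision procedure.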

SPENCER: Right. So you're saying it's something to aspire to, in your view, but not necessarily to try to actually do the calculations all the time? Is it only in some cases actually useful to try to do the calculations?

NICK: Right. It's like the difference in the physics thing you said earlier in the conversation: the difference between one person who says, "Obey the law," and another person who says, "Get out the law books, and read through them before you do anything." And then there's a little bit more, but I think we probably shouldn't get into it as much because it's not as much of an interest for me. But there are also these questions about what form of decision theory you have. Do you have evidential decision theory, which basically says, do the thing that would be the best news if you did it? Or do you have causal decision theory, which says, do the thing such that, if we intervened in the world so that you now do this thing, it would have the best expected consequences? And then there are other fancier versions of this, like the functional decision theory that the MIRI people work on. It's interesting, and there are other ones.
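The evidential/causal split can be made concrete with the standard Newcomb's-problem setup: a predictor of assumed 99% accuracy, a transparent box holding $1,000, and an opaque box holding $1,000,000 exactly if one-boxing was predicted. These are the usual hypothetical figures for the thought experiment, not anything specific to this conversation:

```python
# Newcomb's problem: EDT conditions the prediction on your own choice
# ("the best news"), while CDT treats the prediction as already fixed.

ACCURACY = 0.99               # assumed predictor accuracy
MILLION, THOUSAND = 1_000_000, 1_000

def edt_value(action):
    # Evidential: use P(opaque box is full | my action).
    p_full = ACCURACY if action == "one-box" else 1 - ACCURACY
    bonus = THOUSAND if action == "two-box" else 0
    return p_full * MILLION + bonus

def cdt_value(action, p_full):
    # Causal: p_full is whatever it is; my choice can't change it.
    bonus = THOUSAND if action == "two-box" else 0
    return p_full * MILLION + bonus

# EDT strongly favors one-boxing (~$990,000 vs ~$11,000 in expectation):
print(edt_value("one-box") > edt_value("two-box"))  # True
# Under CDT, two-boxing gains $1,000 no matter what was predicted:
for p in (0.0, 0.5, 1.0):
    assert cdt_value("two-box", p) == cdt_value("one-box", p) + THOUSAND
```

So the two theories come apart on the same numbers: conditioning on your own action makes one-boxing the "best news," while holding the prediction fixed makes two-boxing dominant.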

SPENCER: So, are there any other interesting alternatives to the expected utility calculation version? Or is that really like the only game in town?

NICK: There are variations on the expected utility version, but it feels like the only compelling answer to me. There are people who make heavy weather of this notion of Knightian uncertainty: the distinction between rolling some dice and saying, "This has a one in six chance of coming up six," and saying, "There's a one in six chance of a war between great powers in the next 30 years." The first one is super well-characterized: you can do it over and over again, and you can bet the farm knowing that you have it roughly right. The other one really doesn't have that property at all; people would debate it, and no one knows what the right number to assign to it is. There's a kind of cottage industry in decision theory and Bayesian epistemology that's interested in what they call imprecise credences. Instead of representing your uncertainty as a single number, they try to represent it using more complex things, like an interval of probabilities. And then there are questions about how you update these over time, and how you decide what to do now. People have various answers to that. One says you can do anything that would be optimal according to at least one of the probability functions in your representor (that's the name people use instead of a credence function). And there are people who say you should take the regret-minimizing action: look at the expected value of all the options under each probability, look at how much you would regret each one, and take the one that minimizes regret. That's another way of thinking about it.
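That regret-minimizing rule can be sketched in a few lines of code. This is just an illustration of the idea, not anyone's actual implementation; the actions, payoffs, and probability interval below are all invented:

```python
# Sketch of minimax regret under an imprecise credence: we only know
# that the probability of an event E lies somewhere in [p_low, p_high].
# For each action, find its worst-case regret (shortfall versus the
# best available action) across the interval, then pick the action
# whose worst-case regret is smallest. All numbers are hypothetical.

def expected_value(payoffs, p):
    """payoffs = (value if E happens, value if E doesn't)."""
    if_e, if_not_e = payoffs
    return p * if_e + (1 - p) * if_not_e

def minimax_regret(actions, p_low, p_high, steps=100):
    """Return the name of the action minimizing worst-case regret
    over a grid of probabilities spanning [p_low, p_high]."""
    ps = [p_low + (p_high - p_low) * i / steps for i in range(steps + 1)]
    worst = {}
    for name, payoffs in actions.items():
        regrets = []
        for p in ps:
            best = max(expected_value(a, p) for a in actions.values())
            regrets.append(best - expected_value(payoffs, p))
        worst[name] = max(regrets)
    return min(worst, key=worst.get)

# Hypothetical choice: a safe policy vs. a gamble that pays off only
# if E occurs, with E's probability pinned down only to [0.1, 0.4].
actions = {
    "hedge":  (50, 40),
    "gamble": (200, -50),
}
print(minimax_regret(actions, 0.1, 0.4))  # prints "hedge"
```

The gamble is better only near the top of the interval, so its worst-case regret (at low probabilities) is large; the hedge's worst-case regret is small, and the rule picks it.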

SPENCER: I've heard some people argue, though, that you can take those kinds of intervals and just collapse them down to one probability.

NICK: I mean, if you have a probability distribution over the interval, then that is a very natural thing to do, because you could say, "Well, let's just weigh things by their probability and collapse that into a single expected value." Some of the people into this say, "Well, you can't assign a probability to all of these things. It's just an interval." I've been a little bit unsatisfied with that, because I don't really know what it means exactly. And it also just feels a little bit unhelpful sometimes, because I'm like, "Well, what do I do with this?" Some of the proposals that are more specific might be more helpful, but ones like "do anything that's optimal according to something in your representor" look a little bit too permissive. And I don't know exactly what the criterion is for whether something belongs in the representor. I like the more classic, straight-up subjective Bayesian view better: your probability is what you would count as fair betting odds, where you're indifferent between taking either side of the bet. That feels more natural to me.

SPENCER: Got it. Just stepping back on all of this for a moment: it seems like there are just a ton of flavors of utilitarianism, and many different decisions to make in choosing among them. I'm curious if you have anything to say about the combinatorial explosion of possibilities.

NICK: Yeah, well, I said already that I think there's not a huge amount of difference in practice between the theories of wellbeing. Which one you pick, I think, doesn't matter a huge amount.

SPENCER: Like preference versus hedonic, you mean.

NICK: Yeah, yeah. So I'm like, "Yeah, whatever. Let's run with preferences, but notice some ways that it goes haywire." In the theory of aggregation, there actually are some good arguments — we haven't gotten into them — for the add-them-up theory of aggregation for a fixed population. And there are a couple of arguments that I think are pretty good for going toward the total view in population ethics; we could talk about those. With consequentialism — whether you're doing act or rule or something else — it feels a bit academic to me, in the sense of being remote from life. I find it telling that it doesn't really come up. You have this community of Effective Altruists who like to nerd out like crazy on this stuff, and every conceivable thing that could matter gets talked about. But it never really comes up as, "Oh, the reason I'm donating to charity A rather than charity B this year is that I'm a rule consequentialist." Maybe you get a little bit of that. Maybe some people are like, "Well, I donate a little bit to Wikipedia because I think I ought to, because I benefit from it, even though I don't think it's the optimal charity. And I do it for NPR a little bit too, because I value it." And I think that's perfectly reasonable, and maybe people should do it a little bit more. But I just feel like it's not where the action is. The thing I feel is where the action is, is the population ethics piece of it. And aggregation is where a lot of that action is. Nailing those down feels like the stuff you want to get right if you're really deploying this.

SPENCER: It matters a lot what you choose. Actually, this conversation reminds me of something that's confused me, and I'm wondering if you can shed light on it. I totally get it when someone says, "I believe there's an objective moral truth. I believe there's such a thing as something being right, and such a thing as something being wrong, and it doesn't depend on who you are — it's universal," and furthermore that they're convinced utilitarianism is the answer to that question. I don't take that perspective, but I get what they're talking about. What I'm confused about is that there seem to be quite a number of people who reject objective moral truth, or think it's unlikely, and yet still say they think utilitarianism is the correct theory of ethics. Do you have a perspective on that?

NICK: I'm not sure I totally understand what you're saying. You're running into people who are anti-realists about morality?

SPENCER: Exactly. They don't think there's an objective moral truth. And yet they think utilitarianism is somehow the right theory of ethics.

NICK: Well, I don't know. If I was talking to such a person, then maybe I would say, "What do you mean by right? Do you mean it's the theory you propose and would like us all to follow? Or do you mean you think it has arguments such that a lot of other humans, who, if they understood them properly, would come to endorse it? Or what exactly are we disagreeing about?" I actually have a little bit of a view. And maybe when you're talking about meta-ethics, I'm not sure all the questions are super well-posed. I like to bring a lot of this back to: Is there a prediction we disagree about? Or do we have a proposal we disagree on?

SPENCER: Hmm, yeah. I suspect some of this comes down to things like the Von Neumann-Morgenstern axioms, where people maybe have the sense that somehow you can prove utilitarianism is right. If we think about those axioms (you mentioned them earlier), they're basically a pretty reasonable set of rules that you'd want a rational agent to follow in ranking what's better than what. And then you can prove that if the agent follows those rules, they're in a sense maximizing expected utility. My feeling is that people overread these kinds of theorems, because the definition of utility in those theorems has nothing to do with happiness — it's just any function. Do you want to talk about your view on that?

NICK: Oh, yeah. I totally agree with what you're saying. So I think we should distinguish between two things. There are representation theorems like Von Neumann-Morgenstern, Savage, and Ramsey. Those say that if you're satisfying these intuitive axioms of rational decision making, you must be maximizing expected utility. But they mean something different by utility than what Jeremy Bentham and the contemporary utilitarians mean by utility. So there's a real conceptual muddle here around the word utility, which is very unfortunate.

SPENCER: Because it almost seems like they're proving that you should be a utilitarian, which is really not the case, right?

NICK: That's really not what they're proving. What they're proving is that you should have preferences that can be represented by numbers, such that you're trying to maximize the expected satisfaction of those preferences. They could be completely opposite of utilitarian preferences — you could want to maximize total suffering and still satisfy these axioms totally. So the word utility is really being used in at least three different ways by different people in different contexts. For Bentham, it meant happiness or feeling good. For these representation theorems, it just means a numerical representation of the satisfaction of your preferences. And the theory-of-wellbeing people in modern utilitarianism talk about the utility for individuals and mean the degree to which your preferences are satisfied, or the degree to which you're feeling good, or the degree to which you're doing well on the objective list. A better word for that is welfare. I think we're unfortunately stuck with these terms a bit, but I like to use the word welfare — welfare or wellbeing — when I'm talking about the theory of wellbeing, and reserve the word utility for the expected utility theory concept. But then it's a little confusing what utilitarianism refers to, because you wonder whether it's referring to one of these other things.
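To make that point concrete, here's a toy sketch (hypothetical prospects and numbers) showing that the expected-utility machinery is indifferent to what the utility function values — a "maximize suffering" utility function runs through it just as well as a benevolent one:

```python
# The representation theorems' "utility" is just a numerical encoding
# of preferences. The same expected-utility maximizer works unchanged
# whether the function tracks wellbeing or its exact opposite.
# Prospects and numbers below are invented for illustration.

def best_action(actions, utility):
    """actions maps a name to a prospect: a list of (probability,
    outcome) pairs. Returns the name with the highest expected utility."""
    def eu(prospect):
        return sum(p * utility(outcome) for p, outcome in prospect)
    return max(actions, key=lambda name: eu(actions[name]))

# Outcomes encoded as total welfare produced (hypothetical):
actions = {
    "sure_thing": [(1.0, 10)],             # 10 welfare for certain
    "coin_flip":  [(0.5, 30), (0.5, -4)],  # expected welfare 13
}

values_welfare = lambda w: w    # Bentham-style: utility tracks welfare
values_misery  = lambda w: -w   # anti-utilitarian, yet axiom-compliant

print(best_action(actions, values_welfare))  # prints "coin_flip"
print(best_action(actions, values_misery))   # prints "sure_thing"
```

Both agents are flawless expected-utility maximizers by the theorems' standard; only the second one is anything like a utilitarian.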

SPENCER: And economists talk about utility, too.

NICK: They do, yeah. And they walk around assuming the preference theory of utility. And maybe they also assume that people's utility is always self-regarding — they talk about preference satisfaction and assume your preferences are about yourself, and not necessarily other things. But I think a lot of people really care about the rest of the world more than they care about themselves. I think there are a lot of people who, if you asked, "Would you heroically sacrifice yourself to save your country? Or save the planet? Or..."

SPENCER: Even their family, right?

NICK: Yeah. I think it's interesting, because there are mercenary attitudes that you might have when you're a policy wonk, where maybe in your heart you're a utilitarian, but when you're talking policy wonkery, you talk about the citizens' wellbeing, and you define the citizens' wellbeing in terms of GDP and stuff. But I really do think, in my heart, that for a lot of people it would be a win-win to just have more and better foreign aid. It feels like a bit of a moral stain — and something that makes our lives actually worse in ways we don't realize or necessarily appreciate — to live on a planet where so many people are in poverty, not living the lives they could live, and dying of preventable diseases. And I actually think it would be in the interest of all the citizens of the United States to expand those things, because they could. They would have reason to be proud of what their country had done and what they had contributed to, to the point where it would actually be making everyone's lives better, even the lives of Americans, to do more for others.

SPENCER: You mentioned the sort of conflation of GDP with wellbeing. And I see that a lot in economics too, where, because they often talk about things in dollar units, it makes it seem like dollars equal utility equals everything that's good for people. And I think that can be extremely confusing as well.

NICK: I did want to say — since we talked about the theory of wellbeing, and I highlighted the aggregation piece as where a lot of the action is for figuring out what's really important — we talked about this notion of utility, and we talked about these expected utility representation theorems and how they don't prove that we should be utilitarians. The thing that comes closest to proving that we should be utilitarians is an argument called Harsanyi's aggregation theorem. It's by this economist who became more interested in philosophy at some point in his career (I can't remember which was first). He has this 1955 paper where he introduces the theorem, and it's pretty simple, but I think it's conceptually very important and underrated. There are a lot of people who major in philosophy and go through their program and never hear of this theorem. But I think it's very good and very interesting. Basically, what the theorem says is: suppose we want to have a ranking over different possible things we could do — let's call them prospects — where a prospect is a thing that assigns a probability to a number of different possible outcomes. Intuitively, if you do something, all these different things might occur, so every action is associated with a prospect. And then let's say we want to rank these prospects and say which ones are better for society, or the world. Let's say we also have rankings over all possible prospects for each individual. That should be especially plausible to us because of the Von Neumann-Morgenstern theorem we talked about before: every person, if they're satisfying the axioms, has a utility function of their own, and we can think of it as representing their wellbeing — or we could think of the utility function that would represent their wellbeing, if we're setting this up the way I would like. So, they've all got their own utility functions.
And let's suppose we add two further requirements. The first is that if one prospect is better for somebody, and worse for no one, then we should prefer it. That's basically the Pareto principle from economics. And the last one is something we'll call impartiality, which says: we don't care who's getting these goods, we just care what the wellbeing levels of the different people are. So if we relabeled the names of everybody, or mixed them up, it leaves things as good as they were before.

SPENCER: Let me just make sure I understand the setup. We've got a whole bunch of people — Amy, and Bob, and Cameron — and each of them has a utility function, which is basically how valuable they think different states of the world are. And furthermore, we're being impartial, so we're just as happy to satisfy one person's preferences as another's. Now that that's set out, there's one more condition. What is the other condition?

NICK: We're trying to rank different outcomes for the world. And we're doing it in a way that complies with expected utility theory.

SPENCER: So, we basically want to say, "Okay, given that they each have utility functions, how do we decide the way the world should be that sort of respects all utility functions?" Right?

NICK: Yeah, which is basically the aggregation question: there are all these beings with wellbeing — how do I map that to a goodness of the world? So basically, what Harsanyi showed is that if you satisfy all those axioms we just stated, and you're working with a fixed population size, then the function you have to use is the utilitarian one that just adds up everybody's wellbeing. Otherwise, somebody's violating expected utility theory, or you're not being impartial, or you're violating the Pareto principle. And I think that's a pretty impressive theorem, because all of those things seem pretty good. I think there are some limitations, and we could talk through them, but those conditions are going to apply in a lot of cases.
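Here's a tiny numeric illustration (invented wellbeing numbers) of how the additive rule the theorem delivers respects the two conditions just described:

```python
# With a fixed population, the additive rule scores an outcome by
# summing everyone's wellbeing. Two of Harsanyi's conditions are easy
# to see in it: a Pareto improvement raises the score, and relabeling
# the people (impartiality) leaves it unchanged. Numbers are invented.

def social_value(wellbeings):
    return sum(wellbeings)

outcome_a = [3, 5, 2]   # wellbeing of Amy, Bob, Cameron
outcome_b = [3, 6, 2]   # better for Bob, worse for no one

print(social_value(outcome_a))   # prints 10
print(social_value(outcome_b))   # prints 11 -- Pareto improvement scores higher
print(social_value([5, 2, 3]))   # prints 10 -- relabeling people changes nothing
```

The theorem's content is the converse direction: any social ranking satisfying expected utility theory, Pareto, and impartiality over a fixed population must be representable by a sum like this.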

SPENCER: So, essentially, it's saying that if you're applying expected utility maximization to each individual, then to create the best society, given these reasonable-seeming conditions, we want to apply the expected utility maximizing principle to the whole society. Now, I think it could still come under the same sort of attack, where we say, "Well, the utility functions of individuals are just sort of arbitrary things that they're trying to maximize" — one person could be trying to maximize suffering in the world, or something like that. So it's still weird in that sense; it's still not what Bentham was talking about. But can you talk a little bit about why you think this is impressive, and what it teaches us?

NICK: Sure — and this relates to the thing you just said. The way I would prefer to set up this theorem is not in terms of exactly what everybody wants, but in terms of what maximizes everyone's wellbeing. There's a philosopher, John Broome, who did this very carefully in a book called Weighing Goods. He started off as an economist and became a philosopher, and I'm a big fan of his work. He has these two books, Weighing Goods and Weighing Lives, that are probably my favorite ways to really pin down the right theory of aggregation — or, I don't want to use the word right, the most useful theory of aggregation. He says: let's interpret these utility functions as representing people's wellbeing, and think of the axioms as requirements on the wellbeing ranking. So instead of stating transitivity as "prefer A to B to C," we'll say each person has a wellbeing ranking — "better than, for Spencer" — and that ranking needs to satisfy transitivity: if A is better than B and B is better than C, then A is better than C. It needs to satisfy a sure-thing principle: if A could be better than B and couldn't be worse, then A is better than B. And it needs to satisfy completeness: for any two prospects, one of them is better than the other for Spencer, or they're equally good. If you satisfy those, then we can just reinterpret those classic theorems and say we can construct a utility function — but here we actually have a reason to think it represents your wellbeing. And then we say, "Okay, we're going to define the goodness of the world in terms of everybody's wellbeing functions." That's a little persnickety — especially if you think preferences just are utility — but if you don't, it matters.

SPENCER: The way I interpret what you said is: we're defining people's preferences here by identifying them with their wellbeing, then we're applying Von Neumann-Morgenstern to get a utility function for each person, then we're applying Harsanyi's theorem to get a utility function for the whole world. And so that bootstraps into something that looks a lot like utilitarianism. Am I interpreting that right?

NICK: Yeah. And I would call it fixed population utilitarianism, because the Harsanyi theorem doesn't cover cases where you can create extra people; it just works with a fixed population. But once you've gotten there, that's really doing a lot for you. If you bought this, then you really should be adding things up. And if you're in a situation where the main effect of what you're doing is not creating extra people — and it's a situation where you're not violating constraints or special obligations, the kind of activity where what you're doing is perfectly fine and the point is to help people — then it really gives you a conceptual framework, which I think is pretty good, for saying which things are better and which things are worse. Obviously it doesn't tell you what the consequences will be, but at least it gives conceptual clarity to what you're trying to do.

SPENCER: That's a really nice way to look at it. So you mentioned that this has a bootstrapping effect to get you to utilitarianism, but it doesn't deal with the population ethics side. Do you want to say some things about that?

NICK: Yeah, sure. So Broome actually deals with this in a way that I find pretty cool and impressive in his book Weighing Lives, which is the follow-up to Weighing Goods — the book that really focused on Harsanyi's aggregation theorem and went through it in a really careful way. But Weighing Goods doesn't answer the question of population ethics. And maybe I should just say a little bit about population ethics first. I think a lot of people might think, "Well, how about you just maximize the average? Isn't that the obvious thing, if just working with the people we have now is off, and just adding people isn't necessarily the solution?"

SPENCER: It's like the average utility of all conscious beings or something like that.

NICK: Right. Maybe the best reason you don't want to just maximize the average is that, if the average is negative, then adding badly-off people could, in theory, make things better. But that seems crazy. For example, suppose everybody is suffering a lot and has a wellbeing of negative 10, and you add a bunch of people with a wellbeing of negative five. Intuitively, that's better for no one. It didn't help anyone — it increased the average, but it's not an answer. I wanted to throw that out because I think a lot of people hear this problem and think, "Well, yeah, I've got the answer to this one." What John Broome argues in Weighing Lives is basically this: suppose we have fixed population utilitarianism, so we're ranking things by adding up wellbeing whenever the population is fixed. There's still a question of what happens when you're adding people. You could think that adding another person has to be helping if they're at some level of wellbeing, and has to be hurting if they're at some other level of wellbeing, so there has to be some point in between where it's neither helping nor hurting. What point is that going to be? I think it's hard to say exactly, and that's a further area of variation you could have. Also, in theory, that level could change as the population changes: maybe when you don't have a lot of people, the neutral level is small, and maybe when you get more people, the neutral level goes up. I don't know why you would think that, but it's conceptually possible. Broome points out something kind of weird: if you have this type of dependence — if how good it is for another person to live depends on how many people there have been, or what their wellbeing levels were — then things could depend on the distant past in ways that it doesn't seem like they should. We could be deciding whether to do A or B, and it could turn out to depend on facts about ancient history — how many people were in Rome, and exactly how well their lives went. But intuitively, if we're talking about how much we want to pay for some kind of contraceptive policy or something, that feels neither here nor there; it shouldn't matter. So if you want, you could build in a requirement that these levels don't depend on what happened in the distant past. And then you need there to be just one neutral level, and you can have arguments about what it is.
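The counterexample to the average view is just arithmetic; here it is with the wellbeing numbers from the conversation (the population sizes are invented):

```python
# Average view vs. total view on the example above: everyone is at
# wellbeing -10, and we add a bunch of people at wellbeing -5. The
# average rises even though no one was helped and the new lives are
# bad. Population sizes are invented for illustration.

def average(pop):
    return sum(pop) / len(pop)

before = [-10] * 100           # 100 people at wellbeing -10
after = before + [-5] * 100    # add 100 people at wellbeing -5

print(average(before))             # prints -10.0
print(average(after))              # prints -7.5  (average view: "better")
print(sum(after) - sum(before))    # prints -500  (total view: "worse")
```

The average view calls the second world an improvement; the total view, like the intuition, counts the added suffering against it.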

SPENCER: And the idea is that the neutral level is the level where adding someone at that utility level neither makes the world better nor worse, according to the theory. Right?

NICK: Right. So then this is the question: what is that neutral level? That's an argument that, if you accept it, basically gets you to, "Okay, there's one neutral level." So then the question is: what is it? John Broome doesn't really answer this question, but there's an argument that feels pretty good to me for why the neutral level should be zero, which is the classic total utilitarian view. But what does zero mean? It's sort of difficult to define. One thing people have tried is a blank life: maybe if there were a life that just had nothing happening in it, that would be neutral. But you could debate that. Maybe somebody would say it's kind of a tragedy for there to be a blank life — if things were otherwise pretty good and you just add this life where nothing happens in it, it's just sad. So I don't know, you could debate that. There's also kind of a cute continuity argument you can make. Consider a life that lasts for duration T. We have an apparatus that tells us how good that life is for any length: if we describe the life, we can use the framework we've described to pin a value on it. Now let T approach zero — and don't do anything too crazy with the life, don't make it get worse without bound and come back up in some weird sine wave, just keep it normal — and think about the value as T approaches zero. Say that's zero: the maximally short life, where nothing happens, has the zero value. I think that's a pretty intuitive way of thinking about it. And if you do that, then you're left with this total utilitarian view. I think the total view is not perfect. It has some really notable difficulties with infinite populations — just making things be well-defined there — which Nick Bostrom explores in a really helpful way in his Infinite Ethics paper.
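The continuity argument a few lines up can be written compactly. If we write V(ℓ_T) for the value the framework assigns to a life ℓ truncated to duration T (notation introduced here for illustration, not Nick's), the proposal is to set the neutral level at

```latex
% The neutral level is the value a (well-behaved) life approaches
% as its duration shrinks to nothing:
0 \;=\; \lim_{T \to 0^{+}} V(\ell_T)
```

that is, zero is whatever value a life converges to as it shrinks to the maximally short life in which nothing happens.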

SPENCER: Even a small probability of an infinite population could cause chaos too, right?

NICK: And I actually have a big chapter about that — the second-to-last chapter of my dissertation — which is on my website, and people can dig into it if they want.

SPENCER: What's your website address?

NICK: Yeah, so there are some real weird things there. But I think when your aspiration is having a theory that works reasonably well in practice, it's reasonable to set aside the infinite stuff if that's not most of what you're thinking about. So anyway, we've now gotten through a very long-winded argument that I think lands on total utilitarianism. There's a lot more we could say — about arguments against it, what we would say to those, how this relates to all the impossibility results, and all of that. But I do think we've sketched out a pretty comprehensive view that works in a lot of cases.

SPENCER: This has been really great, Nick, thank you so much. I've loved digging into all these topics with you.

NICK: Yeah, thank you. It was really fun.




