CLEARER THINKING

with Spencer Greenberg
the podcast about ideas that matter

Episode 118: Critiquing Effective Altruism (with Michael Nielsen and Ajeya Cotra)

August 20, 2022

What is Effective Altruism? Which parts of the Effective Altruism movement are good and not so good? Who outside of the EA movement are doing lots of good in the world? What are the psychological effects of thinking constantly about the trade-offs of spending resources on ourselves versus on others? To what degree is the EA movement centralized intellectually, financially, etc.? Does the EA movement's tendency to quantify everything, to make everything legible to itself, cause it to miss important features of the world? To what extent do EA people rationalize spending resources on inefficient or selfish projects by reframing them in terms of EA values? Is a feeling of tension about how to allocate our resources actually a good thing?

Ajeya Cotra is a Senior Research Analyst at Open Philanthropy, a grantmaking organization that aims to do as much good as possible with its resources (broadly following effective altruist methodology); she mainly does research relevant to Open Phil's work on reducing existential risks from AI. Ajeya discovered effective altruism in high school through the book The Life You Can Save, and quickly became a major fan of GiveWell. As a student at UC Berkeley, she co-founded and co-ran the Effective Altruists of Berkeley student group, and taught a student-led course on EA. Listen to her 80,000 Hours podcast episode or visit her LessWrong author page for more info.

Michael Nielsen was on the podcast back in episode 016. You can read more about him there!

JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast. And I'm so glad you joined us today. In this episode, Spencer speaks with Michael Nielsen and Ajeya Cotra about conceptions of effective altruism, comparisons of EA to other movements, and the nature of doing good.

SPENCER: Ajeya and Michael, welcome.

MICHAEL: Hi, Spencer.

AJEYA: Hi, Spencer. Good to be here.

SPENCER: I'm really excited for our conversation today about a topic that is very important to me, and I think the dialogues on this topic are often just not that high quality: the question of what is great and what is not so great about effective altruism as a philosophy and as a community. The two of you have a lot of insight into this topic. Michael released a really interesting essay with some of his thoughts about effective altruism — some good things to say about it, but also quite a number of critiques of it, which we'll get into — and Ajeya has worked in effective altruism for a long time, and I know has very thoughtful ideas around it. So, we're going to explore those two things together, but I just want to say something about what this conversation is really supposed to be doing — which is that it's the fourth in a series of conversations where we bring together people who have some serious disagreements on an important topic, and we explore them together not to have a debate, but really to see if we can all figure out why you disagree, and what the different elements of the disagreement are. I think of it as a collaborative exercise, rather than two people just trying to win over the audience. With that, let's get started. The first topic I want to go into is really what effective altruism is. In Michael's essay, he starts with the definition, “using evidence and reason to figure out how to benefit others as much as possible and taking action on that basis,” or shortening it to “using evidence and reason to do the most good possible.” I want to start there, Ajeya. Do you think that's a reasonably simple definition of what effective altruism is?

AJEYA: It's broadly reasonable, with one caveat. It's possible, and I think fruitful, to separate the intellectual project of figuring out how to do the most good with some set of resources (like some amount of money or 40 hours a week in a career) from the moral or personal decision of how much of those resources to devote. (Luke Muehlhauser recently wrote some notes on his blog about how he sees effective altruism.) So there's the question of how much of your time, energy, and money you want to choose to spend in this way versus, given that you've made that choice, what principles you use to figure out how to allocate your altruistic capital or energy. EA is much more distinctive and has much stronger takes on the latter question: if you've decided you want to do good with your career or a donation, EA has a lot of thoughts about how to go about doing that as effectively as possible. It's not that it has nothing to say about the question, “Should you try and help others with some of your resources?” There's a moral appeal to do that, but that piece of it is much more personal and much more variable across people who call themselves EAs.

SPENCER: I will just add that there's really a third part of EA, which is just the community. I'm curious to hear your reaction to the way Ajeya described separating out the question of, “Given that you've decided to, let's say, spend 40 hours a week working on improving the world or give $10,000 to charity, how do you do it most effectively?” That's one question in effective altruism. And the other question is, “How much should you give? How much of your time, energy, and money should you give?” And she kind of wanted to split those two questions. I'm wondering how you feel about splitting them apart. Do you think that your critiques are just about one or the other? Or do you think your critiques cover both of those?

MICHAEL: First of all, especially for the audience's sake but also my own: I don't have some canonical set of critiques; these are just thoughts. The specific division is obviously useful as a way of deciding on your own personal engagement. I also think it's interesting to think about it in two other ways. One is at the organizational level: do organizations ever make that same kind of analogous split? Something that really struck me as I started to think a little bit about EA was just how attractive very demanding ideologies tend to be. The more something demands of you, in many ways, the more attractive it is — certainly at least for certain types of person, and I'm probably one such person. And then something I'd be interested in hearing Ajeya comment on is: how exactly would you suggest making the cut that you're talking about?

AJEYA: In terms of the actual object level, how much would I suggest the typical person invest? Obviously, that's a very personal question. I think the messages EA tries to send in practice are broadly reasonable. There's a lot of emphasis on trying to figure out if there's a career that's a good fit for you where you can have a lot of impact, and on thinking about impact as an important criterion; and, while you're picking a career, on thinking about what amount of your income you can sustainably donate and what makes sense for you there. Especially on the career side, as EA has shifted its emphasis over the last several years from donations to careers, there's been increasing discussion both of the importance of finding a personal fit and of the importance of managing the risk of burnout, etc. The way EA tries to draw the line, and the way a lot of EAs I know draw the line, is that a lot of us are excited about having a full-time career that is trying to do good in this effective altruist way. But lots of people have careers, lots of people have demanding, big jobs, and they also have families, and they also have weekends and evenings and their lives. That's kind of how I draw the lines. It's important to me to do well in my job, but I'm not trying to make myself, say, work 80 hours a week as opposed to 40.

MICHAEL: Can I just ask, do you think, for example, that the people who are doing the most good in the world are EAs, Ajeya?

AJEYA: That's a good question. I would say that the people doing the most good in the world are disproportionately EAs, but the majority of them are not EAs.

MICHAEL: Can you give me an example of a person you feel is doing a tremendous amount of good who is not an EA? I realize that's maybe a slightly loaded question, or one that you're [laughs] going to want to think harder about, but I'd be very curious if there's somebody or a group you're comfortable naming.

AJEYA: I don't know if you would count this as not EA, but I think the Gates Foundation does a tremendous amount of good — Bill Gates does a tremendous amount of good. A lot of companies that are just for-profit companies do a lot of good by bringing down the costs of providing essential services. In the rich world, Amazon comes to mind as one that has made a lot of things a lot cheaper and a lot more convenient. That has done a lot of good, and they've captured a lot of that as profit, but I don't think they've captured all of it as profit. You can kind of separate the question of “Who's done the most good?” from “Who's done the most good that they haven't recouped the value of in monetary terms?” If you ask just the first question, without having to stipulate that they didn't benefit from it, a ton of for-profit companies would count.

MICHAEL: This is incredibly interesting. Some of my more libertarian and VC friends I've chatted with about what they think about EA often just say, “I don't know why they're focusing on all these not-for-profit causes. If they really cared about doing good in the world, they'd start companies that made humanity much better off.” That's quite an interesting critique, not really of the intellectual project of EA, but of the “in practice, how it grounds out.”

AJEYA: With both of the examples I gave, the Gates Foundation and a number of for-profit companies, I was taking a pretty holistic sense of just looking around the world at what is something that has a positive sign and large magnitude. The more you bring in pretty specific philosophical assumptions about what the nature of the good is, for example, if you took it seriously that animals have moral standing and there's so many more of them than there are humans, then the set of people who could possibly be doing the most good has shrunk a lot because you've changed the values to cut out Amazon and the Gates Foundation and all those other people. And now you're looking among a much smaller set of people who are working on helping animals because you've kind of, from a philosophical perspective, decided that's the most important, and the same for existential risk reduction or something.

MICHAEL: Sort of thinking about the animal example in particular: where I was going before, a little bit, is an idea that I find fascinating, the idea of a libertarian EA that is very focused on companies or startups as vehicles for doing the most good. On the comment about animal welfare, can you match those two up in any way? I guess if you think about longtermism and including people in the far-distant future inside current economic calculations, that seems somehow like it ought to be possible with notions around insurance or something like that.

SPENCER: Impossible Foods, making Impossible Burgers or companies trying to make clean meat that would replace the need for factory-farmed animals.

AJEYA: I'm very excited about clean meat type startups, but it's not libertarianism or capitalism doing good in the same way as Amazon is capitalism doing good, because in the Amazon case, the people with the dollars are making the statement that this did good for them by paying for it. Whereas in the Impossible Foods case, it's that some altruistic consumers have decided they don't want to consume meat, and they're the ones with the money that are paying for Impossible Foods. So, it feels somewhat different. You'd have to have some way of eliciting preferences from the animals directly to make it analogous.

MICHAEL: To just switch over to something completely different: if you think about voting systems, there is no substitute, as far as I can tell, for giving children the franchise – giving 12-year-olds the right to vote or something would, I think, be substantively different from anything else I can think of doing in a more indirect way. There's an analog there, and basically, you're saying animals, in some sense, are not participating in the economy.

AJEYA: You're right that with future people, because we expect them to be intelligent, rational agents and have all these institutions (maybe with insurance or discount rates or long-lived institutions), they could participate in the economy. The answer to your friends' question about why EA isn't starting more companies is that EA has undergone a kind of gradient descent toward being more and more about the constituencies that have structural reasons why they cannot participate in markets. Focusing on global poverty is sort of that and sort of not; there are still amazing for-profit companies focused on helping the global poor, like Wave. But EA is focusing on animals and people who aren't born yet precisely because it's structurally impossible for the market to serve them.

MICHAEL: I should say, I don't particularly agree with my friend exactly, but for reasons I've found hard to articulate. It has something to do with the sense that my own personal interest tends to be more in providing public goods. That's something that EA seems both to have concentrated on a lot and to be trying to expand: the notion of what public goods are, where the public is not just the people who happen to be alive right now. It might also include other sentient beings, animals, and people in the far future.

SPENCER: Ajeya, do you view that specifically as a reaction to market forces failing for those groups? Is that why it's a focus for EA, because companies are already able to serve, to a significant extent, people who have purchasing power and can participate in the economy, so maybe those people need less focus?

AJEYA: I would say that most EAs don't necessarily put it that way, but the sort of EA logic is a consequence of that. In terms of trying to help people in the United States versus trying to help people in developing countries, the basic EA logic just goes: people in developing countries are, on average, much less wealthy, so a given amount of wealth transfer can help them a lot more and go a lot further. That's the kind of observation and the logic, and it is downstream of the fact that people in richer countries have participated in these market economies for a lot longer and have grown richer as a result. So you could say that that is the reason EA focuses on these constituencies.

SPENCER: Near the beginning of this conversation, Ajeya, you mentioned that one aspect of EA is just saying, “Well, given that I'm going to give away a certain amount of money or invest some amount of time in helping the world, how do we do it effectively?” But it does seem to me that there is a strong strand of thinking in EA that we should (there should be an imperative to) give a lot and to do a lot. That it's not just we should arbitrarily decide, ”Oh, I'm only gonna give away 1% of my money and then just call it a day.” I'm curious to hear your thoughts on the sort of underlying philosophy that gets people to think that “Okay, you should do a great deal, and not just a little bit.”

AJEYA: In large part, it feels to me like somewhat of a selection effect. EA has these complicated ideas they are selling to people that want to do altruism with some amount of their resources. I think you have to be pretty passionate about doing altruism and pretty invested in putting a pretty large amount of your resources toward it before it becomes appealing to be putting all this thought into it. People who are attracted to being hardcore in this one sense of thinking extremely deeply and self-skeptically about how to do good tend to be the same kinds of people that have the passion that makes them feel a moral obligation or a strong emotional pull toward giving a lot of themselves. Even though they're separable, in principle, I agree that many EAs are doing a lot, feel they should be doing a lot, and feel that others in a similar position of privilege to them should be doing a lot more than they typically are.

SPENCER: The way you put that makes me feel like you don't feel it's a fundamental part of EA, that it's sort of just a side effect. Is that right that you don't think it's fundamental to effective altruism?

AJEYA: I don't think it's fundamental to the intellectual project, going back to when we were splitting those definitions earlier. I have lots of friends who appreciate EA's logic and are just like, “Well, you know what, I don't really want to spend my life that way. Maybe if a job comes along that I am really excited about for personal reasons, this would be a tiebreaker, this would be a bonus.” They understand the intellectual arguments, but they aren't pulled to give a lot of themselves to it. But I think it is a pretty big strand of the community, and the people who are most likely to call themselves EA, most likely to be found on the EA Forum, or just the public faces of EA, are in fact devoting a lot of their lives to this, and from the beginning have had this strong sense of emotional pull to doing that.

SPENCER: I want to read a couple of quotes that Michael uses in his essay. The first is from Nick Cammarata — I'm sorry if I mispronounced his name. Sorry, Nick. — the quote is, “My inner voice in early 2016 would automatically convert all money I spent, for example, on dinner to a fractional death counter of lives in expectation I could have saved if I donated it to good charities. Most EAs I mentioned that to, at the time, were like, “Yeah, that seems reasonable.” And there's another quote that Peter Singer uses in his book “The Most Good You Can Do” about Julia Wise and effective altruism — I will mention Julia Wise has changed her views on this, I'm pretty sure, so this is sort of like earlier Julia Wise — but basically, he says in his book, “When Julia was young, she felt so strongly that her choice to donate or not donate meant the difference between someone else living or dying that she decided it would be immoral for her to have children. It would take too much of her time and money. She told her father of her decision, and he replied, “It doesn't sound like this lifestyle is going to make you happy,” to which she responded, “My happiness is not the point.” Later, when she was with her husband, Jeff, she realized that her father was right. Her decision to not have a child was making her miserable.” It seems to me that this kind of intensity, this sense that if you don't do good maximally, or if you know you invest in something that you just want, you're letting down all these people that you could be saving. This idea really does exist in EA, and I think, to a lot of people, this is really an extreme viewpoint. I'm curious to hear your reaction to that.

AJEYA: I personally have gone through periods where I thought like that, and it was very hard for me. Right now, I spend a lot more money on myself, and I don't think as much about money. That's partly because I felt like the most good I could do was likely to be through my career, and it was likely to be counterproductive to spend a lot of mental and emotional energy being frugal in other ways. Another example is that I used to be vegan for similar reasoning, just kind of thinking about the animals I might be harming with every meal. I switched back from that because I was having a hard time figuring out food that worked for me. Now I'm just far less purist on a lot of axes. I think a lot of EAs go through a similar journey, where they're very intense about this; it's extremely visible and feels morally horrifying to do things for themselves when that time or that money could have been used to help others who have far greater needs. And right now, it's not that I'm not demanding of myself in a lot of ways. That energy of demandingness toward myself has shifted toward really making sure that I'm not messing things up at work, really questioning whether I'm using my time in the most effective possible way, questioning whether the focus of my career makes sense, whether I have the right beliefs about the long-term, big-picture questions that could inform what I do, etc. I personally am fairly demanding of myself, and I'm in a local community of EAs, many of whom are very demanding of themselves. And so, I think that's absolutely real, and it's definitely a big part of how many EAs experience the community, the philosophy, the ideas. Though, again, I feel like there's a pretty wide tail, a broad tent of people, and the people who are most likely to say “I am an EA” and who are most likely to be visible as EAs are usually the people for whom this self-demandingness is higher. There's a lot of people who are like, “These ideas make sense. Every year, I donate 1% or 5% of my income to GiveWell, and that's how I engage with this.” There's definitely room for quite a spectrum of demandingness within the general intellectual philosophy and movement.

SPENCER: Now, I just want to say to any listener who thinks to themselves, “How could you have that attitude, that anytime you take some selfish action you're letting people die?” I think it's useful to have a thought experiment here. Imagine that you literally could save someone's life instead of buying yourself some ice cream, and you decide to buy yourself the ice cream. There's something horrifying about the choice to buy ice cream rather than save someone's life. What can happen to some effective altruists is that they start seeing that as a genuine trade-off. They start saying, “Well, what if I took this money and went and helped people? Maybe I could save a life. Maybe not for the price of ice cream, but maybe for a few thousand dollars, I could save a life. Can I justify spending a few thousand dollars in another way if it's true that I could have used the money to save a life?” Michael, I'm curious if you want to jump in here, because I feel this was a core component of your essay. The way I read it is that you feel effective altruism kind of implies this way of thinking and doesn't necessarily have a principled way to get out of this frame of thinking, even though, in practice, many effective altruists do move away from this kind of analysis.

MICHAEL: It's a very demanding thing to ask people to move away from. Basically, you have a very clear principle which seems to lead a surprising number of people into this terrible moral quandary, one that is very difficult for at least some people (certainly not all, but for some), and which they then often seem to have to find their way out of on their own, with help from friends. That is not quite the same as having help from very large organizations explaining in principled and compelling ways how to get out of the quandary. So, grant that you can get out of it. And, of course, many people are not worried about eating ice cream and the damage that it may do. I think it's interesting to just ask the question, “What is the essential difference between that and many sorts of traditional ways of doing good in the world, like people who grew up going to Rotary Clubs, or participating in Lions Clubs, or participating in many, many other ways of contributing to their community?” A nice example that I think about frequently is people contributing to open source projects, where in some cases one or two people will effectively be holding together a large fraction of the world's economy. A few years ago, one of the key libraries used to secure internet communication was found to have a bad bug in it, the Heartbleed bug. This caused all sorts of chaos. And it turned out, I think, that it was one person who was working full time on this, and they were just kind of donating their time. They were doing some incredible amount of good in the world that way, and that's arising out of a very different sort of ethos — the open source ethos. So I'm just wondering, I guess, how different EA is. How would you characterize the difference between EA and more traditional ways of doing good?

SPENCER: You mean, once you've kind of stepped away from this maximizing principle, right?

MICHAEL: Once you've stepped away from extreme altruism, as some people call it, what becomes sort of the distinguishing feature? I have some thoughts, but I would love to hear Ajeya's thoughts.

AJEYA: I think this is where the question of whether you're doing bounded or unbounded optimization comes in. That's the distinction we made at the beginning: how you think about what good to do with the resources you've decided to devote to doing good, versus how many resources you want to devote to doing good. If you try to maximize the product of those two things, just trying to maximize the good you do, you get these kinds of extreme altruism implications. But if you factor that product into those two terms, then the term that is about the most good you can do with your 40-hour-a-week career, or with the $10,000 you want to donate, or with the 10% of your income a year you want to donate, still has extremely distinctive implications. Just empirically, EAs focus on extremely unpopular causes like farm animal welfare and existential risk reduction. EA is a significant plurality, if not the majority, of the effort and money going toward a great number of weird causes. So it seems a little odd to ask what the difference is. Lots of people participate in altruistic projects, and those altruistic projects are all different from each other on the object level, even if the way people engage with them, in terms of how much of themselves they devote to it, is similar. But that doesn't seem like the most important or interesting axis of difference to me.

MICHAEL: What do you think is the most important or interesting axis of difference?

AJEYA: Asking the question of why the EAs are focused on AI, existential risks, farm animal welfare, malaria net distribution, and things like that, as opposed to open source software or participating in their Rotary Club, etc. It seems like there's an empirical difference there, so it's interesting to me that you're asking this question of “What is the distinction?” And I think the distinction is that it's a sort of distinct kind of thought pattern, a framework for thinking about cause selection that leads people to very different causes than other people tend to focus on.

MICHAEL: The claim seems to be frequently made that it's more effective to do these other things. Of course, the people working on the open source project might well disagree with that — or I guess I've encountered two types of people there. Some people just don't actually seem to have thought that much about how effective they're being with their particular way of doing good in the world — and that's a very interesting frame of mind. From my point of view, it's a little bit foreign, but many people seem to have thought very hard about how to be doing the most good from their own perspective, and they've just arrived at completely different conclusions about that.

SPENCER: I just want to point out that I suspect there's an extreme selection bias going on there, Michael, over the people you hang out with because I think the vast majority of people, when they're doing charitable giving, they're just not in the frame of mind of how do I make this be as effective as possible?

AJEYA: Yeah, I agree with that.

SPENCER: Yeah, they hear about a charity in their community or from a friend or on TV, and they're thinking, “Oh, that's a good thing to do.” And then they're giving money, and they feel good about it.

MICHAEL: But I mean, you're now attributing to individual EAs all the thought and care that has been done by a relatively small number of people. If you talk about malaria bed nets, certainly some people in EA have thought very, very hard about what makes that effective. A lot of other EAs are just like, “Well, GiveWell thought pretty hard about it,” and their conclusion is to give money there. They're not thinking much harder at all than somebody who decides to give to the Red Cross or whatever. They're just delegating to a different trusted proxy.

AJEYA: In a situation like that, you kind of have to think about what led them there; I can imagine good reasons and bad reasons for this person to end up trusting GiveWell. The story of a lot of EAs is something like: they have started making a bit of money, they want to donate it effectively, they Google something like how to do the most good with your money. If GiveWell existed at the time that they Googled that, they poke around on the website, they think it makes sense, they read various articles about the principles behind how GiveWell makes its decisions, they decide they agree with that, and then they decide that that is the amount of effort they want to invest in making this decision, and they want to trust GiveWell going forward. That doesn't seem like an unreasonable process at all to me. It is still different from the process that somebody else, who spends a similar amount of time researching charities but asks different questions, would go through. Somebody else might say, “I really want to help girls' education in the country where my parents are from.” They would Google around to find some charities that work on that cause area, poke around to do their due diligence to the extent they want to know whether the money is being spent reasonably, and pick some charity. So both parties did a modest amount of online research before picking what charity to donate to; they just asked different questions, approached it from a different attitude, and reached different conclusions.

[promo]

SPENCER: You mentioned, Michael, though, that you feel you know a lot of people who are asking this question of “How do I have the most impact?” or “How do I make my work as effective as possible in helping the world?” I'm curious, to what would you attribute the difference in their conclusions relative to EAs?

MICHAEL: Some of it, of course, is just comparative advantage. (I forget the name of the guy who was responsible for the DNS system for decades.) He was basically responsible for the world's address system [laughs] — pretty good choice, I think, for one person to decide to devote their life to that. That's partly a question of opportunity; he was in the right place at the right time. It's also partially a question of comparative advantage. I doubt that when he was 18, he thought to himself, “I'm gonna go in this direction.” He may have just been, “Oh, I'm really interested in computers,” and so he sort of ended up there. I guess, in general, a philosophical difference for me is with the much more hierarchical view that in practice has been adopted by EA, where, from my point of view, very centralized organizations have a lot of influence over what people see as effective. I tend to trust more the decentralized view that says, “I like just a little bit more chaos,” where there are a lot of people going around and doing what they, and a few of their friends, have decided is the most effective thing for them to be doing. Most of those people will be wrong, and the median may be worse, but I think that the nature of the returns in this situation is that, in fact, a few outliers really make up for a lot. The person who decided to go and work on the thing that turned into the DNS system has been tremendously valuable to the world. It's probably not the best example — I don't know, Norman Borlaug is maybe sort of an interesting example — or the person I plugged earlier, who decided to work on open source network security; that is a tremendous way to spend their life. That doesn't appear anywhere on GiveWell's list, as far as I know. (Maybe it does.) That's not a critique of GiveWell. It's not meant as a critique of GiveWell. It's meant rather as a hurrah for having lots of people make decisions in this kind of more decentralized fashion, where you're not trying to have a few institutions decide what is effective and what is not.

AJEYA: A few comments on that. One, I do think it's an awkward situation, and it makes for a huge possibility of important blind spots, that there's a pretty small set of individuals and institutions in EA with a lot of the money, and almost everything in EA is not-for-profit. And so those funders have a lot of power, and their decisions about how pluralist to be versus how much of their inside view to bet on can affect the field a lot. That is important — I don't know about “critique” — but it's an important red flag or something to be watching out for that probably creates bad dynamics.

MICHAEL: Just as a structural factor, it's a choice that the community has effectively made, to be organized that way, and it has consequences for the quality of the decisions which are made.

AJEYA: I feel like that's maybe imputing too much of a single voice to the community. There are a number of EAs who are unhappy with the situation. What seems more true to me is that there are, like you said, literally a couple of individuals with a huge amount of wealth who are really bought into this way of thinking, this philosophy, and those people chose who they wanted to trust (which other particular individuals they wanted to trust to help allocate their money), and those people hired people. I think that's kind of how the centralization happened. I think it's the case that Cari and Dustin, and later Sam Bankman-Fried, and a number of people who are also wealthy but on a smaller scale, thought, “I want to give away my money to do as much good as possible. I'm going to hire a staff of people whose thinking makes sense to me, where I think their reputation suggests they're good,” whatever that process is. Then those institutions are set up to have a pretty good shot at culturally dominating EA, and in fact, they do culturally dominate EA. If there were a third major funder, they would pick a slightly different set of people and maybe have a slightly different perspective on things. So, it doesn't feel like as much of a community choice to me. And I do want to address the thing you said about preferring decentralized, kind of market-based mechanisms...

MICHAEL: Decentralized does not necessarily mean market-based, but I certainly prefer decisions made at the edges.

AJEYA: When you say you prefer decisions made at the edges, to me, it feels like either you have a situation where there's a large market and anybody can say, “I want to try doing this thing,” and they're pretty likely to find a niche, some seed capital for it, and a base of customers that can support them to do that thing, or everybody's doing a not-for-profit thing, and so the decentralization has to come from the funders' decision to fund things in a decentralized way. That is a kind of decentralization, in the sense that the funder is deferring to people and giving money perhaps more broadly or based on a somewhat different set of criteria. But it still feels like it's structurally—

MICHAEL: I think we might be talking slightly at cross-purposes. I was referring much more to people's individual decisions about how they spend their lives. I just tend to be a little suspicious — though obviously, what gets funded constrains that, so those two things do meet. I'm in favor of forces which will broaden the range of ways people think they can do good in their lives, not in favor of things which tend to centralize it. An example of this, to me: a few years ago, when I met EAs, very often they would have spent some time on direct cash transfers, or they'd spent some time living in Africa and participating in programs there. Now, when I meet them, they're much more likely to say, “Oh, I'm working on AI safety.” I just find that fascinating. I mean, this is fashion, which is what happens when a few organizations can set the agenda. You can move a lot of people from being interested in one thing to being interested in something else. It's not quite fair to use the law of the excluded middle here, but almost: one of those two choices was wrong. I'm inclined to think that either AI safety was the most important thing the whole time or working on poverty reduction was the most important thing the whole time. There's a problem with that argument, but I'll let you point that out. [laughs]

AJEYA: I guess I would say most people would agree with what you said. Most people in EA think of themselves as on a quest to figure out the most important thing they could do with their career or their money, and they have all these philosophical questions that come up and all these practical and empirical questions. Most people have a sense that there's some optimal portfolio — I wouldn't say there's some single optimal cause, because we have a lot of people and a lot of resources, and all these different areas have diminishing returns. But most people in EA would say, “Yes, some set of people have thought about these questions, and they've come to feel, through some combination of philosophical change and changes in their empirical beliefs, that the cause area they're now working on was the whole time more important than what they thought previously.” I think that is how changing one's favorite area tends to work — when you switch from A to B, you tend to have the belief that B was better all along.

SPENCER: One view is that if effective altruism has been increasingly shifting from, let's say, global health to trying to prevent dangerous effects of AI technology, that it's because EA is getting better at figuring out the best thing to work on. Another view is that it's fashion or some kind of status-related thing.

MICHAEL: I wouldn't want to make that argument. The point is that it made a mistake in one of the two cases, and because it's very centralized, that mistake has really bad consequences. Maybe, without that mistake being made, more people would have worked on AI safety earlier, if we had just a sort of two-part box model of the world, so to speak.

AJEYA: My understanding of your position is that it's difficult to figure out what the most important causes are — to simplify a little bit and say there's one most important cause as opposed to the most important or optimal portfolio. EA has a pretty small set of intellectual leaders that are trying to think about this and then disseminating their conclusions. And because this is a really difficult project, they're going to get it wrong a lot. But they're also going to influence a pretty large number of people to work on whatever they say is the current best thing that they believe. Even if that's completely in good faith, you're like, “Well, people should just have a policy of listening less to such intellectual leaders and doing their own thing more." And that could have led to a faster convergence on the optimal cause or optimal portfolio or whatever it is.

MICHAEL: That's a really nice summary, thank you. In many ways, you said it better than I did, right up to the very end. I'm not sure about that part; I have some lack of clarity around even what the notion of an optimal portfolio means. I'd probably delete the last sentence, but otherwise, yeah...

AJEYA: It would lead to more good being done if the set of people who call themselves EAs listened less to the relatively small set of people that were, so to speak, setting cause prioritization.

MICHAEL: I think that is plausible. And just in general, I'm distrustful of particularly centralized systems where the centralization comes from money. Where it simply comes from ideas, I'm less concerned about it, because the cost of rebelling tends to be significantly lower in that case.

SPENCER: Because in practice, the big funders have positions on how to do the most good; that's sort of their purpose. But that also means that to get money from them, you have to be at least somewhat aligned with their view on how to do good. That's probably where the suspicion comes from.

MICHAEL: The way this very concretely grounds out is in chatting with people who are EA-adjacent, who have thought about whether they should go and do something, or whether they should pitch a project to Open Phil or some other EA funder. I've had chats with some pretty frustrated people sometimes, where they will say they know that if they pick project A, they believe that they could get funded by one of the EA funders. However, they personally wish to do project B, which they know they don't have a chance of getting funded, and they also know that it is a better use of their talents for doing the most good in the world. In some sense, from their point of view, it feels like the EA funders' position is that they know better than the individual themselves how that individual can do the most good in the world. They just find that personally frustrating (which I can certainly empathize with). In some cases, it may well be correct.

AJEYA: This is why I initially interpreted your statement or question as being about the funding landscape, because I do agree with you that that's where the rubber meets the road in terms of centralization being bad. It would feel a lot different if it were just some people putting out ideas, and a lot of people listened to those people and chose to act on the ideas that the group put forward. So I think that is a legitimate challenge. At Open Phil, in particular, there's this persistent question: we want to do the most good, and we have not only particular values but a lot of context and particular beliefs about what actually will be effective in the world. At the same time, we're an uncomfortably large fraction of the funding in the broad space of people who are trying to think about how to do the most good with their careers. And there's always this tension of how much we defer to other people who share our broad values but come to pretty different conclusions about what would actually be effective, versus how much we fund on our inside view. We've done a lot of both, and it's a tough thing to navigate. This is a bottleneck that could be relieved by having more funders with slightly different perspectives in the space. In some ways, it's not so different from how, if you have a startup, you have to find some VC that's excited about your startup to fund it, and that might entail that you have to change your idea slightly from the one that you actually think is most effective, or you have to find a way to pitch it. It's just that it's a very illiquid market because there are so few VCs in this setting.

MICHAEL: My understanding now is that there are thousands — possibly tens of thousands — of angels writing checks, and so that's quite a thick market. You're saying, Open Phil feels this tension between sort of broadly enabling people who are philosophically aligned versus taking sort of — if I understood correctly — more of an activist stance on the convictions you already have. Can you make that just a little bit more concrete for me to understand how that shifted and what your judgments are around that? What have you learned from trying to navigate that?

AJEYA: I'm personally much more on the research side than the funding side. As an example (I'm definitely not speaking for Open Phil institutionally here), and without getting into specifics at the level of individuals or organizations: in general, we will often have situations where we say, X is an EA, they've been very involved in the community for a long time, they seem thoughtful, they seem smart, and they're starting project Y, which doesn't really seem to make that much sense to us. It seems maybe a little ill-conceived, or just not as effective as something else that we think a person with those skills could do. And there's a broad spectrum of choices in a situation like that: from don't fund them and have a kind of suite of particular projects you're looking to fund at any given point in time, to completely fund them and be really excited to fund them with no strings attached, to a bunch of things in the middle, where you fund them with no strings attached but try to persuade them that this other thing is better instead, or you fund them but you're kind of like, “Well, I'm kind of skeptical of this project. I'd like to see results A, B, and C within a couple of years.” It's a multi-dimensional space. It's not really a fund/no-fund decision. It's how much to fund, how much feedback to give, how pushy to be, while keeping in mind both that we feel we have good thoughts and a bunch of context, and that we have pretty intense power dynamics with people in the community, and people are inclined to take what we say too seriously by default. So there are a number of little situations like that that come up a lot.

SPENCER: I think expected value maximization comes into play here. Because if you are pretty opinionated about how to help the world, and you take the attitude that you're really trying to maximize the expected value of your impact, it might suggest putting a lot more eggs in fewer baskets. You're just saying, “Well, I think this is the highest expected value thing to do. So I'm just gonna keep putting money into it until the diminishing marginal return of additional dollars makes it so that the second best thing is actually now the best, and then I put money into that.” And you kind of go down from the top until you run out of things. But that might actually be a really small number of things, relative to the idea that maybe a lot of good things to do are things that we don't currently think are good, and we may need to make bets on things that seem strange but somehow are going to turn out to do a lot of good. So I'm just wondering, Ajeya, how do you think about the level of concentration? Because I wonder if this is also an element of Michael's discomfort around this.
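[For readers who want the allocation idea Spencer sketches here made concrete, the following is a minimal illustrative sketch in Python. The cause names and diminishing-returns curves are invented for illustration only; they are not drawn from GiveWell, Open Phil, or anything discussed in the episode.]

```python
# Illustrative sketch of the greedy "fund the best thing until diminishing
# returns make the next-best thing best" allocation Spencer describes.
# The causes and marginal-value curves below are hypothetical.

# Hypothetical marginal value of the next dollar, given dollars already spent.
causes = {
    "cause_a": lambda spent: 10.0 / (1.0 + spent / 1_000),  # steep diminishing returns
    "cause_b": lambda spent: 6.0 / (1.0 + spent / 5_000),   # gentler diminishing returns
    "cause_c": lambda spent: 2.0,                           # flat marginal value
}

def allocate(budget: int, step: int = 100) -> dict:
    """Spend `budget` in `step`-sized chunks, always funding the cause whose
    next chunk currently has the highest marginal expected value."""
    spent = {name: 0 for name in causes}
    for _ in range(budget // step):
        best = max(causes, key=lambda name: causes[name](spent[name]))
        spent[best] += step
    return spent

print(allocate(20_000))  # most money goes to cause_a until its returns flatten out
```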

AJEYA: In practice, there is this question of “How do you do the most good?” There's this hierarchy from the broad principles you use to figure out what cause areas to be in, to what particular strategies to take within a cause area, to what organizations are best at executing that strategy, to what an organization should be doing day to day, to what an individual should be doing day to day. And in practice, we just tend to be far more opinionated at the higher levels of this tree and less and less opinionated as we go down. We have a set of focus areas, and we basically don't do funding outside of those focus areas. We've put a lot of thought into which focus areas we're working in, and it's a finite list. We have particular program officers for particular areas. We're quite opinionated at that level: the cause selection process and the cause list level. We are not managing our grantees in any kind of super fine-grained way for the vast majority of our grantees; at the day-to-day level, we're necessarily constrained to be not very opinionated. Then often, the messy negotiation is something like: as funders in, say, AI safety, we have intuitions about what kind of research seems like it's addressing the problem better or worse, or seems more or less effective or promising, and researchers in the field who also broadly care about AI safety and share our values have pretty different tastes there. How do we negotiate that? Or, as funders in the EA community, as I said, we have situations where there's a person who's a promising member of the community, and they have an idea that differs from our sense of what makes the most sense at a strategic level within a cause area. How much do we try to converge with them and give them feedback, versus bet on them, versus do something else?

SPENCER: Do you feel that expected value maximization is core to the effective altruist's way of thinking, or do you think that can be separated?

AJEYA: It definitely feels core to me.

SPENCER: Michael, I'm curious if you have a reaction to that aspect of effective altruism.

MICHAEL: I have no idea what the state space is. I just don't know how to make sense of EV calculations, except for the sort of things that we already understand.

SPENCER: Could you unpack that a little bit?

MICHAEL: I think if you'd been an effective altruist in the 1660s trying to decide whether or not to fund Isaac Newton — the theologian, astrologer, and alchemist — he had no legible project at all. That would have looked just very strange. You would have had no way of making any sense of what he was doing from an EV point of view. He was laying the foundations for a worldview that would enable the Industrial Revolution and a complete transformation in what humanity was about. That's true for a lot of the things that have been the most impactful: things only made sense after they were done, sometimes a long time after they were done. We didn't understand the printing press until after it was done. We certainly didn't understand writing or the alphabet, what those things meant, until a long time after. (I don't think I'm making much sense, based on the expression on your face.) I think of expected value calculations as things that you do when you understand your state space really well, but most of the things which have mattered the most fall outside that kind of framework.

AJEYA: I definitely agree with you that most of the things that have had tremendous impact historically were not chosen by somebody doing an expected value calculation and then picking that thing.

SPENCER: Could they have been, though? Okay, I think he's saying something beyond that, not just that they weren't. But I think he's making a claim that it wouldn't have worked if you attempted to do it.

MICHAEL: If you were Alan Turing in 1935, and you're interested in questions not about computation but about the logical foundations of mathematics, you have fairly esoteric problems. You're not trying to estimate the potential future value of a theory of computation. You don't know what a computer is. You have no notion of what you're doing until after the fact. You're inventing new conceptual categories. And that's a relatively modest example. In the case of something like the Newton example I gave, you're inventing almost a new form of reason, and you're not aware that you're doing that at the time. You can't put that in an EV calculation. He didn't think that that's what he was doing. It's very hard to say, actually, with any confidence, what Newton thought he was doing. (I should shy away from making guesses.) I think it's much easier with Turing, where you actually have a reasonable number of sources. I certainly think it's true that for a certain type of very goal-directed, project-oriented work, it makes sense, maybe, to do expected value calculations. There is a slight caveat to that: sometimes the follow-on consequences of the project dwarf the reason you thought you were doing it, as maybe CERN's main impact was actually to lead to the creation of the World Wide Web. If you're doing an expected value [laughs] calculation on funding CERN, that just didn't show up in any way. For that kind of very goal-directed, very project-oriented work, I'm somewhat sympathetic to EV calculations. There are a lot of technical problems, but a lot of the things that I think have just been most transformative through all of history are very difficult to make work within that kind of framework.

AJEYA: I should add a caveat to what I said earlier. When I said that expected value maximization was core to EA, I meant the spirit of it: at the highest level, when we set out to do EA projects, expected value maximization is at the core of that, in terms of trying not to be scope insensitive and trying not to be risk averse beyond the empirical risk aversion that would be necessary for maximizing impact. The actual practice of doing expected value calculations was honestly something that, when I got to Open Phil, I was surprised we did a lot less of than I thought we would. As somebody who's tried to build models of things and calculate the impact of grants, it's pretty hard, and it's often not worth it. At the very highest level, in terms of cause selection, there are extremely basic expected-value-flavored considerations you can bring up, such as: if you thought animals were morally valuable, then have you noticed there are so many more of them than there are humans, and they might be much cheaper to help? That's hardly an expected value calculation; it's a trivial one. What it really turns on is the philosophical question of whether you count those beings. So those kinds of very, very basic expected value calculations are a driving engine of EA cause selection. In terms of whether we should fund this person, and whether we can do an expected value calculation for that, most of the time we're not. Most of the time, we're saying: they work in biosecurity and pandemic preparedness, and there's again a very simple, basic expected value calculation there: in the future, we might discover technologies that allow us to make much more deadly viruses, and if that happens, then almost the entire world could be killed, or literally everyone could be killed, and it doesn't seem like the probability of that is tiny. That's the expected value calculation — super basic. When we're funding within biosecurity, it's a lot more about whether this person seems like they get the considerations, seems qualified, seems passionate, has a vision for this project, is a good manager, gets along well with people. The kinds of considerations you'd have when deciding whether to fund anything, really, as a VC or as a philanthropist are often where most of our brain cycles are going.

MICHAEL: Something I really just enjoy about EA is the way in which it is different from other approaches to philanthropy. That's kind of the hierarchy point again. I just appreciate, I like a lot, the fact that you are taking a certain set of principles seriously and are trying to work them out to their logical conclusion, and that then leads you to behave in ways that are different from everybody else. I'm sort of in favor of that very strongly [laughs], with the caveat that I don't want it to grow in an unbounded way; what I want is actually lots of other people to be doing the same kind of thing, but in very different ways. So, regarding philanthropy as a form of moral inquiry: I thought that was a lovely way of thinking about it, and it just made me think about what fundamental spaces one can explore in philanthropy that would lead to very different kinds of approaches.

AJEYA: One thing that seems distinctive about EA, at least within philanthropy, is that we have a point of view that is thought through. It's not that a lot of other philanthropists necessarily have a super different, distinctive point of view; it's more that a lot of other philanthropists don't invest much in the higher levels of that tree I talked about earlier, in terms of how we think about what areas to be in and what areas we should be in. The decisions at those levels are usually made in a somewhat personal way. Whereas EA foundations have a lot more in common with every other foundation at the middle and lower levels, in terms of what strategy to take in the areas we're in and what organizations to fund to execute on those strategies. Although I don't think it's exactly the same; I think we try more than most funders do to quantify some of those things at the lower levels as well.

MICHAEL: I'm interested in that claim. I'm not sure I buy it necessarily. Certainly, when I talk to people who are involved in very different parts of philanthropy, in many cases, they're coming out of an older tradition. Engagement with the basic principles, and trying to figure out what they're about, is maybe not as high just because they feel like they've worked out a lot of the consequences. But it does still seem like a lot of thought has gone into those foundations; they've just reached somewhat different conclusions. Certainly, an area that I guess I know somewhat well is science funding. There's a very large number of really interesting things which are believed by many science funders that form some sort of coherent philosophy or worldview — you may or may not buy it necessarily, but I wouldn't say that they weren't operating from a very thoughtful and, in some ways, very distinctive view of the world. I don't know that it's necessarily undergoing rapid evolution, partially because a lot of it is grounded in things people thought in the 1930s, 1940s, and 1950s.

AJEYA: Yeah. HHMI (Howard Hughes Medical Institute), in particular, feels to me like it has a distinctive point of view as a science funder. I definitely don't think EA is the only pocket of philanthropy that has a thought-through worldview. In my experience, though, the typical funder or grantmaker spends significantly more energy thinking about what to fund within the areas they're already in. Often, maybe for historical reasons, they don't have as much flexibility to rethink those areas from first principles. A lot of foundations are family foundations, and the cause areas they're in were chosen by the funder for often pretty opaque and pretty personal reasons.

MICHAEL: Public funders have the same issue, though sometimes they're constrained in interesting ways of their own, by law or by politics.

SPENCER: I would add that, if you're thinking about the highest level of the stack, the choice of whether to try to help animals or to try to help people in the far future, it does seem fairly unique to EA, if not completely unique, to start at that level of abstraction.

MICHAEL: I don't think that's right at all.

SPENCER: No, you don't think so?

MICHAEL: I think a lot of religions were started by people who were very interested in fundamental questions that are very similar, in some sense. Those questions are not necessarily live and active anymore, because they tend to get canonized in various holy books. But these were often people considering questions that amounted to who counts as a person and what kind of status they should have in the world; a lot of very similar fundamental questions. They're not necessarily live questions for those religions anymore, in the same way they are live in EA. And that's something I admire about EA a lot, the fact that those questions are still sort of up for grabs.

SPENCER: Are there more recent foundations doing that, rather than, like, Jesus or something like that?

MICHAEL: You're sort of saying, "Oh, I'm not even sure I buy that." But something like the Foundational Questions Research Institute or the Templeton Foundation, they're grappling with pretty big questions like "How did the universe begin?" and "What is time?" I am interested in this question, the one I asked Ajeya, and I don't know what the right follow-up question is: who's doing the most good at the moment? Is it going to be an EA? I had a very specific thing in mind. I was just chatting with Alexander Berger, and in that conversation I started to think about the question, "If I had been an EA, would I have done more good in my life? And would I have had a happier life?" I came to the surprising conclusion that I think I would have done significantly less good but actually had a much happier life. My immediate, instinctive thought was the other way around, but after 30 seconds I was like, "No way is that right. It's actually the other way around." That, for me, is connected with the question I asked Ajeya. I think you can plausibly make a case for Elon Musk or Norman Borlaug or Marie Curie.

AJEYA: Is Norman Borlaug even alive? Alive or dead, he's definitely not an EA, right?

MICHAEL: Well, yeah, sure. But take somebody like Elon in particular (it's such a hard thing to discuss on a podcast because people just have this allergic response to him). If you grant that maybe he makes humanity into a properly interplanetary and, ultimately, interstellar species, that's wonderful from some longtermist point of view. Yet I just don't think he would [laughs] regard himself as an EA. (I don't know.) Actually, I guess he's sort of a longtermist, but he's not a classic EA.

AJEYA: People like Elon Musk, a lot of for-profit corporations, a lot of people who are currently doing something cool in some other area but will in the future switch in and do something awesome in biosecurity or AI safety or some other cause area, a lot of people in politics who will potentially end up voting for critical things at critical times: the world is very chaotic, and as I said earlier when you asked me this question, I think it's very clear that, on non-esoteric notions of what the good is, EAs are generating a minority of the good in the world. Even on more esoteric notions of the good, I would bet on EA generating a plurality, but still a minority, of the good.

SPENCER: I also feel there's a bit of an unfair comparison there, because such a small percentage of the world is EA. A fairer question might be, "Is EA punching way above its weight in the amount of good it's causing?" rather than, "Is the single most effective do-gooder in the world an EA?"

MICHAEL: The purpose of the question is not specifically to do a comparison of any one person. But picking five people you feel have been extraordinarily effective and asking, "If they had become EAs when they were 19 or something, would they actually have done less good?" is, I think, an interesting question. If Jeff Bezos had become an EA when he was 21 or something, I think you can plausibly argue that he would have done significantly less good in the world.

SPENCER: Let's do that for you because you said you felt you would do less good if you were an EA. Why do you think that that's true?

MICHAEL: It's a very hard thing to reason about, of course, because you're making all these self-judgments, which get intertwined with your ego and self-image. In my case, mostly what I've tried to do is find public goods that nobody else thinks about and work on those, though that hasn't, for the most part, been a conscious strategy; I think it's really for personality reasons that that's what I've done. I started working on quantum computing in 1992. I started working on open science, in some ways, in the early 90s as well, and certainly full-time by about 2006. These are things that were not going to become cause areas at any foundation until many, many years after I started working on them. A thing that was very significant for me in choosing to start working full-time on open science in 2006 was that there was nobody else really in the world working full-time on open science at that time. There were so many people doing wonderful things around open data, open code, and open access. Those are related and very important, but they're not the same thing, and I felt there was something very important to do around the notion of open science. I talked to funders at that time, and I didn't have a story that made any sense at all to them. I mean, it didn't make any sense to me, either. I was simply going on a bunch of intuitions and a hunch that there was some important story to try and distill. That's a hard thing for people to fund. We relate to each other using stories and narratives about how the world works. But if what you're trying to do is discover a new narrative about the way the world works, sometimes that takes five or ten years of work. You're in a pretty difficult situation for a long time in terms of trying to make a case for what you're doing.

SPENCER: If you had to distill the heuristic you were using, looking back at it, was it something along the lines of finding something that nobody else is doing that seems important? Or how would you describe it?

MICHAEL: I'm not goal-directed in that way. What happened for me was really that, starting in the late 80s, when I was a kid, I started to be aware of the internet and to get some sense that it was going to change the production of knowledge in really significant ways, and that this might create an opportunity. Then, through the 90s, that sense kept building: watching open source projects, watching things like Wikipedia a little later, and all these changes in the way we do collective intelligence. Then I gradually understood that there was a really big problem, a clash, between that opportunity around new tools for collective intelligence and the political economy of science as it was set up. And then I realized there were ways of resolving that clash. That was something that gradually dawned on me over a period of about 20 years. Once it finally crystallized, "Oh, there's something there," in a way that I couldn't explain to anybody else but believed very strongly, I was like, "Oh, I need to go and figure out what this story is and hopefully do something valuable along the way." It was just intuition, in the sense that I didn't have a logical argument at the time. It was a judgment call.

SPENCER: If you had been using, or attempting to use, the heuristic of using evidence and reason to do the most good, if you had been guided by that principle, what do you worry you would have done instead? What's your prediction there?

MICHAEL: I wouldn't have done that at all.

AJEYA: But what would you have done instead?

MICHAEL: That's such a foreign point of view to me [laughs]. Creative work doesn't work that way. You don't know what you're going to get out until years later. You make instinctive judgments about where to go and hope like hell that some of the time it works out. Most of the time it doesn't. But sometimes things get put together in a way that matters over the long run, though usually not for the reasons you thought it would matter, either. The people who invented writing (some people speculate, at least, that it was simply to aid commerce, basically a way of keeping systematic track of who owed whom what) probably didn't anticipate the transformation that was going to be possible. That's a very large example of this. I just think that, in general, the point of this kind of work is not legible at the time. So much of the work I most admire, and think has been most important through all of history, was done by people who at the time didn't understand what they were doing. They were instead following hunches. Very often they thought they were pursuing some other project, which, in fact, in many cases was kind of a silly project to be doing. In some cases, I know they were justifying it in some other terms, but in fact they had a hunch that there was some other good reason to be doing it. That goal-directed nature, that goal-directed thinking, I just don't empathize with it. And my view doesn't seem to be a very common one, either in Silicon Valley (where I live) or perhaps amongst many EAs.

SPENCER: I think a challenge to what you're saying, Michael, could be that it can come across as just saying, "Well, do whatever you think you should do," or something like that. Most people are not attempting to do something really good for the world. Most people, if you tell them to do whatever they think they should do, will do a lot of normal things, like a lot of watching TV and hanging out with family and taking walks with friends and so on. And there's nothing wrong with that. But it does seem very, very different from the project of trying to make the world better.

MICHAEL: I don't know. A concrete example is Alan Turing working on what now seem like fairly esoteric problems in essentially mathematical logic. I'm certain he had an instinctive sense that there were much more fundamental, much more important things underlying them. When you read his paper establishing the notion of universal computation, he gets that this is tremendously important. I don't know that he realized just how important, but I wouldn't put it past him to have understood it better than almost anybody understands it even today. It's such an interesting thing that he chose to focus on that relatively esoteric problem. And he did this over and over; if he had only done it once, I wouldn't be tempted to ascribe such good taste to him. I think he had a sense for what was very important to work on, which could only be explained legibly after the fact (maybe not long after the fact, in some cases). He worked on morphogenesis in biology, which is another fantastic problem and an amazing thing to be thinking about at that time. And of course he wrote what's arguably the first paper about artificial intelligence as well, another piece of very good taste, where he couldn't possibly have understood the long-term consequences. AI is very goal-oriented now, though; it's sort of easier, I think, right now.

SPENCER: Part of what you're saying is that you're concerned that people taking an EA mindset would just never fund something like this, that it would completely miss these incredible breakthroughs that occurred. Is that fair?

MICHAEL: It does seem that way to me. With things like EV calculations and whatnot, the approach seems very oriented towards saying what you're going to try to do, figuring out how important it would be, and then making a decision about whether to go towards it. Some fraction of the important things in the world have that form. A very large fraction of the important things in the world don't seem, to me, to be like that at all.

AJEYA: What sorts of heuristics do you think an organization would have to use for you, Michael, to consider it approximately the most effective funder of things? What heuristics should funders use to, in fact, ex post, realize the maximum amount of value, if not the kinds of attitudes that you're critiquing here?

MICHAEL: I don't know. I've never been a funder.

SPENCER: It is a lot easier to critique something than to come up with an alternative, right? I think that's just a property of the world a lot of the time.

MICHAEL: I know what I would do, but it's not an answer to Ajeya's question at all. It's just something completely different. Like it's not within that frame.

AJEYA: I'm interested in what you would do if you had $10 billion and wanted to use it to do as much good as you could.

MICHAEL: Okay, I've never thought about specifically the most good you could do as kind of an objective for a funder.

SPENCER: So maybe you would reject that framing of the question?

MICHAEL: No, actually, it's great. I think it's a wonderful question. I'm just saying it's not something I've thought a lot about.

AJEYA: I guess I want to tease apart two things, because it seems to me like you're making two different claims. One, you don't take a very goal-directed view of your career; you don't set out to do the most good you can do or to maximize something. But at the same time, you're kind of saying that, by EA's lights, by that maximizing principle's own perspective, taking some different attitude actually achieves the goal better than trying to pursue it directly would. So you're saying, in fact, we agree ex post that Alan Turing had a lot of impact, that Isaac Newton had a lot of impact, by EA lights. It seems to me that, even if the expected value framework and calculation, and the interior stance of directly trying to maximize that thing, isn't right, we can look at what you believe from this history and try to construct something that would be better by EA's own lights, without rejecting the goal of doing the most good.

MICHAEL: It's a really interesting project to imagine doing. I guess that's the thing that consistently bothers me about EA: the sense that some of the people I perceive, at least, as having done the most good would be very strange fits within my current understanding of how that is judged.

SPENCER: That suggests an interesting project, which is going back to some historical cases of enormous good and asking, "What would the EA analysis have been at that time, given what was known?" It's probably really hard to do, because it's hard to forget what you know. But if you could forget what you know, it would be really interesting.

MICHAEL: It seems to be surprisingly hard, because it's so difficult to get into the headspace. It's a problem with, for example, biographies: a biography of almost any kind makes things appear sort of inevitable, as if there were a relatively straightforward unfolding of the way things worked. It is very easy after the fact to make things legible, and it's hard to get away from that hindsight bias. So it is a difficult project to undertake. In particular, all the doubt and uncertainty that people face in real time when they're doing that kind of creative work (Am I working on the right problem?) is something that affects everybody, but I think it particularly affects people working on things that are relatively illegible.

[promo]

SPENCER: Before we wrap up, I think legibility is a really good topic to turn to. In your essay, Michael, you talk about questions like, "Is celebrating a child's birthday legible?" Would effective altruists say that we should do that, or is it something that cannot be explained from an effective altruism perspective? You give a number of examples of really normal stuff that people do, where it might be hard to justify it on effective altruist grounds. And maybe Ajeya would put that in the bucket of things outside the realm of effective altruism, where you just say, "Well, some amount of my time I'm just not going to be doing that maximize-impact thing." But Michael, do you want to explain a little bit this idea of legibility and how you think about it?

MICHAEL: The term, as I understand it, comes from James Scott's famous book, "Seeing Like a State." He talks about the fact that modern statecraft is, to a very large extent, based around making as much of society as possible legible to the state in order to manage it. So you see the rise of modern statistics, measures of economic growth, ideas like unemployment: so many ways of dashboarding a society and using those measures to rationally manage it, so to speak. The point of the book, in many ways, is that there is, of course, a tremendous amount that will remain illegible to the state, no matter how much effort you put into it (or that should remain illegible, arguably). I'm using the term in a related but not quite identical sense. It's really many of the same things I was saying before about the difficulty, with a lot of creative work, of saying why you're doing it or what it's for, because we can't know that until decades in the future. Now, if you look at something like a child's birthday, there's a different kind of legibility question there. I'm probably a bit more comfortable with an example like doing local community work for your Rotary club, which, in many an EA analysis, is a less effective use of your time, or of your money as a donor, than giving to certain other causes. What's the positive case you would make, Ajeya, for something like Rotary?

AJEYA: There are a number of cases you can make. There are moral perspectives that are not utilitarian flavored, that are not about treating everybody equally regardless of where they exist in time and space, or regardless of their species. That impartial view is a very weird moral perspective, and EA is steeped in it. From most other moral perspectives, giving back to your community and giving back to people who helped you is a fundamental terminal goal, a terminal good. That feels like the case that would move most people, and I'm significantly less moved by it than most people, but not unmoved by it.

MICHAEL: Let's say you just reject that. What's the next best argument?

AJEYA: If I were to reject that (I don't know), I think it would probably be something like: this is good for the mental health of the person doing it. It creates good. It would be hard for me to argue that, from an impartial, utilitarian perspective that values other beings equally regardless of time, space, or species, it was the most good that person could do with that particular hour. But I do think it does good. I think it makes them feel better, and it helps the people they help, and those people are valuable. I'm not sure that, if I were to reject the "it's just good to give back" perspective, I have a credible case that Rotary club work is, ex ante, the most good somebody could do. Ex post, there are a lot of random ways it could be the most good somebody does. You could end up helping somebody who turns out to be the next Norman Borlaug, and get their career started through your Rotary club work. But that's an ex post story, not something you could reliably expect to happen on any given Rotary Club excursion.

MICHAEL: This is very helpful for me. Can I just riff in a slightly different, but related, direction? It seems to me that there's some value in having exemplar societies, example communities that build certain types of institutions and thereby expand humanity's repertoire of available institutional arrangements. That then becomes something that can be copied, shared, and used elsewhere. The discomfort I have in trying to make this kind of argument is that it's a very self-serving argument. It's one you can use to justify (I guess the famous example in Peter Singer's book is the Make-A-Wish Foundation) spending hundreds of thousands of dollars to make some kid with a terminal disease a little bit happier for one day, which seems like probably a poor use of money. The case you might try to make for it is, "No, no, it's great. We want to have examples of societies which care that much. That's actually valuable as an exemplar, almost as a reference class." But intrinsically, I want to reject that. I want to say, "No, that's too self-interested. That's just taking what makes people feel good." With things like Rotary, though, I feel there's something there.

SPENCER: Another example that I think is interesting is friendship. I have been in conversations with EAs where we're talking about friendship, and friendship is something that almost every society, and most people in those societies, value; it's considered important. But there's this question: how do you justify it on effective altruism terms, or should you even try, or is it just outside the scope? I have heard EAs try to justify it by saying things like, "Well, having friends is going to make me more effective as an altruist." I've heard these kinds of arguments, and I don't necessarily think that most EAs justify it that way, but I definitely see a strain of that, where these normal activities are recast through an EA lens. I have to wonder sometimes, when I hear people doing this, whether there's an element of rationalization going on, like, "Well, you feel like you're not supposed to be devoting time to frivolous things like friendships, so you have to justify it in terms of maximizing impact."

AJEYA: I definitely think that in these cases there's rationalization going on. It's complicated, because to the extent that you're trying to have a career doing some complicated knowledge-work type thing that you think will make the world a lot better, being comfortable in various ways and being happy does impact your productivity; it usually does make you more productive. So there's this blurry middle ground, where there's not nothing to the argument that you need friends or a partner or a nice house or whatever in order to be more effective. But I think the epistemically healthier way to think about it is just that no human is perfectly altruistic, or maybe altruism is a loaded term, since you might think of friendship as a form of altruism. It's definitely clear that no human is perfectly utilitarian, impartially valuing all beings equally across time and space and personal relationships and species and so on. That's a weird thing that some humans value some amount and are trying to act toward. EA, in particular, is trying not only to act toward it but to cultivate it in oneself as a virtue and so on. But that still leaves the fact that most EAs value many things other than that. And I find I think more clearly about it when I just say, "A lot of me is selfish, and a lot of me has altruistic or moral-flavored intuitions or impulses that are not this maximizing, impartial, vast-scope kind of altruism." I personally used to have just two buckets: altruism and selfishness. And it made me uncomfortable to admit how selfish I was, because some of the things I felt a pull to do, like calling my parents or helping a friend when they're depressed, were not fun [laughs]. But I felt kind of resentful, because with only those two buckets, I had to categorize them in the selfish bucket: they definitely weren't the impartial, utilitarian-ish altruism thing. Now I just think, "Yeah, I have a lot of values. Some of me values this utilitarian project, and a lot of me doesn't. The other mes are often in the driver's seat, and that's what friendship is. That's what a lot of things are."

SPENCER: When I hear you say that, there's an element that almost seems like you view it as a moral failing, that part of you that wants friends above and beyond the amount that makes you more effective as an altruist. There's a certain amount of friendship that might actually help you as an altruist, but then there's the extra amount of friendship that you want just because you want friendship. Do you view that as a moral failing? I do think that some effective altruists do; I've heard people say that.

AJEYA: I find it really complicated, and I'm very conflicted about it. On an epistemic kind of fundamental philosophical level, I'm not a moral realist. I'm a moral anti-realist. I don't think there's anything that God wrote down in the atoms about what morality is. On an intellectual level, it's easy for me to say I have lots of different values. One of them is this utilitarian thing, and the others are others. I kind of aspire to have that relationship with myself because it seems it would be better for my mental health, and it just somehow feels aesthetically better to have a more settled sense of self. But I definitely have conflict over it. The part of me, that utilitarian voice, is not just utilitarian about the external world and what to do with its time when it's in the driver's seat. It's also upset that I'm not the kind of person that has that part of me in the driver's seat more and does view that as a moral failing. And that's just like a tension.

MICHAEL: I really enjoyed that articulation. When I've talked to EAs about this kind of thing, they've often found it, at certain points, somewhat distressing. Hearing you use terms like "tension," I'm wondering if I was thinking about it wrongly as a bad thing. Maybe it's actually fine to spend most of your life worrying a little bit about "Should I be doing a little bit more of this good, or focusing a little bit more on myself or my community?" Maybe I should change my mind a little there: if it's at the appropriate level, that tension might actually just be a really nice feature of EA.

AJEYA: I sometimes think of it as a Laffer curve of how hard you are on yourself. The Laffer curve for taxes says that as you increase the tax rate, the amount of money the state raises first goes up and then goes down: up because you're increasing the rate, and then down because people are much less incentivized to work. I think there's a similar thing with how much you push yourself, how much the morally motivated parts of you push the rest of you to do things. For a while, pushing makes you do more, and then at some high level, the verbally thinking, morally motivated part of you doesn't actually have perfect control over the rest of you, and the rest of you just learns that doing what this part says is really not great for it. You lose motivation and so on. I think the right amount to be hard on oneself is not zero, and this is the case not just for the EA project but for any hard project, like starting a startup or writing a novel, anything people do where a part of them that's high-minded wants to do it, but there are lots of days where they sit down in front of their laptop and want to watch YouTube instead.
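As a purely illustrative sketch of the inverted-U shape Ajeya is gesturing at, here is a toy model; the quadratic form and the numbers are invented for the example and aren't meant to model anyone's actual psychology.

```python
# Toy inverted-U ("Laffer-curve-like") relationship between how hard the
# morally motivated part of you pushes (self-pressure) and how much gets done.
# The quadratic form and the numbers are invented purely for illustration.

def output(pressure: float) -> float:
    """Output rises with self-pressure at first, then falls as motivation erodes.

    pressure ranges from 0.0 (no self-pressure) to 1.0 (maximum pressure).
    """
    return pressure * (1.0 - pressure)  # peaks at pressure = 0.5, falls to 0 at the extremes

for p in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(f"pressure={p:.2f} -> output={output(p):.3f}")
```

The only point carried by the sketch is the shape: zero self-pressure yields little, moderate self-pressure yields the most, and maximal self-pressure yields less again.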

MICHAEL: I sort of have the same experience that you're just describing in my work, Ajeya. But I have met people who claim to, and I believe them, they just find what they do very easy, and they're phenomenally—

AJEYA: So jealous of those people! [laughs]

MICHAEL: [laughs] In some cases, really famous. And maybe it's actually connected. It's just like, “Yep, I'm gonna sit down and write another book,” and “Wow, yeah, I really enjoyed it. Twelve hours a day writing that book, it's so easy to do.”

AJEYA: I think there's a tension between selection and causation here. I was feeling that a lot with your examples of famous scientists earlier: people who become famous novelists are way disproportionately likely to be the kind of people who just find it fun to spend 12 hours a day writing. But for any given person who has the ambition to be a novelist, if they're a 7 out of 10 on finding it super fun to write a lot, and they're hard on themselves and push themselves to act as if they were an 8 out of 10, that makes them somewhat more likely to be a successful novelist. So when you were naming these famous scientists and famous creatives, I got the sense that the counterfactual you were imagining wasn't very representative of most people in EA. If I hadn't discovered EA, I would probably have gotten a job that I liked fine, and I would have been a lot more relaxed, but I would have accomplished a lot less in many senses of the word, not just the narrow EA sense.

SPENCER: Ajeya, I want to ask a follow-up question about something you mentioned earlier. You said that you don't believe in objective moral truth. Yet there's this part of you that seems to say, "You should be doing more. You should maximize utility more," and there's some sense in which something like friendship is not maximizing utility in the world. What confuses me about that perspective is: if you don't believe there's an objectively right answer to what's good, then why is that one sense of what's good winning out over the others?

AJEYA: That's a good question.

SPENCER: The part of you that wants to invest in family or friendship, why is that less important?

AJEYA: I think it's that most of the other impulses I have toward doing good in other ways, toward other notions of the good, are much less well-developed intellectually, much less verbalized, and, I think, much less able to be well-developed or verbalized. In themselves, they're not maximizing. The part of me that wants to be a good daughter has never once felt like it wants to be the goodest possible, most parent-approval-maximizing daughter or something. Whereas one part of me really does just deeply feel that saving another life is saving another life: it doesn't matter that you've saved 10,000 lives before, it doesn't matter that there are 100,000 lives you still can't save, that extra unit of push yields an incredibly important and meaningful gain. That part of me feels more like she could benefit from more mental resources than the other parts of me, which are more satisficing.

SPENCER: I can see the pull of wanting to give more credit to that part of you that wants to maximize global utility. But if we step back and just view you as a brain that has these different goals, if there's no objective moral truth (we're talking about what's going on in your brain here), and let's say we adopt an Internal Family Systems perspective for a second, where there are these different sub-agents of you that care about different things, should you be letting one sub-agent dominate the others? How do you feel about that?

AJEYA: I guess the thing that I said earlier was meant to be a psychological account rather than a moral account of why it is the case that this particular sub-agent is more pushy. There's no executive me that is like managing all these sub-agents necessarily. There sometimes is, when I'm being self-reflective. But the verbal, analytic reasoning-y parts of me are pretty tied up with this utilitarianism and EA thing, so that gives it a bit of a leg up. In terms of whether I should be letting one sub-agent dominate another, it feels like there's just this weird, delicate mental ecosystem. I'm not in favor of zero domination or something, like I think pushing yourself is a Laffer Curve, and pushing yourself a non-zero amount is good, or the parts of you that have these abstract values that are not instant gratification-y pushing the other parts of you some amount is probably good, and some amount more than that is probably bad. It was interesting to me, Michael, that in your notes, you said that we need these strong philosophical foundations or principles to counteract the “maximize the amount of good you do principle” to create this balance where people aren't burning themselves out. My view is that it's not really possible to create such kinds of philosophical principles that are so strong. It's just that everyone kind of has to negotiate what their internal agents are like and what they actually want and how they negotiate with each other, or whatever it is. And you just kind of have to live with how that can't be principled.

MICHAEL: I wasn't saying we have to or anything like that. It's a question about whether or not there's some different set of principles. I think a really interesting example is the Mormon Church tithing at 10%, and their point of view seems to be one of delegating authority to a superhuman institution, where it becomes sort of a social covenant. And you can feel very comfortable about it because people, for the most part, actually feel pretty comfortable about doing that with institutions that they really trust and they're very bought into. You could imagine a different sort of Mormonism that had started with the idea that you should tithe as much as possible to the Mormon Church, and people getting really upset and very uncomfortable. “Should I do another 1% this year? How do I feel about that? Am I going to have to stop having kids because I'm not going to be tithing enough?” And that's an example of, I'd say, a social solution. But that's not a strong enough term. I don't think it does justice to the way people actually do relate to the most important institutions in their lives. It's possible that there's some principled way. Actually, in some ways, the way we do it with governments and taxation is some funny mix of principles, political compromise, [laughs] some level of trust in our institutions, maybe rather grudging trust sometimes. There's an argument about how progressive or regressive taxation should be, and that's an argument about principles. It's possible to have some sort of very strong beliefs about that, and those will be, to some extent, reflected in a political compromise about the way taxation works. Those were the kinds of examples I was thinking of as maybe a different way of getting some purchase on this question, which doesn't just rely on individuals having to chat with their friends and maybe feel very uncomfortable about it.

AJEYA: For what it's worth, a good deal of that does exist, though in my experience it's different in different corners of EA. The direct analog to the tithing thing would be the 10% Giving What We Can pledge, and a number of people who consider themselves EAs are, in fact, very comfortable with that. EA has started to emphasize careers much more than donating, and things are much messier and murkier there. But every EA organization whose HR policies I have some familiarity with tends to emphasize working a number of hours a week that is normal for jobs in general, having pretty generous vacation policies, and so on. In particular, at Open Philanthropy, I've never felt pressure from the institution to work more. In fact, I end up working a number of hours that is suspiciously similar to what American society at large has decided one should work. So I think there are some of these kinds of social solutions in play here.

SPENCER: We have just a couple of minutes left, so I want to recap quickly. I'm curious for your quick takes on what you think the core of your disagreement really is.

AJEYA: It seemed to me like there were two fairly different strands. One is the social or emotional impact that EA has on its adherents, this demandingness and internal tension and so on. (I don't know how much that's a disagreement.) The other is that most of the projects, people, and institutions that had the most impact historically wouldn't necessarily have been discovered or promoted by this explicit, goal-directed, EV-maximizing type of approach, which is a reason to take a different approach. On the first: yeah, EAs stress out about this; I stress out about it. I think it's worth it, it's something I choose to stress out about, and some amount of tension is good while some amount is too much. There are some social structures in place, and maybe they could be stronger. On the second, I feel more inclined to get into object-level questions, which might involve doing a lot of macrohistory or something, to really figure out where we disagree. But at the highest level, I'm interested in how this could be put into action: if we think this pattern shows up in previous things that were super impactful, what decision heuristic could we introduce to try to catch more of it?

SPENCER: Well said. Michael, you wanna give final words on what you think the root of the disagreement is?

MICHAEL: I didn't conceive of myself as disagreeing with EA in general or with Ajeya in particular. That's not my motivation, and it's not really how I think about the world. I don't track disagreement in that way. I'm more interested in understanding.

SPENCER: It seems like EA is a foreign way of looking at things to you, and there's some part of you that rejects that way of looking at things.

MICHAEL: I'm more interested, I guess, in what I see as the creative, interesting aspects of EA, and in riffing on ways it might be done differently. I'm very interested in this issue Ajeya has described of feeling that tension in yourself, how that's changed over time, and whether or not there's some other conception which gives you all the benefits of the "do the most good you can" framing without exacting the same price. That's the kind of thing I find very motivating. This framing of disagreement is something I guess I just, in general, find a little challenging about, maybe not so much the EA community, as the adjacent rationality community. I realize I'm not answering your question at all; I'm sort of refusing to answer, and that feels ungenerous.

SPENCER: [laughs] We'll end the podcast with you refusing to answer. That's fine.

MICHAEL: Mostly, what has struck me about Ajeya's responses, and also the responses of many other EAs who have read what I wrote, is the extent to which they're willing to take criticism seriously and often try to reflect it back, or even do the steelmanning thing. On many occasions I've been struggling to make an argument, and Ajeya, or you, Spencer, have made it considerably more articulately than I was capable of doing. With the people and communities I've most enjoyed, I often feel like I don't really begin to understand them until I've spent tens or hundreds of hours talking with them. I got a lot of that sense in talking with both of you today: Ajeya has thought much more deeply than I have about many of these issues, and I'd want to get to the bottom of that. I just think that was really interesting. So I'm not sure I'm comfortable describing what our disagreement is. I don't know what our disagreement is. [laughs]

SPENCER: That's a perfect place to leave it. Thank you all so much. This is a really great conversation, and you really stuck it out for over two hours. I really appreciate that. Thank you both so much for coming on.

MICHAEL: Thank you.

AJEYA: Thanks so much.

[outro]
