CLEARER THINKING

with Spencer Greenberg
the podcast about ideas that matter

Episode 206: What should the Effective Altruism movement learn from the SBF / FTX scandal? (with Will MacAskill)


April 16, 2024

What are the facts around Sam Bankman-Fried and FTX about which all parties agree? What was the nature of Will's relationship with SBF? What things, in retrospect, should've been red flags about Sam or FTX? Was Sam's personality problematic? Did he ever really believe in EA principles? Does he lack empathy? Or was he on the autism spectrum? Was he naive in his application of utilitarianism? Did EA intentionally install SBF as a spokesperson, or did he put himself in that position of his own accord? What lessons should EA leaders learn from this? What steps should be taken to prevent it from happening again? What should EA leadership look like moving forward? What are some of the dangers around AI that are not related to alignment? Should AI become the central (or even the sole) focus of the EA movement?

William MacAskill is an associate professor in philosophy at the University of Oxford. At the time of his appointment, he was the youngest associate professor of philosophy in the world. He also cofounded the nonprofits Giving What We Can, the Centre for Effective Altruism, and 80,000 Hours, which together have moved over $300 million to effective charities. He's the author of What We Owe The Future, Doing Good Better, and Moral Uncertainty.

JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Will MacAskill about Sam Bankman-Fried and FTX, the speed of idea creation and evolution, balancing the considerations around free speech, post-AGI governance, and the future of Effective Altruism.

SPENCER: Will, welcome.

WILL MACASKILL: Thanks for having me on, Spencer.

SPENCER: It's been a little while since you were on last time, and I'm excited to talk to you again.

WILL: Yeah, I'm excited to have this conversation.

SPENCER: Tell us, what are you working on now and why are you excited about it?

WILL: I'm working on a new research project, which I'm currently referring to as 'post-AGI governance,' or the governance of explosive growth. The core idea here is that I think that sufficiently advanced AI could lead to much faster rates of technological progress than we are currently used to. This, I think, will pose an enormous number of challenges. The way to get an intuition about this is just to imagine all of the technological challenges that the world would have to face over the next five centuries — just imagine that intuitively or something — and then imagine if we had to deal with all of those challenges over the course of just a few years. I think that'll be very hard by default. Now what are some of those challenges? Well, the most widely publicized at the moment is the risk that AIs will be misaligned, and that's one important challenge. But I don't think it's the only one. Others include, for example, that we might be creating digital beings that have moral status themselves, and so there are hard questions about what rights should they have, what moral consideration should they have. We might be creating new weapons of mass destruction, new extremely dangerous technologies that we may not even have properly conceived of yet, just in the same way that nuclear weapons hadn't been conceived of at the turn of the 20th century. There are also going to be political challenges as well. In a world where the police force and the military are automated and, in principle, can be controlled by a single person, that greatly increases the potential payoffs and stability for dictatorships or coups. That could be really destabilizing to democracy as well. Through this whole time period — this three-year period — we will be getting new intellectual and conceptual developments as well. We will be learning new, very surprising things that could really change how we should be thinking about all of these other issues. And it just seems to me, wow, that's such a short period of time. If we do get this period of ultra-accelerated technological progress, by default, that doesn't go well. An intuition for this is thinking about the last 500 years of technological and intellectual progress. Imagine if, in 1500 AD, European nobility suddenly had to deal with 500 years of technological development occurring over the course of just a few years. I think that would have gone really quite badly. And at the moment, this governance question, for the period after the point in time that we get transformative AI, seems to me really quite neglected. And so I'm hoping to do work to try and help us figure out, at least a little bit, what a good governance mechanism looks like over that period, and also maybe just alert people to the idea that this is something that we should be preparing for.

SPENCER: I think that's a fascinating topic. But if you're up for it, I'd love to talk about something else first and we'll come back to that at the end of the episode. What I'd like to talk to you about is something that I think will be in a lot of people's minds, which is that there was this huge FTX debacle with Sam Bankman-Fried now being convicted of a number of charges like fraud. People know that FTX and Sam Bankman-Fried were linked, in some ways, to the Effective Altruism community. And as you are one of the intellectual leaders of the Effective Altruism community, I know many people have wanted to hear your thoughts on this, and you haven't had much chance to give those thoughts. I imagine you must have been under a bunch of restrictions about what you could talk about. So if you're up for it, I'd love to dig into the FTX topic now, if it's something that you can talk about freely.

WILL: Sure. Yeah. I'm more than happy to talk about that. And, yeah, you're right. I've been very keen to talk about this and do some public discussion and analysis, really, since the collapse happened. And I spent quite a good chunk of the last year writing and rewriting blog posts on the topic. And, yeah, unfortunately, there was a lot going on. But the main thing was this investigation from Effective Ventures, the nonprofit I helped to start. The FTX collapse was this horrible thing; enormous numbers of people were harmed. And it was this person, as you say, who was very closely associated with the Effective Altruism movement, associated with me, had donated to a lot of EA organizations, and who, it turned out, had committed really major fraud. And so Effective Ventures commissioned a law firm to do an investigation into the relationships between the charity and FTX. And they really didn't want anyone closely involved to be speaking publicly while that was going on. And me being on the Board of the nonprofit meant that anything I said would potentially have counted as the charity saying something, so they quite firmly wanted us not to be doing that. But that investigation is now concluded, thankfully. And so I'm now more able to talk about this stuff, which I feel glad about.

SPENCER: If you want the full details of the FTX catastrophe, you can go to Episode 133 where I have a long discussion about it and I interviewed a number of people that were involved. But for those that don't want to dig into it now, could you just give us a very quick overview? What are some of the basic facts that everyone agrees upon about what happened? And then once we talk about that, we'll talk about how it connects with the Effective Altruism community and your role in it and so on.

WILL: Sure. So Sam Bankman-Fried and a few others set up two companies. The first was Alameda Research, which was a cryptocurrency trading firm. That was set up at the end of 2016, I think. And the second was a cryptocurrency exchange, FTX, set up in 2019, which allowed other people to trade cryptocurrency on the platform. And these companies seemed to be immensely successful. By the end of 2021, FTX was valued at $40 billion. Sam Bankman-Fried was, at the time, I think, the richest self-made billionaire under the age of 30. So these companies seemed to be doing extremely well. But then in late 2022, a balance sheet from Alameda got leaked. That caused a loss of confidence in FTX, and a lot of people started withdrawing their deposits from FTX. And it turned out they weren't able to do that. And that shouldn't happen. You can get a run on a bank, where a bank is not able to give out all of the deposits it holds because it has lent out those deposits. But that shouldn't happen for an exchange; an exchange is not like a bank in that way. And, in fact, Sam had been assuring everyone on Twitter that FTX did not operate in that way. Given that people weren't able to take out their money, in the days that followed this rush of customers trying to take out their deposits, it started to seem very likely that there had been a misuse of customer money, and lies told about that fact. And so this ended up being enormously harmful. The company filed for bankruptcy. And it looks like over a million people, in fact, were not able to get their money out of their accounts and lost money as a result. For some, this was most of their life savings. Soon after FTX filed for bankruptcy, two people who were high up at FTX and Alameda — that is, Caroline Ellison and Gary Wang — pleaded guilty to fraud. Shortly after that, a third person who was high up at FTX, Nishad Singh, also pleaded guilty. Sam did not plead guilty, but there was a trial towards the end of last year, where he was found guilty by a jury.

SPENCER: What was your relationship with Sam like? I've seen articles that report that you caused him to get into the Effective Altruism movement. I actually don't even know whether that's true or not, or what sort of interactions you had with him around Effective Altruism.

WILL: Yeah, I think it's true that I was his first in-person contact and entry point to the Effective Altruism movement. The story is that, in 2012, I'd recently set up an organization called 80,000 Hours, which advises people on how to have a big impact with their career. And I was going around giving a bunch of talks about what careers you should pursue if you want to have a big social impact. And I was giving a talk in Boston — either at Harvard or MIT — and someone put me in touch with Sam, I think because he had been quite active on a forum for people who are interested in utilitarian ethics, taking part in discussions around career choice and around the idea of earning to give, for example, which is the idea that one way of doing good is to deliberately take a higher-earning career so you can donate a large fraction of your earnings to good causes. And so Sam and I met up for lunch just before the talk I gave, he came to the talk, and we talked about career choice. He was interested at that time in maybe earning to give, maybe politics, maybe journalism, I think.

SPENCER: And what were your views about him at the time?

WILL: The most striking thing I remember at the time was that he told me he'd been brought up in a consequentialist household. Both of his parents are Stanford law professors, and they are both consequentialist, or consequentialist-leaning. And he said that was the moral view in which he had been raised and educated. And that was really striking just because I'd never heard of that happening before. But then in terms of how I felt about him at the time, he seemed very thoughtful, very nerdy [chuckles]; I remember nerding out about some technical aspects of moral philosophy. He also just seemed very morally committed as well. At the time, he'd been, I think, quite involved in vegan advocacy, and that was kind of his main focus. But he was certainly taking the ideas seriously, too. He also just seemed very autonomous. Six months later, the second time we met was at a vegan conference. I was used to people hearing the ideas and thinking that they're pretty compelling, nodding along, but not really then doing anything. Whereas when I met him again six months later, he told me that he'd actually got an internship at this trading firm, Jane Street, and that just seemed impressively autonomous to me. It's not only that he was hearing the ideas, but he was actually taking them seriously and willing to make choices in his life on the basis of trying to do more good. That's how it seemed, at least, to me at the time.

SPENCER: I know he was interested in earning to give and was planning to give a bunch of his money away. Do you know what causes he was giving to at the time? Was it animal advocacy or global health causes?

WILL: His main focus was animal welfare, in particular the suffering of animals in factory farms. But then, when he was at Jane Street, he seemed to be acting on his giving plans. I didn't know how much he was giving, but plausibly 50% or something. He was also donating to organizations that promoted Effective Altruism. So I think that included 80,000 Hours and the Centre for Effective Altruism.

SPENCER: I know that early on, after Alameda was founded, before FTX came into existence, there was that blow up, where some of the people initially involved suddenly left, and there were accusations in both directions. And some people have called this out as kind of a warning shot that should have made people aware that maybe something was amiss. What was your general conclusion at the time when this blowup occurred between the early Alameda staff?

WILL: This is definitely an important part of the story. And now, looking back in hindsight, I really do think what happened there is a foreshadowing of what happened later. In 2017, there was a management dispute at Alameda; I think the company had started to lose money and a number of people were unhappy with how Sam was running things. And so they go to him and say, basically, "We think the company is going to collapse under your leadership. Either you're out or we're out." I wasn't involved in the dispute and I also didn't take sides. I was mainly in touch with the people who left rather than Sam. But there was certainly a lot of finger-pointing from both sides, and certainly also comments from them about Sam that, for sure, are also foreshadowing, looking back: the idea that he was reckless, that he was unwilling to accept a lower return and instead wanted to double down; that he wasn't interested in management was another thing they said. It all just seemed very messy; it was hard to know exactly what happened. I just came away thinking, "Okay, maybe Alameda was just this big folly project, and it was just gonna fall apart."

SPENCER: Yet, despite people's expectations at the time, it seems like it ended up starting to do really well after that. And then Sam goes on to found FTX, which seems to potentially be doing even better.

WILL: Yeah, that's right. Despite what everyone thought, far from collapsing, both of them ended up becoming organizations worth many billions of dollars, I think.

SPENCER: And by the end of 2021, when things seemed to be going very well, what was your connection to Sam at that point? So I imagine he'd already begun to be a major donor, and there'd be reasons to try to get to know him better. And in addition, there might be reasons to start coordinating on projects with him.

WILL: He wasn't a major donor by that point. I wasn't really in touch with him during 2020. I don't think we were in contact then. But then in early 2021, it starts to become public just how successful FTX has been, and Sam's net worth is now measured in the billions. And so, yeah, I reached out to him in early 2021. And mainly, the thing that I'm interested in is: okay, perhaps he was donating a significant percentage when he was at this job at Jane Street, but is he gonna still follow through with his giving plans now that he's so wealthy, or has the wealth changed him? Maybe he just wants to keep it. And when we spoke, he said that, yeah, he was as committed to giving that money away as ever, and planned to give away essentially all of it. He said at the time that that would probably be some years out. And that's what you would expect from someone running a very fast-growing company. Normally, they start off building the company, and then afterwards start donating once it's a bit more mature. But then by the end of 2021 — things are opening up after the pandemic, and I go to North America to reconnect with a bunch of people — Sam, by that point, has put Nick Beckstead in charge of his foundation. And so I met up with Nick and with Sam in order to discuss the strategy for the foundation. And at that point, it looked like, "Oh, he's actually going to start scaling up his giving in a larger way earlier on." He suggested that he was planning to give something like $100 million over the course of the next year, and then aiming to scale up to giving many billions over the years to come. At that point, I started talking with Nick about strategy for the foundation. Given the sheer amount of money he was planning to give, getting that right seemed enormously important from the perspective of the big problems in the world. I'd worked with Nick for many years and felt like I was adding quite a lot of value in the conversations we were having, and so we discussed the idea of me becoming an advisor — unpaid and part-time — to the foundation. We tried that out starting in about January of 2022, and then I had that role of advising the foundation, or the Future Fund in particular, over the course of 2022.

SPENCER: How well did you ultimately get to know Sam? Would you say that you knew him very well? Had you spent a lot of time together, by the time this all played out?

WILL: I knew him reasonably well. We maybe met one-on-one a handful of times, half a dozen times, and then more in group settings. There'd be group discussions on philanthropic strategy, cause prioritization, things like that. I don't think there was a case where we ever hung out as friends. The focus would really be on discussing charity and the foundation and having an impact.

SPENCER: I'm also wondering, those people that left Alameda early on after that blow up, were they sounding an alarm bell about this as things got bigger and bigger? Or what were those people saying?

WILL: Given the blow up, yeah, you might easily think that people at the time were really warning that maybe 'FTX is just a house of cards' or something. But that wasn't my experience, because I was still in touch with lots of former Alameda people, and that's a few years later now, and by this point, FTX and Alameda were really thriving. FTX had had VC investment from really some of the leading VCs in the world, like Sequoia and BlackRock. And so the general attitude from the people I was in touch with was really either that their initial view of Sam and Alameda had actually just been wrong, like they misjudged things, or that Sam had learned lessons and matured. Like I said, I was in touch with some of the core people who'd left, and they were doing things like trading on FTX. One person had most of his life savings on FTX. One of the investors who was most involved in the dispute, and who had put a lot of his money in, emailed Nick to congratulate him on getting the job at the foundation. There was definitely a sense of, 'is he actually going to follow through with his giving,' which was a worry that I had as well. But the overall impression was just that, given how well the companies had done, including being vetted by these leading VC firms, either the early impressions were mistaken, overblown in some way, or those were the kind of teething troubles of an early stage startup, and Sam had gotten a lot better at running a company since then.

SPENCER: And why were you worried that he might not give? Is that just the normal worries you have for anyone who commits to giving or was there some particular reason you worried about it with Sam?

WILL: Oh, yeah, that was just the normal worries with people in general. I've met a number of wealthy people who perhaps talk a big talk about giving, but then do not in fact follow through. That's, I think, by far the normal human behavior. And so I thought maybe that would happen to Sam, too.

SPENCER: Once you started advising the foundation, what did that involve exactly?

WILL: It was primarily about high-level strategy, things like what projects should be run. What became Future Fund did a few big projects, like a regranting program. There was an open call. There was an attempt to seed new philanthropic projects that had the potential to productively absorb quite large amounts of money. Similarly, there were questions about how much to invest in different cause areas, how to think about AI, pandemics, nuclear war, and so on. I was also involved in personnel questions like whom to hire and how fast. Sometimes if there were issues with an employee at Future Fund, I was one person they could talk to. I was also involved in the naming of Future Fund. Sam was very keen for everything just to get called FTX Foundation. I thought it was a bad move to be tying the foundation to a company at all, but especially to a crypto company, in the same way that I think that if Open Philanthropy were called the Facebook Foundation or the Asana Foundation, that would be a bad move. And so I made a push to have those brands separated, and that's what gave birth to Future Fund as a branch of the foundation that was explicitly focused on reducing existential risks.

SPENCER: And then the other thing that some people have claimed is that, when Alameda had that original split early on, where some people in the Effective Altruism community fled, you had somehow threatened one of the people who had left. What was that all about?

WILL: I felt pretty distressed when I read that, because I certainly didn't have a memory of threatening anyone. And so I reached out to the person whom it was about, because it wasn't that person saying they'd been threatened; it was someone else saying that that person had been threatened. There was a conversation between me and that person that was kind of heated, tense. But yeah, they don't think I was intending to intimidate them or anything like that. And it was also, in my memory, not about the Alameda blow up. It was a different issue.

SPENCER: And you already mentioned what your initial impression of him was. How did your impression of him change as you got to know him better? And pre-FTX catastrophe, what was your impression of who he is as a person?

WILL: There were a bunch of big changes, in various ways. One was that he was socially smoother, in a way. [chuckles] I would not have described him as socially smooth, but he was doing things now like having all sorts of external business meetings, and so on. And that seemed to go well, in a way that I think might have been harder to imagine from the Sam I first met. Another thing is he was just a lot more entrepreneurial, in the sense that he was definitely keen on 'do big things,' 'keep moving,' 'don't waste time,' 'be more ambitious,' also a bit, again, in a way that is like foreshadowing; definitely a bit of an anti-bureaucratic vibe as well. And then there's the stuff that I kind of got to know over the course of the year. One big thing was just — and this is definitely going to be a theme as we talk about this — the arrogance, the hubris. Looking back, I just really think he got corrupted by his own success. I think early Alameda, during the blowup, just should have failed. That was like the cosmically correct thing to happen or something. But instead, it seemed, maybe to him, that he'd kept taking these gambles and kept being really successful. And so he really had this attitude of: he is very smart, he knows more than other people. If the consensus disagrees with him, that does not move him; whether he was responsive to arguments is another question. And then, relatedly, there was definitely a sense of him as the big boss, kind of Mr. Status, in a way that's probably quite common in multibillion dollar startups. But, you know, he was very time constrained, and it wasn't clear to me how much of that was just the nature of someone with very little time.

[promo]

SPENCER: I met Sam in person a couple of times, and the vibe I got from him was: he seems very smart, he seems overconfident — he would state things extremely confidently — and he seemed fast-thinking, like he thought on his feet very well. I didn't really get much of an impression other than that, so it seems compatible with what you saw, though you obviously had a more fleshed-out, nuanced perspective.

WILL: Yeah. That all seems accurate to me.

SPENCER: Now were you aware of any signs whatsoever that he might be committing fraud?

WILL: No. This is really something I spent a lot of time thinking about. When the fraud happens and news of it is coming out, there's just... yeah, such a mix of emotions. Obviously, there's horror. It turns out it's over a million people who lost money. It's just a huge harm. I just had so much confusion; it really seemed incompatible with my impression of things. So I really spent a lot of time trying to think about what signs I might have missed. Looking back, the bigger signs were: one was, midway through the year, Sam said to Nick (who was running the foundation) that he'd like to donate less than he had previously been planning. But 'less' was still 100 million over the next year or something. It was still a very large amount of money. [laughs]

SPENCER: It wasn't a very strong signal, right?

WILL: Yeah, there was clearly the crypto downturn, so Sam's net worth was valued at half what it was. It was a bit more of a crunch for the crypto world. The thing that maybe struck me as weirdest, the most odd, was on my last trip to the Bahamas, which was September of that year. Michael Lewis, the financial writer, who then wrote a book about Sam, was out there, too. We had a conversation, and he mentioned that Sam had been fundraising for FTX from the Middle East, like the Saudis. And I didn't like that; I have issues with taking money from the Saudis. It also just struck me as a bit odd. And it was enough that... of the people involved, I got to know Nishad Singh probably the most. He was head of engineering at FTX, one of the people who pled guilty. I talked to him and just asked him about raising money from the Saudis: is anything wrong with the company? And he said, "No." It wasn't just a passing comment; we talked about it for a little bit, and he said "No." And now looking back, it's a bit unclear to me when Nishad knew what, but it was certainly very incorrect. That's the thing that most made me think that there was something wrong with the company. Beyond that, I've really tried to think, and no.

SPENCER: It's still pretty subtle.

WILL: Yeah, it's kind of wild; you really would think that, given what was happening, there would have been lots of signs, clear signs. And I really thought, maybe I'm just an utter moron. Over the last year, I've felt a bit reassured; it seemed like other people who were also very close also did not pick up on any signs. For example, Michael Lewis, who, you know, is one of the leading financial writers — and was just following them around, had access to anything he wanted really — reported that he just saw no signs. The one thing he said is, "Oh, the amount of money they had, maybe that was suspicious." But he also just said he never noticed any difference in their attitudes over the course of the year, like differences in their personalities. And then similarly, if you look at employees of FTX, too, even people who were really quite close to the inner circle, but not the people who pled guilty, they generally had quite large amounts of money on FTX. So I guess they also just thought it was a very stable company. And so I totally understand how people looking at this might look at me and think, "Oh, Will visited the Bahamas. He must have known." But as far as I can tell, there weren't clues I should have spotted.

SPENCER: What about his personality? Because this is something actually people have brought up with me. They have said, "Well, wouldn't you know... Okay, maybe you meet Sam a couple of times, fine. But if you really got to know him, wouldn't you know that he's the sort of person that could do this?"

WILL: Again, this is something that has haunted me, and I really thought about a lot. And I know you've done this post on Sam's personality. I've been doing a bit of learning about other cases of fraud, like white collar crime. There's a book in particular — the single thing that I found most useful — called "Why They Do It" by Eugene Soltes, a Harvard Business School professor. So we'll talk about Sam's personality; I don't think it's irrelevant or anything. There's definitely a lot of ways, a lot of dimensions, in which he was unusual or extreme. But the thing that's interesting from Soltes' book is he's really arguing against this bad apple view of white collar crime. Basically, there are various explanations you can give. One is this bad apple theory. A second is that it's a benefit-cost analysis: people do fraud or other white collar crimes if they think the benefits to themselves outweigh the costs. And then the thing that he points to instead is like a failure of intuition rather than reasoning. People often don't self-conceptualize what they're doing as fraud. It's often mindless; they're not even really thinking about what they're doing. They're also just getting the wrong kind of feedback mechanisms, where they're maybe doing something that's quite close to the line and getting lots of positive feedback, and don't have an environment of negative feedback where they're getting punished for that. And that all just resonated with my experience. Like I say, there are ways in which Sam was extreme, like his tolerance for risk, and that's well documented. And I'm like, "Sure, that must have been at play, this move fast and break things attitude he had, really just wanting to go big or do things big. Obviously, that must have played a role." The thing that has made me feel so confused over the last year and a half, and made me really somewhat pathologically try to figure out, try to work out as best as I can, what exactly happened, was just that the kind of stories and narrative that were circulating in November, when the collapse happened, just didn't mesh with my experience. And, yeah, it's been a long time trying to reconcile that. I'm not sure if I ever wholly will.

SPENCER: We've talked about him having some personality traits, like maybe being willing to take risks, maybe overconfidence, and those seem like factors here. But did you pick up on any traits that are more objectively worrisome? For example, I interviewed a whole bunch of people that know Sam well, and I asked them questions about what he was like as a person. And one thing that multiple people said to me is that he's very manipulative, and that he uses tactics to persuade people of things in a way that's much more intense than your typical person. I'm wondering, did you see signs of him being manipulative? Maybe you didn't even pick up on it at the time, but maybe in retrospect, like looking back at them.

WILL: Manipulative? I can't think of examples. He was bad at disagreement and would maybe treat it more like a status battle or something, rather than as a good reasoner or conversationalist would. I think maybe, on the foundation side, or at least the Future Fund side, there was much less reason for Sam to do manipulative stuff. I guess maybe something else: there were maybe a couple of times when it seemed like he'd verbally committed to one thing, but then it switched, in terms of personnel. That wasn't huge, really. That definitely wouldn't have been "Oh, this is a big red flag," more like "Oh, I think lots of things are happening. He's gonna set up a foundation and still doesn't have that much time for that." But there weren't cases where, say, he needed to fire someone... Other cases I've heard about since the collapse were more like, if someone criticized him in a quite significant way, he squeezes them out. And there's one case, someone who's been on podcasts and things, of him threatening to damage their reputation and so on. That sounds really bad and, as far as I can remember, is not something I saw while there.

SPENCER: I wrote a long essay about who Sam is as a person, and I gave different theories on that and discussed the evidence. One question that's come up a lot is, did Sam genuinely believe in Effective Altruism or was it bullshit? And there's some evidence people point to that he maybe was just bullshitting. I take the view that he actually very likely was genuinely a believer in EA principles, or at least utilitarianism. I'm wondering, did you see any sense at all that he didn't believe in EA? Or are you convinced that he truly is an effective altruist in his own mind?

WILL: I think if he was not, then this was the most wild long con ever. Because it was really... I met him when he was maybe 20 or something, I'm not sure. And he seemed just really engaged by the ideas. This was well before EA was popular or even had a name. And throughout, he just seemed committed to the ideas. And then there are obviously things about whether power corrupts you, and maybe you start believing that you're doing things for a good cause, but actually there are some other motivations going on. That sort of thing is totally plausible. But the idea that he was self-consciously bullshitting about EA just seems extremely, extremely unlikely to me.

SPENCER: The other big question I discussed in my essay is whether he has an extreme personality trait, which I call DAE, which stands for deficiency of affective experience — affective with an A, not with an E — deficiency of affective experience. And very specifically, the reason I use this term is because I'm talking about something rather precise, which is, first, little or no ability to experience emotional empathy. So imagine you were watching an animal be harmed: most people would just feel really bad watching that, whereas someone with DAE might just have no emotional reaction, and similarly watching the suffering of people or hearing about people suffering. And second, little or no ability to experience the emotion of guilt. And I talk a lot about different evidence for and against there. But I just want to read a few quotes, because I think that they set this idea up quite well. The first quote is something that SBF actually said: "In a lot of ways, I don't really have a soul. This is a lot more obvious in some contexts than others. But in the end, there's a pretty decent argument that my empathy is fake, my feelings are fake, my facial reactions are fake." In the book "Going Infinite," there's another quote that, I think, was very telling. Sam is quoted as saying, "To be truly thankful, you have to have felt it in your heart, in your stomach, in your head, the rush of pleasure, of kinship, of gratitude. And I don't feel those things. But I don't feel anything, or at least anything good. I don't feel pleasure or love or pride or devotion. I feel the awkwardness of the moment enclosing on me, the pressure to react appropriately, to show that I love them back. And I don't because I can't." And the final quote that I want to give is from the COO of FTX, Constance. Constance investigated what had happened in the debacle after it occurred, and here's the conclusion she came to after — I think she spent a month trying to figure out what happened — "He has absolutely zero empathy," she said. "That's what I learned that I didn't know. He can't feel anything." And that was from "Going Infinite" as well. So based on this evidence and some other evidence I point to, maybe it's reasonably likely that he has DAE or something like it, which, again, means lacking affective empathy and lacking guilt. I'm wondering what you think of that theory.

WILL: I think my view is: I'm definitely not at "it's definitely not true." I'm definitely at "it's a maybe." I think what's true is that it seems like he was very emotionally flat. This isn't something I ever thought while interacting with him; I meet a lot of people who are quite emotionally flat. [laughs] But then there are especially the Michael Lewis quotes — these are comments from his personal Google Docs and stuff — and then the question is, "Is that explained by DAE or something else?" Other candidates are autism — something that came out, actually even after the trial, was that he had an autism diagnosis — depression, which was well known, and ADHD and Adderall, which can both result in emotional numbness or emotional flatness. And so the question for me... I guess I don't know enough. I'm not enough of a psychologist to know how to distinguish between those sorts of things. Insofar as I understand DAE, it's closely linked to psychopathy.

SPENCER: It's one sub-component of psychopathy. It's more precise, yeah.

WILL: Okay, the question then is about remorse, where it seems like he certainly said a lot in terms of being remorseful after the collapse. And then the question is, well, was that just fake? With respect to the Alameda blow up, he also did some things that demonstrated remorse: tens of thousands of words analyzing what happened, owning up to a lot of mistakes that he thought he'd made. Now that I know more about what happened, I look back at that, and there are notable omissions. Both of these things, I think, are ambiguous or something. I'm like, "Okay, is this genuine remorse or is this just a big PR stunt essentially, a bit of spin?" And there, I'm at "I don't know." That's why I'm at "it's a maybe." The biggest piece of evidence is just the fraud that happened. In my mind, if I'm putting that to the side, then I feel like it's not an obvious kind of match.

SPENCER: A few things I want to say there. One is that guilt and remorse are somewhat different. What I'm referring to is the emotion of guilt. You do something wrong and you feel bad about it. You could still have remorse. You're like, "Damn, I fucked up. I shouldn't have done that. According to the principles that I intellectually believe, I did a bad job." Or, "Let me analyze the mistakes that I made and acknowledge them." None of that is exactly guilt. I just want to point out the distinction. I also want to just mention how this connects to the idea of him committing this fraud. Because as you point out, someone who's actually very normal psychologically, could commit a massive fraud; it does happen absolutely. And humans, virtually all humans, myself included, are good at rationalizing our behaviors, [laughs] where we can do things that are actually against our own value system, and then we can rationalize it, and we can normalize it and so on. These are all things that are possible. What I think of in this regard is just that DAE is a risk factor for these kinds of behaviors. So if you imagine two people who are CEOs of big companies, and one has DAE, I would think that that's a substantial risk factor for them taking unethical actions. Why? Well, it's not because people with DAE necessarily act unethically. It's because they don't have some of the guardrails that other people have, the guardrails of emotional empathy where you feel bad for people suffering, and the guardrail of guilt where you actually experience this negative emotion when you do something that's out of line with your values. So I don't know the multiplier and how much it increases the odds of someone doing bad behaviors, but it's probably fairly substantial. That's where I fit this into the conversation.

WILL: I guess maybe one question I have is just whether, if you have low affect across the board — if someone just experiences few emotions and doesn't experience them strongly — that counts as DAE. If so, then that really seems quite likely, just given the quotes you read and so on. And then a second thing is, I definitely agree it would be an increased risk factor. One of the things that was interesting from the Soltes book "Why They Do It" is that he really talks at length about how financial harms are just very different from the sorts of harms that we're used to in the ancestral environment in which we evolved. It's very different from, say, punching someone in the face. And there's always a worry, when you try to understand what was happening and the psychology of it, that it can sound like defending it, and I really don't want to do that. What happened was utterly abominable. It's more that I was scared of that, so instead I'm thinking just in terms of what lessons to learn. But anyway, one thing Soltes talks about is that, when doing the fraud, the harm is delayed, and it's also distant. It feels like numbers on a screen. Who are these people you're harming? So normal emotional reactions don't kick in. I do think DAE would absolutely be a risk factor. But then the question is how big a risk factor? And I'm not sure.

SPENCER: And also on the point about autism and depression, you're absolutely right that there are elements of those that can be similar, or seem similar at least, to DAE. For example, someone with depression can have a low affect due to their depression. Someone with autism can have trouble reading people's emotions, and so might not act in the way we expect when people have an emotional reaction, which can make them seem low in empathy. But I do want to distinguish it, because people with autism generally do have lots of empathy. It's just that they don't necessarily read what's happening emotionally in an accurate way all the time. So if they do read it, they might have the normal emotional response, but they might not read the situation as other people are reading it. Similarly, someone with depression, I think, typically does experience normal empathy. But they just might have a flat affect because of anhedonia, so it might seem like they don't care about things. But I could see those things being easily confused.

WILL: That, I guess, is at least why I'm not "Yes, this is definitely correct."

SPENCER: The accusations against him that are most specifically about being indifferent to harm are the ones around his romantic behavior in the office. I believe it was Time that quoted someone saying he had inappropriate romantic relationships with subordinates. One of the people that I interviewed about him told me that, according to this person, he would target women in the office whom he wanted to have sex with, and then, once he'd had sex with them enough and was no longer interested in them, they felt he would mistreat the person and sometimes relegate them to unimportant roles in the office. With these accusations, it's a bit hard to know precisely what happened, and how certain we can be of the claims. But if those things really did happen, and he did this in a severe way, that could be further evidence of an indifference towards causing harm.

WILL: I totally agree, that would be excellent evidence. Again, that wasn't something I knew about or heard about in terms of the Alameda blowup. I'd heard about the scandal midway through 2022. I heard about one person at early Alameda that he dated. But I hadn't heard that stuff and totally agree that that certainly seems like evidence for something going in the direction of DAE.

SPENCER: But the one example you did hear about was just that he'd had a relationship with an employee — was that all?

WILL: Not even really an employee, but someone at the same organization. Yeah, I didn't hear it was bad or anything.

SPENCER: Another thing that a bunch of people have asked me about is: could this just be a result of naive utilitarianism? Imagine you're a perfectly utilitarian agent that's just, "I'm going to maximize the expected value and I'm willing to bet the entire world on it." And this actually came up in interviews with Sam where, in one interview, he basically said he'd be willing to bet the entire world on a wager with slightly net positive expected value, even if the world would be destroyed if he lost the bet. Do you see that as playing a significant role here?

WILL: I was certainly worried. When the collapse happened, I had lots of hypotheses about what could have happened, and this was one of them. Maybe it was going to come out in the trial that there was some spreadsheet entitled 'cost-benefit analysis of fraud' or something like that. One argument against this is that what in fact happened seems extremely, very much negative expected value from a utilitarian perspective. And then, as I got to understand the details of what happened more, it seems to me that wasn't what was going on, at least not directly. I'm definitely worried about indirect pathways. One is moral licensing. There's this finding in psychology — I guess I don't know how well it replicates — where, if you do one good thing, then that makes you feel like, "Oh, I'm a moral person, therefore it's okay to do something else that's immoral." And so there are various kinds of experiments on that. I certainly worry if that was at play. I also worry less about a calculation about risk, and more just about an attitude towards risk, like not worrying enough about downsides, not paying enough attention to downsides. But at least in terms of my best-guess understanding of what got them into that situation, it looks like it's not a carefully calculated plan. And then the fact that it was so incredibly negative EV, that it did not look good for any utility function, adds to that. And then I guess the final thing is, again, the kind of base rate from other white collar crimes, where it really looks like it is quite uncommon for there to be some kind of careful benefit-cost analysis. And you might say, "Well, Sam was this fairly uncommon person, so it's not that strong a piece of evidence," but I think it's some evidence as well. And part of the argument here, again, in the kind of stories that Soltes gives, is that it's just mindless decisions where people are not getting the feedback they should have. But part of the thought here as well is, if you're looking at people who've committed massive fraud, probably that's some of the worst decisions they've made, and probably they were not fully, deeply reasoning about that. And that, in fact, then seems to be the case in these other cases often.

SPENCER: We don't have evidence that Sam did a utilitarian calculation and decided to commit fraud based on it. But suppose that he had, and suppose you somehow learned about this, what would you say to him? If Sam's like, "Well, I did the utilitarian calculation, and the best thing is to commit fraud because it's going to maximize net utility in the universe into the infinite future"?

WILL: The very, very strong argument, even just... So I do not think you should be certain in utilitarianism, far from it. But even putting that to the side, I think there's a very strong argument on utilitarian grounds for not doing that. And exactly what happened, the enormous amount of harm that has resulted from this, is, I think, a very strong part of that case. And this has been known for centuries, literally, from utilitarian philosophers: if you have these very strong moral rules and heuristics that really seem to be for the best in general, don't then think, "Oh, I'm so smart. I've done the calculation and it's actually correct in this case." And in fact, Sam knew this. Again, in terms of the probably pathological amount of time I've spent on this, you can do a bit of Facebook archaeology. There's this old post by Eliezer Yudkowsky called "The ends don't justify the means among humans," which is making this argument that's been made at least from Mill onwards: "Look, you are not some god calculator. And even if you are 100% consequentialist, follow the rules, respect side constraints; that's what's gonna do the best thing." And this got shared. And Sam's comment on Facebook is, "Oh, yeah, well, that's just obvious. Why are you sharing this?" Now, that was pre-FTX Sam; maybe things changed.

SPENCER: It sounds like, overall, maybe there was a little bit of worrying stuff around how risk taking he was, but you didn't really have a sense that he had any substantial negative traits. You didn't have any inkling that he had anything to do with fraud of any sort.

WILL: I guess I want to say, he had negative traits. He was a difficult person. He was hard to disagree with. He was very overconfident, very arrogant. He was definitely — I wouldn't have called this a negative trait at the time — very gung ho, very pushing forward in a way that now, in hindsight, I think is negative. I definitely also had worries about the long term as well. Here's a dynamic that happens with CEOs of successful companies: over time, they just lose feedback mechanisms, and get surrounded by sycophants, especially if they've just been really successful and very confident in their own view. I was worried that that could happen, too. But that was more like how might that develop over the next ten years or something, rather than something that was happening then. But did I think that he was someone who was capable of committing historical scale fraud? Then, no.

SPENCER: Given what you knew about Sam at the time, do you think it was a mistake to put so much trust in him as a spokesperson for EA? This is one thing that people have criticized: okay, it's one thing to accept money from someone — maybe there's a level of due diligence required there — but there should be an even greater level of due diligence when presenting someone as a spokesperson for your cause, or for your community. On the other hand, I'm not sure to what extent you even had a choice in this, or whether Sam just held himself up as a spokesperson without anyone suggesting that he do it.

WILL: Again, this is something I've really thought about a lot over the last year and a half. And there certainly wasn't any sort of deliberate, coordinated effort to make Sam into a spokesperson or figurehead. From Michael Lewis's book, it seems like there wasn't even a super deliberate decision at FTX for him to become so public in the media. It's just something that he started doing, and it seemed to him, from his perspective, to be going well, and so he just started ramping up. In terms of my relationship to this, I think it was mixed. I feel like the biggest thing that I did to tie our brands, the brands of him and EA, closer together was to start advising the Future Fund and being on the website, for example. I also did talk about him on podcasts as a successful example of earning to give: someone who was extremely wealthy, yet was planning to give away 99% of his wealth, something he told me many times that he was planning to do. I also did some things to try and separate out the brands of Effective Altruism and FTX. This wasn't because of worries about who Sam was as a person; that's not how I was thinking about things at the time. It was more that, for any company — let alone a crypto company — I wouldn't want Effective Altruism as an idea to be too closely tied to it. So, as I mentioned, one thing was to have the existential-risk-oriented giving have this separate name, rather than being the FTX Foundation. Another thing was, some way through 2022, a project he was excited about was having this big nonprofit media venture. I thought that was pretty half-baked and just quite a bad idea, and so that was something I pushed back on, too.

SPENCER: When you saw him being interviewed by really major media companies about EA, and seeing that a huge number of people were hearing about Effective Altruism for the first time through Sam's voice, how did that make you feel? Was that exciting to see EA being pushed out there? Or were you apprehensive because you wouldn't necessarily have chosen him as the person to promote EA? What was your feeling about it at the time?

WILL: I think, initially, I was apprehensive, again not because of any attitudes to Sam, but just him being a crypto billionaire. Crypto has a very mixed reputation. Billionaires do not have a great reputation. And then the thing that surprised me was that the coverage seemed so positive. The media were really fawning over him. The pieces were just uniformly very positive. And that certainly took away my apprehension. So it certainly wasn't the case that I was thinking, "Oh, this is terrible that Sam is becoming so famous," and pushing against him.

SPENCER: One thing that Sam has been accused of is misrepresenting his lifestyle when he's talking to the media. For example, painting himself as a really frugal person, rather than someone living a life of luxury in a resort in a multimillion dollar house. Do you think that that's a fair characterization, that he was misrepresenting his life?

WILL: That was definitely a big part of the narrative over the last year and a half: that he was painting himself as a saintly or monkish figure when really he was living the high life, sometimes with claims about polyamorous orgies on his personal yacht or something. The exact story varies. And what's absolutely true is that he and many of the other higher-ups at FTX lived in Albany, which is an extremely high-end resort in the Bahamas. This wasn't a secret. I mean, there's media that Sam did that discusses or shows the penthouse. He's in this YouTube video called "the most generous billionaire," and that has the big shot of him in the penthouse. Similarly, there's the main puff piece on Sam in Bloomberg, written by Zeke Faux. It's talking about his lifestyle, and it says, "He lives in a penthouse. It's got a college dorm feel; he lives with nine other people. It is still, though, a penthouse in the island's nicest resort." And so I think it's very reasonable, looking at the penthouse, which was very luxurious, to think, "Oh, okay, maybe his whole life, the whole way he was being portrayed, was really misrepresenting things." But I think that's not quite accurate. At least in my experience, I didn't really see them living the high life. I didn't see them on yachts or anything. I didn't see them having wild parties. My strong best guess is that the idea that they were all polyamorous and in relationships with one another was not accurate either. So really, what I saw was just people who were working all the time. In the evenings, they'd play video games, they'd all play bughouse chess, or they'd have dinners where Sam would cook. I definitely asked about the penthouse. That whole side of things definitely felt pretty wild, and I talked with Nishad Singh about it. And his explanation was: yeah, it was more luxurious than they would have wanted, but the issue was just that they faced major supply constraints in the Bahamas. There weren't really mid-range properties for them to move into, and they needed somewhere where they could have a lot of employees of the company all together in a campus feel, somewhere with good security, somewhere that would be enticing for people to move from the US to the Bahamas. That's what I was told at the time. And it did seem pretty credible at the time, partly because I wasn't seeing other evidence of the luxurious living that you would expect from people who are worth many billions of dollars. And then there were other things that did suggest they were maybe quite supply constrained. The offices, for example, were really not that nice, with a bunch of huts in this basically big parking lot carved out of the jungle, and they were really quite cramped. They did provide free food, but it wasn't that nice. And Nishad commented that maybe the offices were less nice than they would have wanted, and the place where they lived was nicer than they would have wanted. So I think that kind of overall narrative was just largely off, and I did reread the coverage of him at the time, from Zeke Faux and others. I also went back through every occasion where I'd talked about him, where I would mention him as an example of successful (I thought) earning to give, planning to give away 99%. But I didn't really find anything that I thought was misrepresenting the way he lived his life. I did, however, relisten to the 80,000 Hours podcast interview with him, and Rob Wiblin asked him, "Oh, any luxuries that you like, anything you like to indulge in?" And one thing he said was that nice apartments were something he liked.
It wasn't something I noticed at the time. And so yeah, perhaps the story that I was told by Nishad was not 100% accurate.

SPENCER: I imagine this must have been just an incredibly difficult time for you. And so I'm curious about the emotional experience of going through this, watching as this all falls apart, watching the media storm around it, and how you've processed that.

WILL: It's been the worst year of my life by quite a long way. Yeah, I obviously don't want to dwell on it. Other people have been harmed much more than I have, and I don't want to play my own tiny violin too much. But it was just a horrible time in so many ways. One was just the sheer harm that was caused, and the fact that it was caused by this person who, you know, like I say, I had issues with Sam, but overall I admired him and I respected him. Second was feeling like a moron for that reason. Definitely anger as well, anger at him and the others. And then, finally, there was just this sense of confusion and this desperate desire to understand exactly what happened, what exactly was in people's minds, in order to try and reconcile what my experience was like with what actually happened. And I made some progress on that, but not so much. But then the aftermath was really rough. My mood got a lot worse, pain came back, and so on. And that was significantly because what was happening with EA and these organizations was just the hardest thing I had ever had to deal with, by a long way. And I really couldn't help: I was recused from the Board, and I wanted to do public comms stuff and was discouraged or blocked from that. I have too personal an attachment to some of these organizations, and so it really felt as if my child was on fire and I was just behind a plexiglass screen and couldn't help, while they were being taken care of by someone else. And obviously, it's okay now, but that just felt really hard. But like I say, other people have been harmed much, much more than me. I am now feeling much more optimistic about EA as well. At the time, I thought there was, I don't know, a 20% chance that the FTX collapse would just be the end of EA altogether, that we'd just never recover. Whereas now, I think, there have been these major changes. People just took it really seriously. The ideas are still really good and still important. And I just see people wanting to get on and try and tackle the big problems that we're facing in the world. And that's just inspiring and motivating.

SPENCER: So what should we learn from this? And how do we prevent things like this from happening again?

WILL: Yeah, tons to learn. Two big things that are quite closely related: one is about governance and the importance of good governance, and another one (which is related) is about EA exceptionalism, and being against the idea of EA exceptionalism, which I can explain a bit more. So on the governance side of things, the thought is just: in assessing whether to expect bad behavior from someone, and in trying to reduce the incidence of bad behavior, there are really two ways of looking at things. You might ask, "Is this a bad person?", focusing on character. Or you might ask, "What oversight, what feedback mechanisms, what incentives does this person face?" And one thing I've really taken away from this is to place even more weight than I did before on the importance of governance, where that means the importance of people acting with oversight, with feedback mechanisms, and with incentives that encourage good rather than bad behavior. I think this, in significant part, comes from just learning about other frauds. The lesson that Eugene Soltes takes from his study of white collar crime is that, actually, typical cases of fraud come from people who are very successful and very well admired, not the sort of people where everyone was saying all along that this person is a bad apple and up to no good. Bernie Madoff was even the chair of NASDAQ; other high profile fraudsters include the CEO of McKinsey, who had decades of well admired work behind him. And so what Soltes really emphasizes instead is the importance of good feedback mechanisms, because people are often not making these decisions in a careful, calculated way. Instead, it's a mindless, incredibly irrational decision that people are making. And he gives examples. One is the VC Ben Horowitz, who said he almost committed a major fraud, a particular payment scheme that, in fact, put other people in prison, but he was lucky to have a good lawyer whom he consulted and who advised him not to do it. And yeah, that's my honest diagnosis as well. If I think about how society should respond to FTX, how society should structure things so that things like this don't happen again: I'm definitely not claiming that character plays no role, but from what we've learned since the collapse, it just seems like FTX had truly atrocious governance. I mean, I think I heard they didn't even have a Board. And similarly, there have now been many cases of people running crypto companies being arrested, or at least facing various civil or criminal suits. And that, it seems to me, is because it's so unregulated. And then how does this tie into exceptionalism? Well, I think there's a way that it's natural to think, or at least that I was maybe swept up in, where you might think: people who are into EA, they're really going to donate a lot, they're vegan, they seem really morally committed, and so you just generalize from those traits to other sorts of moral traits, like integrity and so on. And my very strong attitude now is to assume that people who are into Effective Altruism are basically just at the batting average on those other sorts of traits, unless there's really quite good evidence for thinking otherwise. And I think this lesson has all sorts of implications.
Certainly, I've been happy to see that, across EA, there's just even more attention being paid to good governance now compared to before. But yeah, there are all sorts of other implications from really thinking about what incentives people face, given the positions they're in. One lesson is just trusting VCs less. There was a post from Jeff Hoffman that I thought was just really excellent, pointing out (it wasn't something I'd thought about before the collapse) that VCs are not really incentivized to care that much about whether a company is fraudulent or not. What they care about is how much money the company might make, and its chance of going to zero. And whether it goes to zero because it's just a company that stopped being profitable, or because it's fraudulent, is not that big a difference from the VC's perspective, whereas if you're deciding to work with someone, or if your brand is getting tied to someone or a company, then that becomes a much bigger difference, besides the obvious importance of wanting to prevent fraud for its own sake. And this also relates to how I'm thinking about AI and the governance of AI. There's a view one might have, and I've definitely seen others have, which is, 'The really important thing is that the good guys are in charge of AI development, or at least people who are adequately sympathetic to the risks that AI might pose.' And obviously, I think that's important. I think it's relevant. But the bigger thing, I think, is just: do we have good governance systems for AI companies, or for AI development in general? Is there appropriate oversight? Is there appropriate feedback? Are the incentives aligned such that we're encouraged to build safe rather than dangerous AI? And, yeah, what I hope to work on in the coming years is that project of helping design those governance systems.

[promo]

SPENCER: On the question of who commits fraud, I think you're right to point to environment, which has to do with incentives, and also to point to feedback mechanisms, which can adjust those incentives and keep things on the up and up. But I think character is a really fundamental part of it as well. If you think about extremes, there are people that I think would not steal from someone even if they were going to starve to death. That's an extreme person, right? And then I think there are people that will steal from others even if they essentially have no need to do it. Just to give one example of someone who some have suggested has that propensity: the founder of the Fyre Festival, who, as I understand it, while he was out on bail for the Fyre Festival, was accused of committing a new type of fraud involving ticket sales. So if those accusations are true, he might be the sort of person that is essentially going to be scamming people even if he has no real reason to do it. And of course, most people are not at either of these two extremes, right? Most people would be willing to steal if the incentives were great enough, like they or their family were starving, and they would not be willing to scam someone just as a matter of course, for no particular reason. So I guess maybe I think of it as more multiplicative, where it's like character times your incentives to do the bad thing. And then what feedback mechanisms do is shift those incentives. Like, if you had a really, really bad person who has a really high propensity to scam others, but there was sufficient oversight, maybe they wouldn't do it, because the chances of getting caught would be too great, and so on. So yeah, what do you think of that model?
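To make that multiplicative picture concrete, here is a minimal, purely illustrative sketch. The function name and every number in it are hypothetical, chosen only to show how oversight works by shrinking the incentive term rather than by changing the person:

```python
# Toy illustration (hypothetical numbers) of the "character x incentive" model
# discussed above: expected harm is the product of a person's propensity to act
# badly and the strength of their incentive, and oversight dampens the incentive.

def expected_misconduct(propensity: float, incentive: float, oversight: float) -> float:
    """All three inputs are on a 0-1 scale; purely illustrative, not a real metric."""
    effective_incentive = incentive * (1.0 - oversight)  # oversight shrinks the payoff
    return propensity * effective_incentive

# A fairly bad actor under strong governance...
print(expected_misconduct(propensity=0.8, incentive=0.9, oversight=0.9))  # about 0.07
# ...can come out lower-risk than a middling actor facing no oversight at all.
print(expected_misconduct(propensity=0.4, incentive=0.9, oversight=0.0))  # about 0.36
```

On this toy model, governance and character both matter, but governance is the lever that applies to everyone, which is the trade-off the two of them go on to discuss.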

WILL: Like I say, I think character's got to play a role. The thing which makes me more worried about ideas like, "Oh, we could identify bad people and then make sure they don't have power," or, conversely, identify good people and make sure they do have power, is just this: if you look at the big, high profile frauds — and, you know, I'd love someone to do a deeper-dive research project into this than I have done — whether Bernie Madoff, or Jeffrey Skilling from Enron, or Elizabeth Holmes, or Rajat Gupta, I think the predictive power would have been limited. If someone had been going around asking, "Who are the bad characters in finance, or tech, or whatever?", I think it probably would have failed in many cases. Not all; there are some cases, I think, where it's just like, "Wow, this is a bad, bad person." But in many cases it would have, whereas some things, like "Does this company have a Board?", are just very legible and very predictable. So the thing I at least think more strongly is that, in terms of what we can do to try to reduce bad behavior in the future, or kind of low-integrity, unethical actions, the lever that's more compelling to me is on the governance side.

SPENCER: I'm not sure I agree with that. I mean, I don't know the specific cases. I haven't investigated the personalities of those particular fraudsters. And of course, even someone with an extreme personality, with really negative personality traits, is unlikely to commit some massive fraud, right? Obviously, the base rate of these things is very low. But I do think that someone with certain sets of character traits might be at 5x or 10x the risk of doing something like that, if they're in that kind of position. Would you agree with that?

WILL: Yeah, probably. Though it then depends on how well we can measure that. That's part of the reason why I think the governance side of things is a more promising lever. Maybe if you can get people to do personality tests and not lie on them, and so on, you can get that predictive power, but I think that's harder. Unless maybe you just set a kind of bar such that the majority of CEOs would not pass it or something. That could rule out so many people that you've also rooted out any of the bad characters. But then that poses some other issues, I think.

SPENCER: There's no question that governance is helpful; there's no reason not to think that good governance is going to help. But I guess I would argue that on top of good governance, you also want to try to identify people that are more likely to cause a lot of harm. It might be hard to do. It might be that we don't have the ability to do that all the time. But there are times, I think, when you can recognize that someone is a bad actor. I don't know how accurate we can always be, but there are times when I feel like I've done it, where within a day of meeting someone, I had strong suspicions about their character that were borne out. And of course, there might be a selection bias. Maybe I got lucky. Maybe I forgot cases where I thought someone was a bad actor and they didn't do bad things, and so on. But I do think it can be done substantially better than chance.

WILL: Yeah, and again, I think I agree with that. I'm just struck by another thing that Eugene Soltes says. The thing he really emphasizes is that we should have real humility here about what people can end up doing. He even goes as far as to say, I think, that most people, or at least most people who are CEOs of large companies, which is already a group selected quite hard in terms of personality, could end up committing white collar crimes, which is really striking. It seems too strong to me. But then I also think, and I guess this is just another reflection from the case of FTX, that if there had been this kind of discussion about Sam's character in 2021, it would have been quite mixed. I know a lot of people have spoken out about negative views they had, and he certainly had his detractors. But there were other signs that seemed positive, I thought, too, and I think people would have said that. One thing was his attitude to honesty, where, as things turned out, he in fact was fairly dishonest. But certainly the way he presented often seemed to me strikingly honest, where he was sometimes willing to say things that seemed quite unnecessary, even bad for him, just because that's what he thought. A couple of public examples: when he was asked about how much he might spend to prevent Trump from winning in 2024, upon just very gentle prodding from the interviewer, he said, "Oh yeah, it might go as high as a billion dollars." That was something he didn't need to say; he could have honestly said, "I don't know how much exactly," because I don't think he did know. But obviously it was big news, and I think it was a strategically bad move. Another example was on Odd Lots, where he talked about how many crypto products are essentially Ponzi schemes. Again, strategically a bad move for him, because many people interpreted it as him saying he was running a Ponzi scheme, which I think is definitely the wrong interpretation, and not what the interviewer thought either. But the impression I got was, "Wow, this is just a person who says, pretty transparently, what's on his mind, even if it's really quite a bad look for him." Obviously, looking back, that was a huge misjudgment. And in fact, as Michael Lewis has said, he would omit things, very salient omissions, though Lewis thinks it's not clear he was ever outright lying; he was just very good at answering a different question. So I guess my lesson, at least, is that I'm going to focus a lot on governance. In some cases, I agree: I can think of other cases of bad behavior where I immediately thought, this is a sketchy person, and they in fact were sketchy. But I think of a lot of cases, including this one, where it's more like either you just don't have worries about a person at all, or it's mixed and complicated and conflicting. So yeah, that's kind of how I'm thinking about it.

SPENCER: Maybe one way to think about it is that, while there are extreme cases of people who are at very highly elevated risk of doing bad things, most people fall somewhere in the middle. And for someone who falls somewhere in the middle, other factors like incentives and lack of governance are maybe going to be much bigger factors than whether they're slightly higher on this personality trait or slightly lower on that one. Yeah.

WILL: Yeah, and if it's multiplicative, then even if you take someone who's quite a bad character, if you put them in a system of good governance with appropriate constraints and oversight and feedback and incentives, then that person will not, in fact, end up doing really bad things. So yeah, like I say, I've spent quite a decent chunk of the last year trying to process all of this. There are lessons in my own case, and then there are lessons for EA. In my own case, I think I am just too trusting. When I look back at everything, I think I did just take the things that Sam said, and then Nishad in particular, because Sam was very busy, so I talked to Nishad more, and believed the things he said. They really presented to me this picture, this narrative, of the company as deliberately holding themselves to higher standards than other crypto companies: firstly, because they were aiming to give away the money and therefore had to, because otherwise they'd be seen as hypocrites; and secondly, because they were aiming to get regulated, so they needed to be the good guys of crypto. That's what I got told from the start, from the first time I reached out to Sam after FTX, all the way through to me asking Nishad whether the company was okay in September 2022. And I do think, in general, I'm too apt to trust people, especially people that I think are on the same team as me, where that team is: if I think someone's in EA, then probably I'm just not really entertaining the idea that they could be doing something really bad or deceiving me. So that's definitely a lesson I'm going to personally take forward. Similarly, also just paying way more attention to a much wider range of outcomes, like error bars, in terms of how bad things could be. The idea that maybe FTX was committing enormous fraud, illicitly moving billions of dollars, just really did not cross my mind. But yeah, in the future, I'm really going to be paying attention to worst case outcomes. I don't know what difference that would have made if I had, but it's certainly something I'm going to bear in mind. And then the base rate of fraud, as well, is just much higher than I thought. Ben West has been doing this great analysis where, I think, among Y Combinator companies, 1% or 2% of companies end up being fraudulent, or the founders commit fraud. Among Giving Pledge signatories, so billionaires who pledge to give at least 50% of their wealth away, 40% have had some sort of scandal, or at least been accused of one; 10% have been convicted or found liable, in a criminal or civil court, of some sort of financial claim; and 4% have spent at least a night in prison. These are all numbers that are much higher than I would have thought. And then finally, also, just against this idea of EA exceptionalism, where you might think, "No, but these are good people. Look, they're giving away money, they're vegan." I think now, instead, the attitude should be: EAs are, okay, sure, more into using their time and money to try and make the world better, and more into thinking really intensely about how to do that. Basically, assume that in every other way, they're just at the batting average. So yeah, that's a bunch of lessons for me. Sorry, also, there's what happened with FTX, and then there's the aftermath. And one thing, again, that's a failure of mine is a tendency to maybe take on too many things, to have too many roles at once.
And that really affected things in the aftermath of the collapse, where I was on the Board of Effective Ventures, I was an advisor to the Future Fund, and I was also, if anyone was, something like a spokesperson for EA — though I don't think there should be a spokesperson. In that kind of moment of crisis, all those things infected each other in a way that I think was quite bad, quite damaging. And so yeah, in the future, I'm going to be much more focused and streamlined. I'm not going to be on Boards for a while. So that's a bunch of stuff on me. Then there's also stuff on EA, EA's lessons as well, that we can talk about if you want.

SPENCER: Yeah. Do you think this should alter how EA is discussed or how the ideas are put forward?

WILL: Hmm. I mean, yes. Both causally, because now there's this huge stain on its reputation, and also just because EA has gotten bigger and it's gotten more attention now. In terms of how EA was discussed, I think in the early days of EA, we were just putting a lot of weight on what's distinctive about EA. And the distinctive thing is using more of your time and money to do good, and, with that time and money, trying to think really hard about how you can get the biggest impact. That's the innovation from Effective Altruism, and it really placed a lot of focus on that. And I think the worry is that, especially as EA grows, people end up thinking that's what we think the whole of morality is. And that's not true, certainly not for me. What we could do instead is accept that we shouldn't just talk about the distinctive part of living a good life, but instead, from the outset, be clearer: "Look, this is what a good life looks like. It's abiding by these virtues: cooperativeness, integrity, honesty, humility — definitely humility — as well as turning up the dial, relative to common sense, on what philosophers call beneficence, that desire to do good, and on a truth-seeking, scientific mindset, or what some people call the Bayesian mindset." That's the picture of a good life, at least from my perspective, and I think that would help a lot with people understanding what we're about. Probably in my own case as well, we've done too much in the way of, "Hey, this is what's really distinctive about EA," and then the kind of afterthought is, "Oh, and obviously you don't violate side constraints in order to do more good; obviously be cooperative." I think having that front and center is just going to be necessary for how we talk about EA. And I think that was going to be necessary anyway, but now more than ever.

SPENCER: I have this funny experience where sometimes when I talk to younger effective altruists, they just seem really gung ho on utilitarianism, like: just do the best thing, maximize the good, turn everything into hedonium. And when I talk to people who are sort of higher up in EA, they often say things like, "Well, we can't be sure which moral theory is right. And we need to have moral uncertainty. And we should not violate side constraints. And we need to consider virtues." And it's just this disconnect, where a lot of people have the impression that EA is just about being utilitarian, and yet a lot of people, like yourself, I think, have a much more nuanced perspective that is not just pure utilitarianism.

WILL: That's right. I mean, I guess there's a classic thing of, you know, 18 year olds getting really taken by an idea and wanting it to be as extreme as possible. Certainly, in my philosophy classes, that happens a lot. [chuckles] But yeah, I do think it's really an unfortunate conflation of effective altruism and utilitarianism. This has been something we have tried to dispel for many, many years. It's absolutely true that many, even most, people within EA are sympathetic to consequentialism. Certainly, it's about placing more weight on consequences, because common sense morality, in practice, doesn't place very much weight on consequences, given that people spend their income on luxury goods when it could help other people so much. But yeah, I think both as people get older and as they think things through more... there's no way you should be certain of utilitarianism. I tried to work through my credences once. I've had a bit of criticism for this before, because people are like, "No, you really are a full-blooded utilitarian." And I'm like, "I've got a whole book on moral uncertainty." [laughs] But when I worked through my credences, I think I ended up at like 3% in utilitarianism or something. I mean, large fractions go to — people are often very surprised by this — error theory, the view that there's just no correct moral view, and a very large fraction to some moral view we've never thought of. But even within positive moral views, I'm 50-50 on non-consequentialism and consequentialism. Most people are not consequentialist, and I don't think I'm way smarter in moral reasoning than the vast majority of people in the world. I'm not sure it's the sort of thing you can have amazing expertise at, even if you are a professor of the subject. And I think this is why the side constraints thing, just being a good citizen, is super overdetermined, even if you're 100% consequentialist. In fact, I think the strongest argument is that if you've done some fancy calculation such that you think some grave, commonsensical violation of morality is the best thing to do, and you should do it, almost certainly you have made a mistake. But the other case where I think it becomes more and more important is when we start thinking more about longtermism. If you just get really into the mode of having one moral view, and you're like, "Oh, what we want is for the future to look like this one moral view, and that's what I'm going to push towards," that's really quite worrying to me, versus, "Okay, what we want is to push towards a future where there is scope and time for people to reflect and reason and deliberate, and maybe cooperate with each other and trade and have gains from trade and so on, so that we can figure out what a good future looks like and then act on that." That's what I really think we should be aiming for, because I think our best guesses at the moment about what's right and wrong are probably wildly off.

SPENCER: Do you have any takeaways from the FTX debacle about what the leadership of EA should look like?

WILL: Yeah, I think a big thing for EA, in general, is decentralization. I've thought about this a bit, and it has, in fact, happened, in kind of two ways. There's decentralization in the sense that, previously, most of the core EA organizations or core EA projects sat within one legal umbrella, and that posed a bunch of issues, especially after the collapse, especially in a moment of crisis. And that's going to change: all of these different projects are going to become their own organizations. I think that's going to be really good, just in terms of having more leadership. One issue that I pointed to in my own case was leadership often having multiple roles at once, and I think that was probably at its worst for me. So I think having things be more siloed is generally going to be better, and it's more possible now. If something's young and fast growing, it's kind of natural that you end up having people doing many things; it's a classic startup issue. And so I've been trying to reduce my formal responsibilities over the last year and a half as best I can. And then the final thing, which relates to that, is that there's actually been a remarkable refresh in EA leadership, at least in the sense of the organizational leadership. The CEOs of the Centre for Effective Altruism, 80,000 Hours, and Open Philanthropy, and the lead of Open Philanthropy's Global Catastrophic Risks work (it's not a CEO role, but the lead), all of those are different people now. The Board of Effective Ventures is either wholly different or will be, with the remaining members planning to move on. This isn't all directly caused by the FTX collapse. Sometimes it's indirect, where the collapse just caused people to really burn out, because there was just so much to do and the work was pretty grueling for quite a while. In other cases, it's more just, "Okay, let's each have more of a focus, so there's less overlap between the different roles, and maybe more clarity." And honestly, overall, I think that's just a really good thing. I'm really excited by the new leadership at these organizations. I often think of EA as like a teenager or adolescent: it's maturing into becoming an adult, but it's in its slightly awkward teenage years at the moment. And this new leadership is just the people who can help EA do that. And so I'm excited about that.

SPENCER: A funny thing about the Effective Altruism community is it's never really had a leader exactly. But I think, until very recently, maybe even today, if you ask people who's the closest to a leader, I think a lot of people would point to you. And I'm wondering, is that something that you want to get away from in the future? Or how do you feel about that?

WILL: Even prior to the FTX collapse, this was already something I felt in limbo about in 2022. Another thing that I could have done, and kind of wish I had done, is get more clarity on that, firstly for myself, and then communicate it to other people. There's 'leader' in two senses. One is leading this community of people, and I think there's just never been anyone in that role. The person who's closest is the CEO of the Centre for Effective Altruism, which I was for a time. But really, that's just CEA. It's kind of the main organization, but it's just one among a community. You could disagree with everything CEA is doing and still be part of EA, and maybe that would be a good thing, in fact. And then there's being a kind of figurehead, where, yeah, it was certainly the case that, if it was going to be anyone, it would be me. That's kind of how it was perceived. In 2022, the extent to which that was the case took me a bit by surprise, during the 'What We Owe the Future' launch in particular. It was already a bit like that, but the 'What We Owe the Future' launch was so big that it really intensified it. And I definitely felt uncomfortable about it, mainly because suddenly, especially as EA got bigger, it was like, "Am I a politician now or something? Is EA a special interest group, and I have to represent it and not upset people?" People would lobby me a lot, and I felt a lot of pressure, in particular, to have certain beliefs or not have other beliefs, as well as many other pressures too, and I just really didn't like that. I got into EA because I want to think things through in a much more first-principles way and then live my life in accordance with how I've thought things through. And then suddenly it was like, "Oh no, I've lost ownership of that." Including some things like... I don't know... I would have liked to be able to say, "This alignment work: is it good? Is it actually helpful? I'm not sure!" That's the sort of thing where I would feel pressured not to say, because then there would be this lobby group against me or something. And then the other thing was just that I don't think EA is the sort of movement that should have a figurehead or a leader in this sense. Like, who's the leader of science? Well, there isn't one. It wasn't Feynman when he was around. It wasn't Carl Sagan. There are people who might be representatives of science and advocates for science, but the idea that they would be leading a movement would be really quite off. And that's at least the vision I have for what EA is; it's much more like an intellectual current. It's not a special interest group. So that was another thing that made me feel uncomfortable about it, too. But like I say, I was in limbo, because I was also like, "Well, maybe this is an important role to have, and if it's anyone, then it should be me." But I feel that's at least now a bit clearer.

SPENCER: Alright, so moving on. Let's talk about your work now, which we touched on at the beginning of the episode, and why you're so excited about it. So it's the topic of post-AGI governance, and what happens in society if we have this incredibly explosive growth due to AI. How do we manage such a world if that happens? How do we prevent bad things from happening as a result, such as extreme concentrations of power? What do you see as some of the key subtopics that you want to focus on?

WILL: There are maybe three things you could do here. One is just making the case for explosive growth, or finding out what the case is, and then arguing for or against it. That's something I'm probably not going to particularly contribute to myself. I want to really understand it in as deep a way as possible, and perhaps help in explaining it or communicating it. But mainly, I think, that's going to be work for economists or people in adjacent areas; at the moment, Epoch and Tom Davidson's takeoff speeds reports are really the best things on this topic. Then, secondly, there's focusing on particular issues that are especially neglected. One is what moral consideration or rights we should give to digital beings: welfare rights, economic rights, and possibly political rights as well. This will just be an issue that the world starts confronting, and it seems enormously important, and also something that we should really try and get right ahead of time. Because I think, quite plausibly, once we've actually started integrating digital beings into society, it's going to be very hard to change the norms or legal rules governing that. If every person just owns 1,000 digital people, well, that's going to be hard to get away from. It's going to be hard to switch, I think. And similarly, vice versa: if you give digital people the vote, well, they probably will increase in number much faster than humanity will, and so will become the vast proportion of the voting populace. So one of the reasons this is so important, I think, is that it's just so hard. We just don't have good views on what an ideal society with both humans and digital people in it looks like. A second specific issue to focus on that I think is particularly neglected is the governance of resources that will be newly valuable after an intelligence explosion. As you create more and more artificially intelligent agents to scale up the economy greatly, energy is going to be quite a plausible bottleneck. That might mean there's just a big race to grab surface area on the oceans for solar farms, because that's where most unclaimed area is. Also potentially space resources as well, because the Sun has a billion times the solar output compared to what lands on Earth. And, again, we're imagining this scenario where we've leapt forward centuries' worth of progress in just a few years; at that point in time, you really could be able to harness that. And whoever was first willing and able to grab those resources could really just control what happens on Earth and within our solar system, potentially indefinitely. And then third is the tough political question of making sure that small numbers of actors don't seize power. We've faced the threat of dictatorships many times, so that's at least a little bit more familiar and less out there than the first two topics I've discussed. But then the most important challenge, as I see it, is what I call the meta challenge of just having a good deliberative process over the course of this explosive growth period. So, how is this governed? Is it just that one country, like the United States, plows ahead, has a growth explosion within its own boundaries, and then calls the shots? Is there instead some international collaborative project? If so, what does that look like?
Are there ways of extending this period, slowing down the pace of such rapid growth so that we have more time to deliberate? That, I think, is the thing I'm most excited about working on, mainly because I think if you get that right, then you at least help significantly with all of these other specific challenges. And so it's the first thing I want to work on. As for whether I can help and make any good progress? I don't know.
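The "billion times" figure Will mentions for solar output can be sanity checked with a quick back-of-the-envelope calculation. The sketch below, in Python with standard approximate constants, estimates the fraction of the Sun's total output that Earth intercepts; the constants are rounded and the point is only the order of magnitude:

```python
import math

# Rough check of the claim that the Sun emits on the order of a billion times
# more power than actually lands on Earth, using standard approximate values.
earth_radius_m = 6.371e6          # mean radius of Earth
sun_earth_distance_m = 1.496e11   # 1 astronomical unit

# Fraction of the Sun's total output intercepted by Earth:
# Earth's cross-sectional disc divided by the full sphere of radius 1 AU.
fraction = (math.pi * earth_radius_m**2) / (4 * math.pi * sun_earth_distance_m**2)

print(f"fraction intercepted ~ {fraction:.2e}")            # roughly 4.5e-10
print(f"Sun output / Earth's share ~ {1 / fraction:.1e}")   # roughly 2e9, i.e. a couple of billion
```

So the ratio comes out at around two billion, consistent with the order of magnitude quoted in the conversation.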

SPENCER: Have you begun to see an inkling of what that might look like to have that kind of good governance that can help solve the other problems?

WILL: I'm certainly interested in a number of things. I'm interested in the idea of a structured pause. At the moment, there are people who are campaigning to pause AI. It's not totally obvious to me whether a pause now is good or bad. But something I'm very in favor of is a pause at the really crucial moment in time. What we could do is define a set of benchmarks, and perhaps use expert opinion, like a panel of experts, to delineate the start of the intelligence explosion, the point in time when AI is meaningfully automating AI research. That's the thing that really drives very fast rates of growth: you've got AI building better AI that can build better AI. And that will, by default, be quite a gradual process. Maybe fast, but gradual. It will also not seem too crazy; it'll seem kind of underwhelming, I think, before it's too late. So we want something that triggers at the point in time when AI research is going, say, four times faster than it otherwise would have because of AI assistance. That's the point at which we want to say, "Okay, things are about to get extremely wild, things are about to go very fast." Let's say the US is the front runner; they can say, "Okay, we are going to pause development of frontier AI for one month and hold a convention in order to figure out what exactly we do next over the coming few years." Any other countries who also pause at this point in time can attend the convention, and we'll try to figure out something that's mutually agreeable. I'm particularly in favor of this because pausing at that point in time means we would have a much better sense of how things are going to go over the course of the intelligence explosion. And secondly, we could also potentially benefit from AI assistance in helping us deliberate, because I think there are many ways in which advanced AI systems could help us be much, much better at reasoning and thinking and forecasting. Ideally, we would get as much of that sort of assistance as possible when making these huge decisions. So yeah, that's one idea I've been thinking about and working through a little bit.
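As a purely hypothetical illustration of the kind of trigger condition Will sketches, here is a minimal snippet. The threshold, the function name, and the notion of "research throughput" are all assumptions invented for the example, not anything proposed in the conversation:

```python
# Toy sketch of a "structured pause" trigger: pause frontier development once
# AI assistance is speeding up AI research itself by some agreed factor.
# All names and numbers here are hypothetical placeholders.

PAUSE_SPEEDUP_THRESHOLD = 4.0  # illustrative: "research going four times faster"

def should_trigger_pause(progress_with_ai: float, progress_without_ai: float) -> bool:
    """Both arguments are some agreed, benchmark-style measure of research
    throughput over the same period (e.g. verified results per quarter)."""
    speedup = progress_with_ai / progress_without_ai
    return speedup >= PAUSE_SPEEDUP_THRESHOLD

# Example: a panel estimates current throughput at 4.5x the no-AI counterfactual.
print(should_trigger_pause(progress_with_ai=45.0, progress_without_ai=10.0))  # True
```

In practice the hard part is measuring the counterfactual "without AI" rate, which is why Will couples the benchmarks with a panel of experts rather than a single automatic metric.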

SPENCER: To what extent does this line of research hinge on assumptions about how quickly AI progresses and how much effect AI has on economic growth?

WILL: I have been talking about explosive growth, but certainly in the early stages of what happens, it's not clear at all that it shows up in any way in growth statistics. So I do need to get my terminology better so it's less confusing on that front. I do think all of the issues that I'm planning to work on, the ones I think are important and should be worked on more, are important even if growth in tech progress just continues at the same rate it has for the past 100 years. We will, at some point, develop AI systems where we don't know whether they're moral patients or not, that is, whether they have moral status or not, and we will need to figure out what to do about that. We will need to figure out how newly valuable resources get allocated among different countries. So in that sense, the work kind of fails gracefully if we don't get something like explosive growth. However, explosive growth does put real extra urgency on it. And it is where I would like to focus my efforts, just because I think the chance of this happening over the course of the next 10 or 15 years is really much higher than one would like, and unusually high over the next decade or 15 years, because of the very predictable, very rapid increase in investment into AI.

SPENCER: So one way in which AI is unique is that we might be creating moral agents, like if AIs are eventually conscious, they could suffer. And that's a kind of very unusual situation to be in. Are there other ways you see the AI question as distinct or unique compared to other ways that economic growth could accelerate?

WILL: Yeah, there are some ways. One is that AI can enable you to make very long lasting, binding treaties. Suppose that you and I want to make a deal, and we want that deal to last for an extremely long time. We could have a kind of AI enforcer, especially assuming that we have solved, or got an adequate handle on, alignment, where we have aligned the AI with that goal, and we know that it will just abide by that agreement and follow it kind of indefinitely. AI also just makes space much more important. It's very hard for human beings to go into space; it's really not an environment that is well suited for us, to put it mildly. That's very different for artificial intelligence systems. But the biggest thing, I think, is just the sheer rate of progress that one could get. Other ways that you could imagine of boosting economic growth or boosting technological progress, I don't think they have this kind of recursive loop that could go anywhere near as quickly as could happen with AI. And yeah, that's kind of the biggest difference. And then lots of other things, even things that seemingly have nothing to do with AI, follow from that.

SPENCER: Three major risks of AI that people talk about are: one, that we can't control AI. We build something superintelligent, and we lose control of it. For example, the classic case of giving a genie a wish, and it technically satisfies your wish, but to your great detriment. The second is concentration of power, where these systems allow, let's say, one company to have 20% of all the labor in the world suddenly running as software, and you suddenly could have authoritarian governments that can monitor all the people with AI drones. And then the third is much more subtle but potentially very pernicious forms of harm, like increasing the spread of misinformation through AI bots, or algorithmic unfairness that gets executed by AIs, or faked imagery or video that confuses everyone about what's really true. And I'm wondering, this line of research, to what extent is it focused on these three different kinds of fears about AI?

WILL: Okay, so you had misalignment, concentration of political power, and, three, misinformation, bias, and so on. I think I would just add to those. One is misuse: use for biological weapons of mass destruction, for example. Another is these digital beings. And then a final one is the idea of lock-in. This is a little different from concentration of power; it's more the solidification of power and the solidification of particular moral views. It's pretty plausible to me, certainly more than 10%, that as a result of explosive growth, you end up with something that is in the direction of world government. I think that would be a real pivotal moment, because that would be setting, potentially locking in, a set of institutional arrangements that could persist for an extremely long time, maybe indefinitely. And I guess the thrust of what I'm saying is just: man, all of these things are going to be happening at once. We're going to have to be addressing all of these challenges at once, in a short span of time, while also having new conceptual innovations and intellectual developments and so on. So the meta thing, the optimal deliberation challenge that I'm thinking about, is: can we have governance systems in place such that we do better on all of these challenges at once?

SPENCER: Do you see the solutions to them being pretty interconnected, rather than we need really unique solutions for each of them?

WILL: Yeah. So I think the best argument against what I'm planning to focus on being the top priority is, "Look, don't go meta, just work on one of the most important things." And this has been true for alignment, for example: most people who are worried about AI are focused on misalignment, and some are worried about misuse. Maybe it's the case that, rather than trying to figure out governance at the high level, it's more impactful to just focus on the totalitarianism or dictatorship fear, or just focus on digital beings, or just focus on new weapons of mass destruction that we aren't even thinking about properly yet. That might well be the case, but I think it does seem like we potentially have levers that help on all of these things at once.

SPENCER: So it sounds like you're now focused on the effects of AI. Are you concerned at all that focusing on AI will kind of take over the Effective Altruism movement? Would that be a good thing or a bad thing, if it did happen?

WILL: It's certainly true that there's been, for many years, a rise of interest in AI and focus on AI. It's also true that AI is just a bigger thing in the world now. But I definitely don't think that EA should just become synonymous with concern about AI. I think that would be an enormous loss. There are certain core ideas underlying EA: scope-sensitivity, empathy for all potential beings that have moral status, that can feel happiness or suffering, and a really intense desire to figure out what's correct in order to help others as much as possible. These ideas are, as Peter Wildeford puts it, "ideas I want to protect." And I agree; I just think they're really important, and it would be a huge loss if they disappeared or faded away. It would certainly be a huge loss if transformative AI is a long time away, which might well be the case; it might even never come. AI that drives the sort of explosive growth I've described might also just never happen. And if you tell me that, then when it comes to growing a movement of people who are really concerned to help others, and really take that obligation seriously and want to act on it as best they can, I find it hard to think of anything more important. But even if you do assume that AI is going to be transformative in the next couple of decades, I think the EA mentality is just really important. Yes, it's now the case that misalignment risk is getting a lot more attention, and I'm hoping that AI safety, as seems to be happening, develops fully into its own field, so it feels less like just a subcomponent of EA. I think this is already happening. But like I've been saying, there are just many other issues too. And for many of these issues, they're exactly the sort of weird issues that don't have much attention on them, that need people with foresight and moral seriousness to work on them, people who can be impartial, epistemically impartial too, and really work things through. And these are exactly the sort of people that I think we often find in EA, in EA qua EA. So yeah, I think that even in those worlds, AI will impact many, many things, just as industrialization impacted many, many things during the Industrial Revolution, but that doesn't mean that misalignment alone is the only cause area. So, basically, whatever happens with AI, I'm feeling really quite positive about the renewed strength of EA and the development of EA going forward.

SPENCER: Given all that's happened, what do you see as sort of a unique role of Effective Altruism going forward? And what kind of unique impact do you expect it might be able to have, if it is able to achieve its goals?

WILL: No doubt at all, it has been rough, extremely rough, for EA over the last year and a half. In the media, on Twitter, it has taken an absolute pummeling, and some horrible things have happened. But I think the role that EA has to play is the same as before. It just is the case that, in the world at the moment, people in general, even people who have the opportunity to, don't think that much about trying to use their lives, whether that's their time or their money, to make the world better. Some people do, but most people don't, even those that have the option. Nor do people think in this careful, considered, scientifically oriented way about how they can actually use that scarce resource, which is well-meaning and well-motivated time or money. And it would just look really robustly good if the world were to change in that direction, if more people were to say, "Look, yeah, I am in a position of privilege. I want to use a significant fraction of my time on this planet to try and help others and to try and mitigate the huge problems that the world faces." And yes, EA has come under enormous attack. That was true for many other social movements and intellectual movements in the past too, including those that we think extremely highly of now. And so I would really like to see EA doubling down on its core principles and being willing to stand up for those core principles even in the face of attack or unfair criticism.

SPENCER: Will, thanks so much for coming on.

WILL: Great. Thanks so much for having me Spencer.

[outro]
