July 16, 2021
What is "the precipice"? Which kinds of risks (natural or technological) pose the greatest threats to humanity specifically or to life on Earth generally in the near future? What other kinds of existential risks exist beyond mere extinction? What are the differences between catastrophic risks and existential risks? How serious is the threat of climate change on an existential scale? What are the most promising lines of research into the mitigation of existential risks? How should funds be distributed to various projects or organizations working on this front? What would a world with existential security look like? What is differential technological development? What is longtermism? Why should we care about what happens in the very far future?
Toby Ord is a Senior Research Fellow in Philosophy at Oxford University. His work focuses on the big picture questions facing humanity. His current research is on the long-term future of humanity and the risks which threaten to destroy our entire potential. His new book, The Precipice, argues that safeguarding our future is among the most pressing and neglected issues we face. You can find him on Twitter at @tobyordoxford.
JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast. And I'm so glad you joined us today. In this episode, Spencer speaks with Toby Ord about the precipice, catastrophic impacts on the global population, existential risks and security, and longtermism.
SPENCER: Toby, welcome. It's great to have you on.
TOBY: Oh, it's great to be here.
SPENCER: So what is the precipice?
TOBY: Yeah, the precipice — the focus of my book — is the time period that we're currently in. Humanity has been here on Earth for at least 200,000 years so far. If we play our cards right, there's no reason why we couldn't remain here for hundreds of thousands or millions of years more, perhaps even the billion or so years that the Earth will remain habitable. But that entire future is at risk at the moment, because of humanity's escalating power over the last 200,000 years. In the 20th century, that power reached the point, with nuclear weapons, where we had the ability to destroy ourselves: the possibility that we could end not just our present, but our entire future. So I think that humanity has reached a precarious moment, where we've opened the door to this dangerous time, to mix my metaphors. We need to end this risky period.
SPENCER: I've heard some people talk about this with the idea that our wisdom needs to grow at least as fast as our technology. Otherwise, we're in deep trouble.
TOBY: That's right. Carl Sagan put it that way. The idea actually has a long history — and I've put it that way myself as well — that we often see exponential progress in many dimensions of technology as exponentially increasing power, but our wisdom, I think, has grown only falteringly, if at all. Obviously, if you push the metaphor too far, it's a bit hard to say what it means for our wisdom to grow faster than our power; you'd need some kind of units of wisdom, and so forth. But I think the idea does hold up. If you've only got powers that can wreak havoc on a local scale, then you only need to be able to govern at that local level. If you have power that can cause global havoc, then the interconnectedness of different countries, and some ability to manage at a world scale, becomes essential. And then there's power that could cause not just local challenges for a short time, but final events which would extinguish humanity, or perhaps a permanent collapse of civilization: some kind of existential catastrophe, with this key aspect that if it happens once, there's no way back. We'd need a lot of wisdom to get through our entire future without ever falling victim to one of these events.
SPENCER: Right. I think another way to put this is that if each year we accept some fixed probability of humanity going extinct, let's say 1%, then humanity will eventually go extinct with near certainty. After not too long, the probabilities compound. So in order to exist in the really long-term future, we either have to get those probabilities really, really low, or we have to have them declining, right? Maybe we could accept 1% for one year, but then it's better to push that down, or humanity doesn't have a very good chance of long-term survival.
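Spencer's point can be made concrete with a quick calculation (a toy sketch; the 1% figure is just his illustrative number, and the function name is mine): under a constant annual extinction risk p, the chance of surviving N years in a row is (1 − p)^N, which decays exponentially.

```python
def survival_probability(annual_risk: float, years: int) -> float:
    """Chance of surviving `years` consecutive years under a constant annual risk."""
    return (1 - annual_risk) ** years

# With a constant 1% annual risk, survival odds collapse quickly:
p_century = survival_probability(0.01, 100)      # ~0.37 after one century
p_millennium = survival_probability(0.01, 1000)  # ~0.00004 after a millennium
```

So even a risk that looks small in any given year leaves almost no chance of surviving a millennium, which is why the risk has to be driven down over time rather than merely held steady.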
TOBY: Yeah, that's right. So we've always been vulnerable to natural risks: things like the 10-kilometer asteroid that killed the dinosaurs, or maybe a super-volcanic eruption, or various other things. We know that humanity has survived for 2,000 centuries without falling victim to any of those. And that's not just a selection effect; we know that species typically survive for about 10,000 centuries. So the per-century risk from those natural causes can't be all that high. My best guess is that it's around one in 10,000 per century, meaning that one could expect to last about a million years.
SPENCER: And what risks does that estimate include?
TOBY: This includes basically risks from outside humanity. This includes the super-volcanic eruptions, which I actually think are the biggest known natural risk; asteroid impacts; comet impacts; explosions of stars, such as supernovae and gamma-ray bursts; and ice ages, if that were a serious threat, although we basically know that's not going to happen in the next 100 years. My focus is mainly on the next 100 years in particular. Pandemics are an interesting case, because there are naturally arising pandemics as opposed to engineered pathogens. But unlike these other natural risks, where, if anything, we're safer from them than we were in the past because we're globally distributed and not just dependent on one type of food or the ecosystem of East Africa, in the case of pandemics we've done a lot of things that actually could make them worse, such as this very quick interconnectedness, where it only takes about a day to get from anywhere to anywhere else in the world, and a number of other features. It's basically complicated: there are ways in which we're more protected from pandemics and ways in which we're more vulnerable to them.
SPENCER: Think how long it would have taken a pandemic to spread across the world 1000 years ago?
TOBY: Yeah, I'd be fascinated to find out more about that kind of modeling. We also have faster telecommunications now, so we can find out about a pandemic happening in China before it reaches somewhere else. I wonder what's happened to the ratio between our telecommunications speed and the speed at which a pandemic travels? It seems like that ratio might be the relevant thing. There's a whole lot of these interesting questions, like, "How long did it take for something to get from the first infected spillover case from animals to everyone that a pandemic would affect in the past?" and "How long does it take now?" I think these related questions are fascinating. But the reason I mentioned pandemics and brought them out separately is that they're a bit of a mix between a natural and anthropogenic risk: they're maybe naturally initiated, but anthropogenically mediated, or something like that.
SPENCER: Right, our behavior affects the speed of them you mean?
TOBY: Yeah, it affects the speed, but also the scope. Before we could cross the Atlantic Ocean, there were two separate domains of people: the Afro-Eurasian people (who were also connected to Australia) and the people in the Americas. That meant there was limited scope for a pandemic to actually destroy everyone. So that's another way in which we're more vulnerable than we were in the past, and the past might not give us as much reassurance when it comes to pandemics as it does for these other natural events. When it comes to the natural ones, we do know that the risk per century was low: something like one in 10,000, maybe lower. The risks since we developed nuclear weapons, I think, are substantially higher than that. Kennedy, after the Cuban Missile Crisis, estimated the chance that it would have turned into a full-scale nuclear war at between a third and a half. Maybe that's a bit high — I don't know exactly — but it seems somewhat optimistic to think that we could get through, say, 100 centuries like the 20th in a row with risk like that, without at least falling victim to some very serious global catastrophe, perhaps an existential catastrophe. So I think you're right in what you said earlier: you can think of it as a kind of risk per unit time, and if it stays at a certain size for long enough, then the probability that we continue vanishes. My best guess for the probability of an existential catastrophe in the next 100 years is about one in six, suggesting that we could only last, on average, about six centuries with risk that high. It's an unsustainable level of risk. What we need to do is bring that risk down, as quickly as we can. But even if we brought it down to 1% per century, you've still only got a time limit of about 100 centuries, which is much shorter than the amount of time we've survived so far.
You need to progressively keep bringing it down and down and down.
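The "we could only last about six centuries" arithmetic follows from the geometric distribution: with a constant per-century catastrophe probability p, the expected number of centuries until the first catastrophe is 1/p. (A toy model, of course; real risk isn't constant, and the function name is mine.)

```python
def expected_centuries(per_century_risk: float) -> float:
    """Mean of a geometric distribution: expected centuries until catastrophe."""
    return 1 / per_century_risk

e1 = expected_centuries(1 / 6)       # ~6 centuries: Toby's one-in-six guess
e2 = expected_centuries(0.01)        # ~100 centuries: even 1% per century is a hard cap
e3 = expected_centuries(1 / 10_000)  # ~10,000 centuries, about a million years (natural risk)
```

The comparison makes the point of the passage: even a "low" 1% per century gives an expected future of 10,000 years, far shorter than the 200,000 years we've already survived, so the risk has to keep falling.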
SPENCER: We're gonna dig more into the man-made risks, but I want to go back just for a second to the natural risks and double-click on a couple of things there. You mentioned that you think supervolcanoes are the largest such risk, which surprised me. Could you talk about that for a moment?
TOBY: Yeah. Maybe it would be less surprising if you found out that asteroids are just not that much of a risk. It's all to do with which one's larger than the others.
SPENCER: Right. I think that's the one people usually think about. They think, "Well, an asteroid killed off the dinosaurs," and so on.
TOBY: Yeah. An asteroid 10 kilometers across did kill the dinosaurs, but that was 65 million years ago. Our estimate for how often an asteroid of that size would hit is about one in a million per century — in other words, about once every 100 million years, which is very infrequent. Then, as it happens, we have scanned the skies; essentially all such asteroids have been found, and none are on a collision course with us in the next 100 years. The only way we could be hit by an asteroid like that is if there's one in a location we've had trouble scanning, such as directly opposite the sun.
SPENCER: There's no particular reason those would be more likely to hit us, right? The ones we have trouble scanning?
TOBY: No. It's just that, effectively, we have scanned about 99% or more of the sky, so that basically gets rid of 99% of the risk we'd have in a typical century. I think the chance of an asteroid of that size hitting us in the next century is about one in 150 million. The reason supervolcanoes loom large is mainly that asteroids and the other things are actually very low risk. A similar thing happens with supernovae and gamma-ray bursts: they're just incredibly unlikely — something like a one in 50 million chance of a near enough supernova to cause trouble in the next century, and a similar number for gamma-ray bursts. Whereas supervolcanoes — these are volcanoes that are not the cone-shaped ones that tower above the ground, but the ones that leave behind a caldera sunken into the Earth, like the Yellowstone Caldera — the biggest recent one was the Toba eruption. An eruption like that is still exceptionally unlikely to happen soon, but it's more likely than the asteroids or the other things.
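As a sanity check, the figures quoted here fit together (both numbers are Toby's; the variable names are mine): a residual risk of one in 150 million against a typical-century background of one in a million implies the sky surveys have removed well over 99% of the risk.

```python
base_rate = 1 / 1_000_000     # 10-km-class impact risk in a typical century
residual = 1 / 150_000_000    # quoted estimate for this century, after the surveys
fraction_remaining = residual / base_rate  # ~0.0067, i.e. about 1/150 of the risk remains
risk_retired = 1 - fraction_remaining      # ~99.3% of a typical century's risk removed
```

That ~99.3% lines up with the "scanned about 99% or more of the sky" claim, since the unscanned patch is the only place such an asteroid could be hiding.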
SPENCER: Do they have a large effect on the history of our planet?
TOBY: We're not sure. There was some very interesting evidence that the Toba eruption corresponded to a genetic bottleneck in Homo sapiens. It was thought that perhaps it reduced us to a very small breeding population from which we only just recovered, but more recently this evidence hasn't really held up. If anything, the track record is good news: early Homo sapiens were able to survive such a supervolcano. And the Toba eruption was a very large one; they can't get that much larger than that, as far as we understand it. The central estimate is something like one such eruption every 80,000 years.
SPENCER: On natural risks, there's one more thing I want to ask about, which is this idea of anthropics. Imagine that we lived in an area of the universe where there were supernovae all the time. Presumably, in that area of the universe, there wouldn't be intelligent beings that eventually built civilization, because it would get wiped out constantly. This gets at the idea that if you were in a place or a time where civilization just couldn't come into existence, then it wouldn't come into existence. That could skew the probabilities when we're thinking about how likely these events are. Do you want to comment on that?
TOBY: You're right that there are these observer selection effects: we could only find ourselves in a time and place where we could exist. Some of my colleagues have looked into this. For example, Nick Bostrom and Max Tegmark have a nice paper looking at the possibility of vacuum collapse: the idea that the vacuum of space is not the lowest-energy vacuum, and that it could spontaneously collapse, leading to a bubble expanding at the speed of light, converting everything within it into some new kind of universe, presumably destroying everything in the process. They were asking, "How do we know how small that possibility is? Maybe it could be really common, but we'd still see ourselves not having suffered from it." There's no way we could ever actually witness it, so how could we know the probability is small? What they did was ask when we exist in the history of the universe. They note that there doesn't seem to be much, say, stopping humanity from having evolved at 4.5 billion years into the Earth's history instead of 4.6 billion. Maybe there's good reason we couldn't have evolved 0.1 billion years into the Earth's history, because there's not enough time to do all the steps, but we could have been a little bit earlier, and that wouldn't be that unlikely. Then if you suppose that the events destroying all life were fairly common, let's say once every million years, then getting through 100 additional such million-year periods would be exceptionally unlikely — something like a one in two-to-the-power-of-100 chance that you'd make it through. So in that kind of world, you should expect to find yourself very, very early, about the earliest you could plausibly occur. That's the kind of argument for bounding certain kinds of risks, even ones that you could never detect if they happened.
You can use your date of occurrence, or also how long we've survived, as a kind of evidence, because you would be less likely to survive a long time in the worlds that have these events, even if you can never witness the events themselves.
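The force of this argument comes from how fast survival probabilities vanish when sterilizing events are common. A rough sketch (the once-per-million-years rate and the 2^100 figure come from the conversation above; treating arrivals as Poisson is my added assumption):

```python
import math

periods = 100  # a hundred additional million-year periods, as in the argument above

# Rough figure from the conversation: a one-in-two chance of surviving each period
rough = 0.5 ** periods             # ~7.9e-31

# Poisson variant: catastrophes arriving at ~1 per period gives exp(-1) survival per period
poisson = math.exp(-1) ** periods  # e^-100, ~3.7e-44
```

Either way the number is astronomically small, so in a high-risk world we should expect to find ourselves among the very earliest possible observers; our late arrival and long survival are therefore evidence that the risk is low.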
SPENCER: That's very interesting. I've also heard about people using a kind of near-miss analysis. Let's say the only reason civilization exists is because we just got really lucky with asteroids: normally asteroids destroy civilizations really fast, and we just haven't been hit by one. Then people sometimes argue, "Well, if that were the case, you'd expect a lot of asteroids that are just below the level that would destroy civilization," or something like that. What do you think of that kind of thinking?
TOBY: Yeah, it's good thinking, and it's good to be noticing these kinds of problems with the data collection. There are a lot of fascinating issues that come up to do with the fact that human extinction would necessarily be an unprecedented event: you can never observe it. So how do you get data? How does this change your methodology? As an example of what you're talking about, one could look at other time periods. If the event wouldn't sterilize all life on Earth, unlike the vacuum collapse, then you could ask, "What about 100 million years ago? Were there many more asteroid collisions than there are now, or collisions with bigger asteroids? Has there been a conspicuous, weird absence of big asteroids in recent times compared to earlier times?" I know that Anders Sandberg and others have looked into that; it's called an anthropic shadow. If you notice that there's been this strange lull in recent times, that could only really be explained by the fact that it's necessary that there's a lull there. I think the evidence was basically unclear; it didn't point strongly either way. Another thing you could do is look at the Moon. The Moon's surface is kind of like a picture of the level of asteroid bombardment in the solar system. Suppose physics was such that any asteroid hitting the Earth would destroy it: you would still see asteroid collisions on the Moon, even though you could never have witnessed any craters on the Earth, and so you could still work out how much of a selection effect was going on by looking at the Moon. So there are a whole lot of interesting ways you can use indirect evidence like that.
SPENCER: That's a good point. My favorite way of explaining anthropics is to observe that whenever you're at a museum and you look at the map, the "You Are Here" arrow is exactly where you're standing, which seems like a very weird coincidence.
TOBY: Yeah, there are a lot of interesting anthropic selection effects. There are interesting interactions with ideas like Copernicanism in science: this very general idea that we should think of humanity as typical. It was meant to be a kind of humbling reorientation of how we understand humanity, but it can go a bit wrong. For example, it's possible that we're on a typical planet — although that's probably not true, because most planets don't occur in the habitable zone around a star. So maybe we're on a typical planet within the habitable zone, but we're certainly not at a typical location in space: most locations in space are not inside galaxies, let alone inside solar systems, let alone on the surface of planets. So there are a lot of senses in which we're not typical. I think often the explanation is that it's selected for: it's much more likely to find life on the surface of a planet, because otherwise, why would we be here? So there's this interesting interrelationship between Copernican principles and selection effects. Basically, the cases where Copernican principles don't apply are the ones with very strong selection effects going on.
SPENCER: I've heard some physicists try to use anthropic reasoning to kind of explain parameters of physics and things like that. How do you feel about that approach?
TOBY: It's interesting, but it's beyond what I know. I gather that Martin Rees and Max Tegmark, two people who are pioneers of understanding existential risk, are also pioneers of this multiverse understanding, where one looks at these fine-tuning arguments: you notice that certain parameters of physics seem to sit in a very narrow range in which life anything like us could exist. Then there's the question of how you explain that coincidence. One explanation is that there are many universes realized, with all different combinations of these parameters; then, of course, we find ourselves in one where life is possible. That seems pretty plausible to me, but there are a lot of people who don't like it and think it's deeply anti-scientific. There are various interesting questions about whether this leads to testable conclusions, and if not, whether it's still a kind of rationally supported belief.
SPENCER: Okay, so we've discussed natural risk. Let's pivot now to thinking about technological risks. Do you want to break that down for us? What are the big risks you see, and how do you rank them?
TOBY: Yeah, the way I break this down is into current risks and risks on the horizon. In terms of current risks, there's the risk of nuclear war, there's the risk of extreme climate change, and I also include a catch-all for other forms of environmental destruction. The one people would perhaps find most plausible is some kind of mass extinction or ecosystem collapse, where we are losing species at an extremely fast rate — a rate substantially higher than it would need to be to count as a mass extinction historically. But we're only 1% of the way to the level of species loss that would classify as a mass extinction. So on the rate measure we're in a mass extinction, but on the level measure, maybe we're not. I think there are some interesting possibilities like that which we really don't fully understand yet, and which also could be current anthropogenic risks. Then among the risks on the horizon, I think the main ones are risks from engineered pandemics and risks from artificial intelligence.
SPENCER: On the ecosystem collapse, what would a scenario be there? Would it be, for example, too many fish start dying, and then as the kind of ocean sort of goes into this cataclysm of death, we find it much harder to support life on this planet?
TOBY: Yeah, I think the general worry is some kind of cascading failure: the Earth system is robust up to a certain level of perturbation, but beyond that, food chains are broken and things fall apart. I'm not really able to assess that. In general, what I found was that for a lot of the other kinds of environmental damage that have been suggested, such as resource depletion, if you look into them, it's actually very hard to see how they could be existential catastrophes. It is plausible that something of that kind could happen — we're certainly putting unprecedented stresses of various forms upon the Earth system, and it just wouldn't be shocking if one of these did us in — but certainly a lot more study is needed. That's why I have it as more of a catch-all category rather than getting into particular ones, because for the particular ones I've seen, it's hard to make a concrete case that they really are a risk.
SPENCER: I guess in order to be an existential risk, it would have to be that some species died out, and that would cut out other species, and so on, chaining all the way up until there's such food scarcity that the whole Earth can't support us, or something like that. It's a little hard to see how it kills all humans, although obviously it could cause enormous disaster.
TOBY: Yeah, it is an interesting question. You're right that in general, when looking at existential risk, one often has to set aside these other forms of unmitigated disaster to focus on the topic at hand. Clearly, if we destroyed almost all life on Earth and were living on a kind of barren planet, then even if we could survive and ultimately get off this planet and get back to a flourishing existence, it would be an unspeakably bad error. But one still does need to set those things aside if one wants to focus on existential risk and actually make some progress, rather than just looking at all bad things. That said, the idea would be this: if it was just a case of us going too far, it's a bit hard to see how that happens, where first we destroy 1% of the species and then 2% and then 3%. That's a kind of slippery-slope argument as to why we'd keep doing this beyond the point where it kills us all. Whereas if there was a version that was more like, once you sever enough links in these chains, they start to snap in unpredictable ways, it's easier to see how this could be our undoing. But you're right that if you try to trace it all the way down to "How did the last few humans die? What is actually going on here?", it is pretty hard to see. A good example: if you ask, "In 100 years' time, would we have the technology to have a self-sustaining base on Mars?", my guess would be yes, and I think a lot of people would say yes, it seems very plausible, particularly if we worked at it. If that's true, then a lot of the disasters we're considering wouldn't make Earth as uninhabitable as Mars is — in which case we presumably should be able to continue habitation on Earth, even at greatly reduced levels. So there's a kind of interesting tension there.
SPENCER: Well, as you point out, there are some absolutely horrible scenarios that don't kill anywhere close to all humans on Earth. So do you want to talk a bit about why you break out this category of existential risks? I think you define it to mean every single human dies — is that right?
TOBY: I define it to include that case, but it also includes some other cases as well, such as an unrecoverable collapse of civilization, or an inescapable totalitarian regime. What such situations have in common is that they involve the permanent destruction of humanity's long-term potential. There's a kind of irrevocability about them: they wouldn't just destroy our present, but our entire future as well. That gives them a number of special properties. It means they're of immense importance and significance, and also that we have to get through our entire future without ever once falling victim to one of these things; at the time we deal with each of them, we've never actually suffered from one, so we can't learn from trial and error, and a lot of related things as well. I think they're unique both in these extremely high stakes and in the methodologies required to deal with them. My formal definition is that an existential risk is a risk to humanity's long-term potential: a risk that it will be permanently destroyed.
SPENCER: That makes sense. It's sort of a special category that's both incredibly important and, as you mentioned, has unique analytical properties that make it worth breaking out. Then would you call the other group catastrophic risks? Do you have a definition for that?
TOBY: Yeah, the definition that often floats around for global catastrophic risks draws a somewhat arbitrary line: a risk that kills 10% of the people on Earth. At that level, it's unclear whether there have been any such disasters, any such global catastrophes — although there have been some things, such as the Black Death, and also the Columbian Exchange (where, after 1492, diseases spread into the Americas), both of which could have come close to killing 10% of the people in the world. It's clearly a very high bar. That's what global catastrophic risks often refers to. Some people are now using the term extreme risks to refer to things at the level of COVID or above. So there are a few different levels there.
SPENCER: Got it. Now, in terms of technological risk, you mentioned bio-engineering. Want to talk about that a bit?
TOBY: Yeah. We've always been vulnerable to these naturally arising pandemics. There are some hopeful thoughts in theoretical biology as to why it's very unusual for a pandemic to kill all of its hosts. If we look at the record of catastrophes: as I mentioned, the Black Death killed between about a quarter and a half of all people in Europe (which was getting towards a tenth of all the people in the world). The Spanish flu, or the 1918 flu, just 100 years ago, was the first really global pandemic, reaching all inhabited continents and even far-flung Pacific islands. We don't quite know how many people it killed, but the estimates are around 3% of the world's population — many, many more people than with COVID. So the worst disasters we know of on Earth are of a pandemic nature. But there's still some kind of reassurance there: why would a natural pandemic actually kill everyone? And even then, it's possible that even if it couldn't kill everyone, it could lead to some kind of collapse of civilization, and perhaps we couldn't recover from that. But when it comes to engineered pandemics, we have very rapidly improving power over these biological pathogens. If you remember, the Human Genome Project was, at the time, the largest scientific project ever undertaken. It took immense effort and time to sequence a single human genome, whereas now that can be done in less than an hour, for less than $1,000. So there have been amazingly rapid advances in our power here, which also includes DNA synthesis and creating organisms directly from their DNA.
So this power, of course, has a lot of upside in terms of improving biotechnology, but it also has this downside: more and more people become able to access these technologies. If we look at something like CRISPR, or gene drives, in both cases it was only two years between the first development of these technologies by the absolute world experts and the time at which students were using them in science competitions. As the pool of people with access gets larger, there's more chance that it will include some misanthropic person who wants to destroy everyone. Such people are not very common, but as you keep expanding this pool, you will find them, because they do exist. Then there are also concerns about nation states using bioweapons. That's a slightly different case, because they generally wouldn't want to kill everyone in the world, but there's the possibility that it would lead to that nonetheless.
SPENCER: Yeah, I guess the way I think about this is every year, it seems to get easier to make a bio-weapon. And furthermore, natural pandemics are not optimizing for destruction. If someone was actually optimizing for destruction and tried to create a bioweapon that causes the most chaos, possibly the most loss of human life, they may actually be able to do much better at that horrible goal than natural pandemics do, especially as technology continues progressing. Is that a fair summary?
TOBY: Yeah, that's exactly right.
SPENCER: Got it. I think the other technological risk you mentioned is AI. Before we go into AI, were there other ones you wanted to talk about?
TOBY: Yeah, I guess there's also nanotechnology, although I don't imagine that the risks from that are going to arise in the next 50 years.
SPENCER: Is it because the technology's not advanced enough?
TOBY: Yeah. I talked about risks on the horizon, and it depends how far away that horizon is. Maybe if I just stood at the top of a tall building, we could see engineered pandemics, and possibly AI, on the close horizon. Whereas maybe you'd need to get to the top of a tall mountain to be able to see the nanotechnology risks on the horizon.
SPENCER: When you see nanotechnology risks, are those things like the gray goo scenario?
TOBY: Yeah, that's the central possibility that's been talked about a lot, where you have self-replicating nano-machines. The world basically already has self-replicating nano-machines: bacteria. They can be very successful, so they're certainly proof of concept that you can make systems that are self-replicating, and so forth. We're increasingly understanding how they work and becoming able to engineer and modify them. We could perhaps create very different kinds of such machines that evolution can't create: if we created them out of different materials, using different engineering techniques, in the same way that no animals evolved wheels, there might be a whole lot of options to create things that can outcompete nature. There's also a question about why anyone would do that. Eric Drexler — a foundational figure in nanotechnology — thinks there's not much reason to create self-replicating nano-machines anyway. The main benefits you get from nanotechnology are in terms of vastly improved manufacturing capability, where effectively you make manufacturing things with atomic precision incredibly cheap. The only real costs are the designs. If someone wants a new laptop, they just download the new laptop design — maybe there's intellectual property on those designs, but maybe there are also some very good open-source designs — and then you can just print out your new laptop, which will be constructed out of standard feedstocks, basically just stores of the elements it's made of. You could get all of those benefits without creating microscopic machines; you'd basically create macroscopic machines with microscopic-scale manipulators on them.
SPENCER: The machines you're imagining could print out essentially any materials?
TOBY: Yeah, that's right. That's the dream. One of the problems with the dream is, again, proliferation. The things that you can print out would include various forms of weapons of mass destruction. Even something like atomic weapons — you might think they couldn't be produced by this, because you need uranium as one of the feedstock materials. It turns out that uranium is not that uncommon in the Earth's crust; the main issue is that you need to get the right isotopes, which requires isotope separation. What you could do is print out machines that do isotope separation. That's the kind of concern.
SPENCER: I see. So on the one hand, there's this concern about "well, what if you use this to make tiny replicating machines?", then hypothetically, "what if they outcompeted bacteria and things like that and spread over the whole world?" Now we have tiny robots everywhere, and that doesn't seem great. On the other hand, maybe these machines just make other forms of risk more dangerous, like they make it much easier to make a nuclear weapon.
TOBY: Yeah, I think that's the better way to look at it: they create increased instability. If we're thinking about this rise of humanity's power escalating — let's say the superpowers currently have the potential to perhaps destroy the world through nuclear war — we may end up in a situation where that level of power is had by even small terrorist groups. Basically, empowering all kinds of nations and groups in this way may lead to a very unstable world, if it turns out that offensive technologies dominate defensive technologies. There's a real issue there. We could hope that defensive technologies somehow get the edge and we break out of this, but you see the kinds of concerns.
SPENCER: I know that Drexler wanted to accelerate positional chemistry and these kinds of nano-manufacturing ideas. Do you view it as sort of a good thing that we haven't developed faster on this?
TOBY: Yeah, I don't know. I guess I just have very mixed feelings on this. I'm kind of happy that there's not an extra risk that I'm having to worry about, and that we can kind of punt on that one for a while. But we're presumably also missing out on a whole lot of benefits. It's a bit like the more general question about whether it's good that technological change is happening as rapidly as it is. Is it happening too rapidly or not rapidly enough? I think it's very hard to say.
SPENCER: So one topic that I think people might wonder about that we haven't really dug into is climate change. Do you see scenarios where climate change really ends life on Earth?
TOBY: I think that may be possible. It's an interesting situation here, where we currently don't even know if that's possible. It may not be — it may be that the physical climate system can't really produce the types of temperatures that would make the world uninhabitable, no matter how much carbon is burned. There have been models that look at things like this. For example, the most serious type of warming effect would be a runaway greenhouse effect, like the one theorized to have happened on Venus. The sun has been slowly brightening, and the idea with Venus is that eventually the increased solar radiation — the increased incoming sunlight — tipped it over into this runaway greenhouse effect. There are papers on this for the Earth. It currently looks like, no matter how much carbon was burned, that wouldn't happen here. Instead, you would actually need extra sunlight. It would happen in the future as the sun slowly brightens, but it wouldn't happen for this reason — which is good, because it means that the temperature wouldn't be driven up to near the boiling point of water or something like that. Since that currently looks like it's physically impossible — although we're not sure — the more likely alternatives are that there's some single-digit number of degrees of warming in Celsius. People often talk about, say, six degrees of warming being a very large amount of warming; perhaps there could be even more, perhaps even more than 10 degrees. One can then start to ask, "how could that actually destroy humanity?", and even at six degrees or 10 degrees of warming, it's still pretty hard to see how it could actually destroy humanity. Perhaps the most plausible versions are that it could lead to some kind of collapse of civilization, and then we can't recover our way out of that.
SPENCER: Right. This is sort of this one-two punch idea. Obviously, any kind of collapse of civilization is just terrible in its own right, but for the moment, we're focused on existential risks or unrecoverable risks. You could imagine there's some kind of really bad catastrophe that kills half the people on Earth or something like this, or destabilizes civilization, leaving it in a much less stable state. It makes us much more vulnerable to actually going extinct. What are your thoughts on that?
TOBY: It's interesting. I probably don't say enough in my book about multiple catastrophes occurring at the same time, or this kind of vulnerability argument. I'm a bit suspicious of the argument. If there's a bunch of low probability events, then you rely on two of them happening, and the probability of that is kind of like squared, right? If you're talking about, say, events that have something like one in 10,000 chances of happening in a given year, the chance that two of them happen in the same year ends up being really small. It's perhaps just more likely that one bigger event happens than that two of these smaller events coincide. I could have said more about that argument, and tried to delve into it to see what supporters of those views say in response.
SPENCER: Yeah, well, I guess some people would argue that the chance of global warming getting really bad is not that low. What do you think about that?
TOBY: Yeah, I mean, it's probably not that low per year — I was trying to give a per-year number, and many of these other risks are very low. If it's 1% per century, that is one in 10,000 per year, the number I gave. And while not many people think that the various global catastrophes that would destroy a slightly more vulnerable world have larger than a 1% chance in the century, it's still fairly gloomy to think that they do. The way I think of it is that we know that civilization is not that hard to establish. People often refer to the Fertile Crescent as the cradle of civilization. In truth, human civilization had many cradles: there was independent origination of civilization in North America, and in South America, and in Southeast Asia, and in China, and on the Nile — perhaps more places as well. There were at least five times where we established civilization, and they almost coincide in time. When you zoom out, they basically were all at the end of the last ice age, once it became more possible, the rivers started flowing, and so forth. So that gives us some evidence that it's not that hard to establish civilization if we lost it — even if we had such a deep collapse that we literally did not have civilization, there was no agriculture, there was no writing. Last time we were in that situation, it arose five times independently in different parts of the world. So I think we can be somewhat hopeful. But you could still wonder about situations where, maybe if there was a huge climate catastrophe with a very large number of degrees of warming, maybe we'd be collapsed to a much worse situation than we were in 10,000 years ago, where it is much harder, and then we don't get any civilizations re-establishing. It needn't be that there's another catastrophe that happens soon after; it could be that we just lose another 100,000 years as hunter-gatherers.
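The arithmetic in this exchange can be sketched quickly. The numbers below are the illustrative ones from the conversation, not real risk estimates, and the annualization assumes independent, identical years:

```python
# A risk of 1% per century corresponds to roughly 1-in-10,000 per year.
per_century = 0.01
# Exact annualization, assuming each year is independent and identical:
per_year = 1 - (1 - per_century) ** (1 / 100)
print(f"per-year risk: {per_year:.6f}")  # ~0.0001, i.e. about 1 in 10,000

# Toby's suspicion of "two catastrophes coincide" arguments: if each event
# has a 1-in-10,000 chance in a given year, the chance that two independent
# events both happen in that same year is roughly the square of that.
p = 1e-4
both_same_year = p * p
print(f"both in one year: {both_same_year:.0e}")  # 1e-08
```

The squared term is why he suspects a single larger event is more probable than a coincidence of two small ones.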
SPENCER: Just like the normal amount of time for species.
TOBY: Exactly. Or maybe we'd lose another million years. If we're not re-establishing civilization over that time, it doesn't really matter exactly how long we lost, from this viewpoint.
SPENCER: It seems like the Earth would have to be in really horrible shape for that to happen, like constant hurricanes or something like this.
TOBY: Well, I'm not sure. I mean, if we look at the history of humanity, we were around for something like 190,000 years before we established civilization, so we know that it's possible to go very long periods without doing that. That said, one of the reasons was that an ice age was holding us back. Another reason was that we were slowly accumulating more and more advances — it's not like nothing was happening during that time. We have a detailed history, or prehistory, of all of these innovations that humans were making over deep time, including a lot of very important breakthroughs. We tend to dismiss the Stone Age, but there were so many breakthroughs that we had, such as boats, enabling us to travel between continents, and clothes, enabling us to move into all kinds of otherwise uninhabitable environments. I think there's a nice analogy actually to be made between spacefaring and the diaspora of humanity across the Earth: the original ships played the same role as spaceships. Originally there was only a small part of Africa that was habitable to us, but clothing was kind of like spacesuits that let us actually go into these uninhabitable locations. We built shelters like houses, which enabled us to survive in places so cold that we would otherwise freeze, and so on. We tend to look down on these technologies that have existed since time immemorial, but they were very big breakthroughs for humanity. They were not easy to come by. We did a lot over that time. It was a slow accumulation process that finally got us to civilization, so that if we didn't lose those things, it shouldn't take so long again. I'm not sure — I think it's a fascinating question.
SPENCER: I mean, the technology was sometimes being lost during that process, right?
TOBY: Yeah, in some places, and technologies were also independently re-invented in different geographical regions. I don't know too much about the particular history of different places. I do think that there have been about 100 billion humans who've lived before us, and they really achieved great things — in terms of how they invented our languages, there must have been humans who first used adverbs, say, and who developed the various syntactic features of language, and so on. There's so much stuff that we just take for granted now that all happened before any written records. We don't know the names of the people who did it, but they still created this vast wealth that we have inherited.
SPENCER: I think a piece of skepticism that people might have is to say, "Well, sure, it might have taken a long time to develop all these ideas, and maybe 100,000 years ago you could forget one of the inventions. But today, knowledge seems more permanent — it seems hard to believe that, as a civilization, we're just going to forget a lot of what we know."
TOBY: Yeah, I'm very sympathetic to that. Ultimately, I think that some of these arguments based on the collapse of civilization are very much worth taking seriously. It's very plausible that more of the existential risk comes via the easier-to-happen event of a collapse of civilization than via extinction — though it also has to be a collapse that's not recovered from, and perhaps the collapse and the failure to recover, taken together, are still more likely than extinction. There are different ways of looking at it. Some people, when they talk about a collapse of civilization, are imagining that we lose industry or something — we're back at a point before the Industrial Revolution, although perhaps we still have the knowledge of how all these steam engines and machines work, or we still have the ruins of them, so we could accelerate our way back. In that situation, it's easier to see how a disaster could take us back to that point, but harder to see why it is that we wouldn't be able to recover. I tend to imagine a deeper collapse of civilization, where it becomes quite hard to see how civilization could collapse so far, but it's at least a little bit more plausible that we couldn't recover from something so deep. Ultimately, I'm sympathetic if people think that it's actually very hard to find a combination — of depth of collapse, and difficulty of recovering from that depth — that has high enough probability.
SPENCER: Okay, so we've talked about all these potential risks to civilization. How do we get out of this stuff? What is the solution in your view?
TOBY: Well, it's not easy. I think that a lot of the focus of the community working on existential risk has been to look at the risks one by one, and then to try to ask, "How can we deal with that risk?", "Is there a way of lowering the risk immediately?", or perhaps even solving that particular risk — for example, "can we find our way to a world without nuclear weapons, or to a biosecure world?" But there's still the issue of our creation of new existential risks. I think that with humanity's escalating power, we are creating more and more of these risks. I think the risk of this century is substantially higher than it was last century, and if we don't get our act together, it seems like it'd be higher next century. So as well as fighting fires and dealing with each risk as it comes up, I think that we need to get our act together, as I said, and start to make this a global priority, so that we could reach this point that I call existential security, where we've got risk low and we've also built up the institutions and norms to keep it low. Now, I don't think that's easy. It's especially difficult given how diverse the world is, and how there's no top level of governance in the world — we've got nation states and then various kinds of loose ways of coordinating nation states, such as through the UN. There is a big challenge in how to coordinate. But ultimately, these risks are not in any nation's interest. In fact, if you think of Maslow's hierarchy of needs and you apply it at the scale of our species, existential risk is something that threatens our very basic needs — our survival, on which all of our other flourishing is built. We previously hadn't had many threats to that basic level at the species scale. I think that as we start to understand it, and start to talk about it, this could become something of common agreement, where we actually do start to make large scale commitments to keep this risk low.
That's not something that's going to happen next year or the year after. But if we're thinking on the scale of centuries, I think it has to happen. Otherwise, sooner or later, one of these existential risks will get to us.
SPENCER: And so what would a world with existential security look like?
TOBY: The key thing is that we have to actually care about it — this has to be something that most people just agree with: of course we care about humanity's continued survival; that's a key priority. It has been suggested that we'll reach this kind of security via space travel. This is a common theme many people have discussed. I think that's not right. What happens with space travel is that if we were to settle other worlds, and to have self-sustaining independent settlements on other planets, we would basically be immune to uncorrelated events that could destroy a planetary civilization, such as an asteroid impact, because the chance that two of those happen at the same time on both planets is very low, and you could reseed the other planet from the one that survives. However, there are plenty of correlated risks. Some of the risk from pandemics would be of this form, especially if there was a deliberate attempt to destroy all the planets, and so would risks from war, and risks from artificial intelligence — which we haven't talked about much here, and which I'm sure your listeners have heard about from others. AI in particular seems like a very correlated one: the AI catastrophes that are usually considered look like they would apply to all such planets. If so, then space settlement is a good thing, because maybe it halves existential risk by getting rid of all of these uncorrelated ones. Suppose you go with my number of about one in six, and you halve it to about one in 12. Well, then you last for 12 centuries on average instead of six centuries on average, and that doesn't really fundamentally solve the problem. I think ultimately, what you need to do is get to the point where we take it really seriously. We make it so that it's just not really possible for groups to threaten the world — so it wouldn't be possible for countries like the US or Russia to have the number of nuclear weapons that they have.
But this gets harder if we develop new biotechnologies where very small groups, with only a small number of people involved, might have the power to destroy the world through biological means. Then there are questions about how you make sure that work on synthetic biology is vetted or surveilled enough, or that there's enough approval of people before they can get into it, in order to stop this. I think it does become very difficult. I also think it's probably the only way.
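The "one in six versus one in twelve" point above rests on a simple model: if each century carries an independent, constant catastrophe probability p, the expected number of centuries until catastrophe is 1/p (the mean of a geometric distribution). That modeling assumption is mine, used only to make Toby's illustrative numbers concrete:

```python
from fractions import Fraction

def expected_centuries(p: Fraction) -> Fraction:
    """Expected centuries until catastrophe, given constant per-century risk p."""
    return 1 / p

# Halving the per-century risk only doubles the expected survival time:
print(expected_centuries(Fraction(1, 6)))   # 6 centuries on average
print(expected_centuries(Fraction(1, 12)))  # 12 centuries on average
```

This is why eliminating the uncorrelated risks buys time but doesn't fundamentally solve the problem: the expected horizon grows only linearly as the residual risk shrinks.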
SPENCER: What would you like to see in terms of resource allocation? If humanity was acting in its own interests, what percentage of tax revenue or something would go to preserving the long-term future, in your view?
TOBY: I honestly don't know. It's not clear how quickly diminishing returns set in. At the moment, there have been various attempts to work out how much of our money gets spent on existential risks, and it kind of depends on how you slice it up. For example, quite a lot of money is being spent on climate change, but most of that is not being spent in a way that's really treating it as an existential risk or dealing with those scenarios. In fact, there's almost no study of scenarios of warming above six degrees, despite that being where most of the existential risk would lie. So it doesn't really seem quite fair to count all of that as money spent on existential risk. On basically any reasonable accounting of it, we, humanity, spend more each year on ice cream than we do on existential risk reduction. We're in a situation where we clearly don't have our priorities together. What I say in The Precipice is that it would be good to at least get to the point where we spend as much on this as we do on ice cream, and then take it from there — maybe something like a percent or a few percent of world product, something like that. Beyond that, I don't know. It's really unclear what we'd be spending all that money on.
TOBY: I really don't know if the answer is just to continue doing what we're normally doing. I'm definitely not claiming that this is so important that we should spend 90% of all resources on it or something like that, because I just have no idea whether that could be productively done.
SPENCER: Suppose that there was a substantial increase in funding. How would you see that funding allocated? What sort of things would it be used on?
TOBY: Well, again, I would say that I don't really know. At the moment, we're thinking on much smaller margins. The community of people looking at this is ultimately spending some small number of millions of dollars per year. Large foundations, such as Open Philanthropy, ultimately have many billions of dollars to spend, but are having trouble spending that money — finding opportunities that are good ways to spend it. We're currently thinking on the margins of how you could spend another million dollars. Whereas if we're instead asking how you could spend a billion dollars per annum, or perhaps much more, $10 billion per annum — that's just not the kind of question we've been asking. So I have little idea about how to do it.
SPENCER: What are some of the opportunities you're seeing on the margin for a million dollars here and there?
TOBY: I would say the main thing I see is research — there's just so little research done on these things. It's not a popular or fashionable area of academia. Ways of trying to actually build up these research communities and the credibility of the fields would be helpful, but Open Philanthropy are not blind to these things; they're trying to fill those funding gaps as well. It can be difficult to really see — but I should say that as well as existential risks, like particular risks such as asteroids or artificial intelligence, there's also something that I call an existential risk factor. That's something that is not in itself an existential risk, but if it happens, it makes existential risk go up. So for example, take whatever the total risk is over the next 100 years — my guess is one in six. Now, imagine what would happen to that risk if we knew that there was not going to be any great power war — so we knew that the US and China and Russia, none of them were going to go to war with each other over the next 100 years, nor any other great powers that might arise. The risk would be lower. It's very hard to know how much lower; my very rough guess was something like a tenth lower. Instead of being something like 16%, it would be something like 14 and a half percent. Maybe that sounds spuriously accurate — I just mean, roughly, that more than one percentage point of existential risk would drop away if you were in a world where you knew that there was going to be no war between the great powers. Maybe your numbers would differ, but I think something like that's pretty plausible. In which case, the current level of great power war risk is causing something like a percentage point of existential risk, which is more than almost any particular risk — it's far more than something like asteroids.
It means that maybe, instead of focusing on a particular risk, people should be focusing more on avoiding war between great powers. There are other examples one could give as well. Some of these existential risk factors also seem like things that could absorb more money: if someone asked, "Is it possible to reasonably use $100 million per annum on global peace and security efforts?", that sounds plausible — it's a larger field and probably has more absorptive capacity for that kind of funding.
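Toby's risk-factor arithmetic can be made concrete. These are his deliberately rough illustrative numbers (16% total, "a tenth lower" without great power war), not precise estimates:

```python
# Existential risk factor arithmetic, using the rough numbers above.
total_risk = 0.16              # illustrative total risk this century (~1 in 6)
fraction_attributable = 0.10   # rough guess: risk would be "a tenth lower"

drop = total_risk * fraction_attributable
print(f"risk without great power war: {total_risk - drop:.3f}")  # ~0.144
print(f"percentage points attributable to war: {drop * 100:.1f}")  # ~1.6
```

So on these numbers, great power war contributes more than a percentage point of existential risk — larger than many of the particular risks, such as asteroids, considered on their own.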
SPENCER: What about differential technology development?
TOBY: Yeah, so that's the idea that it can really matter what order we get these technologies in. Even if one doesn't slow down the risky technologies, it could be good to accelerate the protective technologies, such that we get them earlier. I think that's a great idea. It's a very general idea, and there's a big question about actually trying to find examples where one could play that game. One that I'm quite hopeful about is metagenomic sequencing, which is a new thing where there are new technologies by companies such as Oxford Nanopore. Basically, it sequences all the genetic material in a sample from a patient: it potentially sequences the human DNA in there, any viral DNA or RNA, any bacterial DNA, or that of other parasites. If we had such technology in a mature form, it looks like we could get to a situation where, for something like 100 pounds, you could take a sample, sequence everything in it, and match it to a database of known pathogens. In that case, you could imagine the healthcare system, whenever it finds a patient where it can't work out what's wrong with them, sending a sample off to a generic lab where they do this sequencing for something like 100 pounds. If that had been done in Wuhan, it would have come back saying "unknown viral pathogen, whose closest match is SARS-1" — something distinctly different from SARS-1, a new form of coronavirus in that family, similar to SARS. You would have been able to find that out within a day. So that is a very powerful technology, and it would also be very effective against biological warfare or other forms of engineered pandemics, because it'd be able to find out very quickly, once they started infecting people, that there was a new, unknown virus around, and give you its genome and everything very quickly. I think that's a very exciting example of a defensive technology that we should be accelerating.
SPENCER: Yeah. I imagine there are other ones in different domains — like in environmentalism, there's carbon capture technology, which seems hard to do anything bad with; or in AI, maybe technology for explainable AI, to better understand what AI systems are doing, not to make them more powerful, and so on.
TOBY: I think that's right. It's kind of interesting to look for patterns here, because — well, I was thinking of the same example for AI. It's similar to the bio example in that they're both forms of reading rather than writing, or something like that — information gathering rather than action in the world. I wonder if there are various patterns like this, where one could take these kinds of lenses and apply them in a whole lot of different fields, to try to find the things that are more protective or defensive, rather than the things that are more aggressive or a heightening of our powers.
SPENCER: That's good. I like that. A lot of what we've been talking about is kind of gesturing around this idea of longtermism. Do you want to introduce to us what longtermism is, and why we should care?
TOBY: Yeah, longtermism — there are still questions about what the precise definition should be — is a way of seeing our moral duties, or the world, where we take the long-term future of humanity really seriously. We take the fact that our actions have effects on people in many generations to come as a serious consideration when we try to decide what's the right policy or the right action. In many cases, the effects on people in the distant future might be almost impossible to determine — we can't really say much either way when it comes to assessing an action — and I think that's the common reason people give for just ignoring this. The usual approach, in economics and in moral philosophy, is just to ignore long-term consequences. But there are some cases where we take actions that pretty obviously have very long-term consequences. Existential risk is a very clear-cut example: if we were to destroy humanity in our generation, that would destroy all generations to come. Extinction is the clearest example — there would be no way back from that. This irrevocability makes it easier to see what the long-term consequences would be. You still might not know exactly — maybe an even better species would arise after humanity, and it would actually be a good thing — but it still seems that, on balance, we'd expect that not to be true. If we had the possibility of certainly causing human extinction, we would not take it, and that shows that we're not very convinced by the argument that something better would likely happen. So we can get predictable long-term consequences from certain things that have lock-in effects or other things like that. And there might be other things as well — it might not just be existential risk. For example, economists like Tyler Cowen make a kind of longtermist case for accelerating economic growth.
Because if you imagine drawing this exponential curve of rising income over time, growing at the world growth rate, and then you imagine adjusting that growth rate so that it's faster, you find that those distant times get much, much wealthier than they would otherwise be. This has cascading effects over time. Maybe if you have a more realistic model that's not exponential growth but some kind of S-curve, where we reach some kind of technological sophistication plateau at which it's not really possible to get materially wealthier, then it looks less dramatic — but it still moves things: it makes the time at which we reach that plateau happen earlier. So it could make a really big, macroscopic difference to people over the future, and it could also be fairly predictable. I think the main way this ceases to be predictable is that accelerating our prosperity may well increase existential risk, so there's an interesting trade-off question there. But it's an example of how you could have somewhat predictable long-term effects of our actions, even in cases that aren't trying to achieve that through reducing risk.
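The compounding effect behind Cowen's argument is easy to see numerically. The growth rates and horizon here are purely illustrative (my choice, not figures from the conversation):

```python
def income_after(years: int, growth_rate: float, base: float = 1.0) -> float:
    """Income relative to today after compounding growth at a fixed annual rate."""
    return base * (1 + growth_rate) ** years

# Two centuries at 2% vs 3% annual growth:
slow = income_after(200, 0.02)
fast = income_after(200, 0.03)
print(f"2% growth: {slow:.0f}x richer; 3% growth: {fast:.0f}x richer")
print(f"ratio: {fast / slow:.1f}")  # the faster-growing world ends up ~7x wealthier
```

A one-percentage-point change in the growth rate, held for two centuries, leaves the distant future several times wealthier — which is the sense in which a small adjustment today has large, somewhat predictable long-term effects under the pure exponential model.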
SPENCER: So some people argue that the best thing we can do right now is focus on the problems we have, because there are so many problems in the world, so many bad things happening, so much suffering, and these are concrete things that we can immediately try to make progress on. And other people say, "Well, but the future is so vast — so many possible human beings in the future, so many potential lives at stake, a thousand or a million times more than there are today, potentially, if civilization survives — that the bulk of what matters is just trying to nudge the probabilities of the future going well." How do you think about those two arguments?
TOBY: I think there's a whole lot that you could say on this kind of near-term versus long-term question. I think that there are certain kinds of arguments based on urgency or concreteness that don't work. I could see that if the longtermists were saying we need to create heaven on earth, we need to build this utopian paradise, then maybe the opponents could say, "Well, there's no actual urgency on that." Or perhaps if you said, "We need to discover everything that's knowable" — that would be a fantastic achievement, maybe one that can't even be understood just in terms of the welfare of individuals; it's an amazing thing humanity could do — but it doesn't matter exactly when we do it. But when it comes to existential risks that happen on our watch, that is urgent: if we don't deal with the risks of the next 30 years, say, no one else is going to be able to. This was very clear when the risks of the Cold War were looming over people. It reached the point where it was just very intuitive to them — the largest protest in American history at the time, in the 1980s, was a huge rally in Central Park against nuclear weapons. Similarly, the largest protests of our time, the climate marches, are again about something that is widely thought to be an existential risk, even if it's somewhat unclear exactly what the chances of it destroying humanity are, and a lot of the people who are marching are doing so on those grounds. I think that we could agree that that is an urgent issue, even if the damage of climate change may take a long time to actually eventuate — the issue has to do with passing various points of no return, and there is an urgency in dealing with things before we do that. So I think this can be just as urgent, even though it's a bit less obvious at first how that is.
There are perhaps better arguments — maybe arguments that turn on really big effects with very tiny probabilities are fishy in some way. I'm sympathetic to that. My best response would be to say that if you zoom out a bit and think about this from the perspective of humanity — what should humanity, as in all the humans, be focusing on? — it seems that since we've reached this time of very heightened risk, a substantial focus of humanity should be on making it through this very dangerous time, or perhaps on getting us out of the danger. That seems very plausible, in which case a substantial number of people have to go do that.
SPENCER: So yeah, I would like to divide longtermism into what you might think of as near-term longtermism and long-term longtermism. Near-term longtermism, which I think is the one I'm most sympathetic to, is this idea that right now, in the next 10 to 20 years, we face great risks, whether it's the risk of nuclear war, or bioterrorism, or potentially even risk from artificial intelligence (that one's much more speculative). If we think these things actually pose a very substantial risk in the next, let's say, 20 years, there seem to be a lot of concrete actions we could take on them. Compare that to what you might think of as long-term longtermism, which asks, "Can we influence the world 1,000 years from now?" There, I think, it becomes much harder to think about what the effects of our actions are, how we could ever model which things we do today will influence the world, and in what ways. I'm curious to hear your reaction to that.
TOBY: Yeah, I think I agree with that. The longtermist aspect of what I'm thinking about here is that the beneficiaries of our actions will exist, or at least could exist, over an extremely long span of time. There are species that have lasted for over 500 million years, which is an immense amount of time, and I think there's every chance that humanity could last for 500 million years, until we reach the point where the Earth may no longer be habitable. I think we could survive beyond that around other stars, because we'd have ample time to have perfected the technologies of space travel required. So what motivates me is thinking about this vast future ahead of us, but that doesn't mean that I'm trying to micromanage that future. I do think there are serious challenges in how you actually take an action that predictably influences that long-term future. That's why I think existential risks that would strike, say, in the next 10 or 20 years are really elevated as an area: there's a kind of urgency in dealing with those things, because if we don't deal with them, then no one else can. It feels like it's really our duty. So I kind of agree with that, if you want to call it near-term longtermism, or urgent longtermism, or something like that. I will say, though, in defense of the longer-term view: on my estimates, for what they're worth, there was something like 1% existential risk in the 20th century, and I guess around 17% this century. I'm saying the risks are substantially greater now than they were back then. If so, then maybe the best thing the people facing the largest existential risks humanity had ever known, back in the early days of the Cold War, could have done would have been to try to build up the communities and resources for our century to be able to deal with the much greater risks that we'd be facing.
That's not implausible, although it turns out it's difficult to make a community that lasts into the next century. So I'm not saying it necessarily would have been the best approach, but it was a plausible one. If some of those people had tried to focus their resources on that, maybe it would have been a good idea for at least some fraction of those who really cared about these issues.
SPENCER: Yeah, if you believe that institutions don't just by default rot after 100 years. Obviously, there are some institutions that have survived much longer than that, like Oxford University, although I think we've seen that a lot of institutions just can't exist that long and still be efficient, or keep doing the same thing that they used to do; they will drift over time.
TOBY: Although maybe that describes Oxford University as well. I mean, Oxford University was founded before the Aztec Empire, which is partly because the Aztecs were way more recent than you might think. Oxford was certainly not founded before the Maya.
SPENCER: Yeah, and it's an interesting question whether the founders of Oxford would even recognize universities today, or view them as even trying to achieve the same goals they were trying to achieve. I really don't know.
TOBY: Yeah, probably not. I mean, it was initially very focused on theology; the philosophy was in service of the theology. So they might not recognize a lot of it. I guess they would feel an interesting mixture of shock that certain things were going on, but also be impressed that other things were going on. I don't know; I'd like to hope that they would be overall impressed. But you're right, it's very difficult to have long-run institutions, or to gather wealth and resources over the long run in order to gain these compounding interest rates and then effectively donate it to people trying to do good in the future. I think such things might be possible, and I find them a really interesting approach, one that I want to keep an eye on. But my best guess is that the best way of fulfilling this kind of long-term perspective on ethics is by focusing on existential risk, at least at the moment, at least until we've really saturated that area.
SPENCER: Toby, this was super interesting. Thank you so much for coming on. I know you have a new book out, too. Do you want to mention it? Where can people find your book?
TOBY: Yeah, it's called "The Precipice: Existential Risk and the Future of Humanity". You can find it anywhere. It was a strange thing to have a book come out right when the pandemic struck, such that bookstores were closed when the book first arrived. But these days, that doesn't really stop anyone getting their hands on it, or you can listen to me reading it in the audiobook version.
SPENCER: Great. Thanks, Toby.
TOBY: Thank you very much.