CLEARER THINKING

with Spencer Greenberg
the podcast about ideas that matter

Episode 224: The path to utopia (with Nick Bostrom)


August 25, 2024

Why do there seem to be more dystopias than utopias in our collective imagination? Why is it easier to find agreement on what we don't want than on what we do want? Do we simply not know what we want? What are "solved worlds", "plastic worlds", and "vulnerable worlds"? Given today's technologies, why aren't we working less than we potentially could? Can humanity reach a utopia without superintelligent AI? What will humans do with their time, and/or how will they find purpose in life, if AIs take over all labor? What are "quiet" values? With respect to AI, how important is it to us that our conversation partners be conscious? Which factors will likely make the biggest differences in terms of moving the world towards utopia or dystopia? What are some of the most promising strategies for improving global coordination? How likely are we to end life on earth? How likely is it that we're living in a simulation?

Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, along with philosophy. He's been a Professor at Oxford University, where he served as the founding Director of the Future of Humanity Institute from 2005 until its closure in April 2024. He is currently the founder and Director of Research of the Macrostrategy Research Initiative. Bostrom is the author of over 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014). His work has pioneered many of the ideas that frame current thinking about humanity's future (such as the concept of an existential risk, the simulation argument, the vulnerable world hypothesis, the unilateralist's curse, etc.), while some of his recent work concerns the moral status of digital minds. His most recent book, Deep Utopia: Life and Meaning in a Solved World, was published in March of 2024. Learn more about him at his website, nickbostrom.com.

JOSH: Hello and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Nick Bostrom about the potential of AI-driven utopia and the search for meaning in a post-instrumental world.

SPENCER: Nick, welcome.

NICK: Hey, Spencer.

SPENCER: I don't know about you, but I've found that there are a lot more interesting, vivid descriptions of dystopias out there than of utopias. Do you agree with that?

NICK: Yes, I think that's true. I wonder what that says about us humans, but it does seem to be a pattern. Most people could probably rattle off a bunch of different dystopias — Brave New World, 1984, The Handmaid's Tale — but the average person would have trouble, I think, naming even a single utopia.

SPENCER: Yeah, it's interesting, because it seems important to figure out where we want society to go, not simply what we want to avoid.

NICK: Yeah. And it gets worse; once you actually look at the utopias that have been attempted, by and large, you wouldn't actually want to live in any of them. They often seem too pat, or they become a little bit like a plastic toy or something cloying. And so it's actually hard to think of a true utopia. What's easy is, you could imagine, if you take the current world and you imagine some small improvement, that's easy enough; like the current world except without childhood leukemia seems just better. But once you start to add a whole bunch of different improvements, eventually you get to the point where it no longer seems attractive. It's almost like plastic surgery. You could imagine, if somebody has some big wart or something on their face, you would probably improve their appearance by removing the wart. But if you just keep going with more and more supposed improvements, you kind of enter Michael Jackson territory.

SPENCER: Yeah, one thing I've observed that's relevant to that is that people seem to agree much more on what we don't want than on what we do want. Everyone can agree that we want less illness, we want less poverty, we want people to have fewer issues when they're growing up, etc. But then you say, "Okay, what do we want society to be like?" Do we want everyone to be frittering around and having intense pleasure all the time? Do we want people to still be working hard to achieve their goals? Do we want lots of little microsocieties that are completely different? That's when people seem to really start to disagree.

NICK: Yeah, but it's not just that we disagree with each other. It can be hard, even as an individual, if you try to conceive of a utopia that would be super attractive, even just to you yourself. That is already a pretty difficult challenge. I think more so if we consider not just a few rearrangements in the current condition — like a little bit more money and a little bit better medicine — but more profound transformations of the human condition, such as that we might attain technological maturity.

SPENCER: Why do you think it's difficult for people to think of utopias that even they themselves would think are ideal? Is it because people have multiple values, and as you try to build a utopia, those values come into conflict, or some other reason?

NICK: I suppose our self-conception and our goals that we have and the applied values that we pursue are conditioned on our current location. We have various projects going on, things we're trying to achieve, but all of that is based on where we are now. And so if you imagine a very different situation where none of that applies, it might seem that we are a little bit at sea, and it doesn't relate to us, and it seems almost like we are contemplating somebody else's life. That could be something, that we are anchored in our immediate surroundings and our current predicament, and anything that takes us too far out of that is also losing ourselves and our life, in some sense.

SPENCER: One thing you talk about in your book, Deep Utopia, is that the regular world imposes these constraints on us and gives us these things to strive for. And then suddenly, if we imagine a world where production levels are so high that we really don't need to strive for anything, we lack those constraints, and it leaves us out at sea.

NICK: Yeah, I have this metaphor of the insect with the exoskeleton that holds the squishy bits together and gives it shape. Analogously, the various instrumental necessities that we face in our lives, and that all humans so far ever on this planet have faced, it's always been part of our environment that there are a whole bunch of things we need to do. You need to make money to pay the rent, you need to brush your teeth to maintain a healthy mouth, you need to do this, that and the other. Our lives are lived inside these constraints. Just as the insect has an exoskeleton, our souls have a kind of exoskeleton of these instrumental necessities. And if we one day attain a post-instrumental condition, as I call it, then there is a question of what happens to our souls. Do we just become contented pleasure blobs, or would there still be something that gives structure to our psyches and our lives?

SPENCER: It seems that if you go back a few hundred years and you ask people what utopia looks like, there's going to be a lot of focus on just material abundance: wouldn't it be wonderful if we just had enough food to eat all the time, and great, comfortable shelter, and it was always the right temperature, and so on. And yet, to a shocking degree, many people in the world have those things today. Not everyone, of course; there are still people living in poverty. But many people live this way, and yet, it still feels very far from utopia.

NICK: Yeah. It seems like pretty much every different culture has some notion of this land of plenty, the Land of Cockaigne. There are different names for it, but I think if you imagine some medieval peasant living under great material deprivation, with backbreaking labor from morn to dusk, imagining a condition in which there was plenty of food and you could rest as much as you wanted, a continual feasting would already be enough to just give you a, "Wow, that's like a fantasy." But now, I think that wouldn't really count for most people. We already have fridges stuffed full of food. And although we are not yet liberated from all economic work, some people are. And even the ones who are not, are now having holidays and weekends and stuff. We're almost struggling with the opposite end of this — the boredom or lethargy or lack of purpose that can come from having it too easy. That creates a different source of misery and need.

SPENCER: I'm not certain this story is true, but a friend of mine went and visited a tribe, and through a translator, she was asking them about their afterlife. The way that the translator explained their afterlife was that, basically, it's just like normal life, except the cows give more milk. [laughs] And I thought that was so surprising; it's that things are just a little bit more comfortable.

NICK: Yeah. Now, if you did end up in this afterlife with a lot of milk, then you might dream in that life, what would be after that life. And then you might think, well, the trees also give a lot more fruit, and the fruits are sweeter. And then, you could repeat this, and then, maybe your bed is softer and more comfortable. But if you iterate far enough in that direction, then you do eventually get to this notion: I call it a "solved world" and even a "plastic world." These are two different concepts I introduce in this book, Deep Utopia, which describe and characterize a much more radically profound and, in some sense, improved condition that we might attain, I think, perhaps not long after the machine intelligence transition, if things go well. And I think this really forces us to confront some pretty fundamental questions in philosophy: what ultimately gives value to life, what gives meaning to life, and questions about other values. Some values are easy; clearly, we could have a lot more of them in this condition. But there are also some other values that seem to be at risk of being undermined in such a "solved world."

SPENCER: Yeah, you mentioned this question of, what do we do when we have abundance? And an interesting place to start there is to look at what we have actually done as we've become more and more productive. In theory, people could work a lot less because we've got all of this food, we've got all this comfort. Why are we working so hard? So what historically has happened as production has increased?

NICK: We do work a bit less, so we've taken some of our increased productivity out as leisure. Some of that is, we have longer childhoods and education, but we also have weekends off and we work shorter hours, and maybe work less hard than the peasants did, and then longer sick leave and maternity leave and paternity leave, and then longer retirements. But we've only taken out some of our increased productivity for leisure; most, we have taken out for increased consumption. John Maynard Keynes, the famous economist, wrote this essay almost 100 years ago now where he predicted, at the time, that a century hence people would work way less, because he extrapolated from then-recent increases in productivity and thought, if that continued another hundred years, we would be four to eight times richer, and we would scarcely need to work, and what would we be doing all day long? Now, as it turns out, we do work maybe 25% less or something, but overall, greed has triumphed over sloth, and so we just spend more, and a lot of that is on positional goods. So you buy more and more expensive luxuries to try to one-up each other, and the fancier the clothes and the cars that your neighbors have, the more you have to spend to keep up or overtake them. So there is an element of zero-sumness to our consumption rat race. That's where a lot of our increased economic wealth has gone, I think, in addition to absolute improvements and also some increases in leisure.
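
To make the "four to eight times richer" extrapolation concrete, here is a minimal sketch of the compounding arithmetic behind it; the specific annual growth rates below are illustrative assumptions chosen to land on those multiples, not figures quoted in the episode.

```python
# Back-of-the-envelope check of a Keynes-style extrapolation: what annual
# productivity growth rate, compounded over a century, yields roughly 4x or 8x?
# The rates below are illustrative assumptions, not historical data.

def growth_factor(annual_rate: float, years: int = 100) -> float:
    """Total multiplication of output after compounding an annual rate over `years`."""
    return (1 + annual_rate) ** years

for rate in (0.014, 0.021):
    print(f"{rate:.1%} per year for 100 years -> {growth_factor(rate):.1f}x")

# Prints approximately:
# 1.4% per year for 100 years -> 4.0x
# 2.1% per year for 100 years -> 8.0x
```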

SPENCER: It's funny because, with positional goods, you could spend an unlimited amount of money on them, because it's all just relative.

NICK: You could have the billionaire with his megayacht, and then there's another billionaire. Both of them want to have the biggest yacht in the world, but it's impossible, no matter how much economic growth there is, for them both to have their preferences satisfied. So if you have a 200-meter long yacht and somebody else builds one that is 210 meters long, then you need to upgrade yours to get 20 meters more, and that could keep going. So a "solved world" is not defined by the idea that all preferences can be satisfied or that people wouldn't want more money. I'm sure some people, at least, have unlimited appetites. But that doesn't imply that people would still be working because, although there might be a desire for more, it doesn't mean that you could get more by working yourself. If AIs can do everything better and cheaper and more efficiently than humans can, there would just not be any demand for human labor. Now, I think there are some exceptions to that but, as a broad pattern, I think you could have something approximating full unemployment, if AI succeeds.

SPENCER: How linked are these kinds of arguments to AI in particular? Do you see a way that, one day, humanity could get to utopia even without superintelligent AI?

NICK: In principle, yes, I think you could get most, or perhaps almost all, of the same affordances without advanced AI. You could imagine, for example, there are a lot of intellectual tasks that need to be done in the economy and that humans now do. Maybe given unlimited time, you could develop software for each of those tasks; you would figure out how to automate it using non-AI techniques. But I think realistically, if we do get there, the way that we will get there will be through AI. It just seems a lot easier to build one AI that can learn to do all these different tasks than to make a specific software program for every particular task.

SPENCER: In your book, Superintelligence, you analyze risk from AI in great depth. But I think for this conversation, let's assume that AI doesn't go horribly wrong, like it doesn't kill all humanity. Let's say it's still under the control of humans to a reasonable degree. Even with that assumption, it seems like people may be very worried that, if AI started performing as well as humans at almost every task, does that leave anything for humans to do, and what kind of society does that look like ultimately?

NICK: Yeah, I'm ultimately optimistic about the possibility of having a really desirable and wonderful type of existence in a "post-instrumental world." But I do think it does go through some significant challenges. The book isn't trying to sell utopia. It's not an argument for, "Oh, here is how great it would be." It rather tries to think through the full implications of what it would mean to have a "solved world." And then there are — at least at first sight — quite disconcerting implications of that: would it really be attractive to live in this world? It seems some values at least would be potentially undermined here. And I do think ultimately, the kind of existence that would make sense here would be significantly different from our familiar human existence. It might require us to give up some values or reconsider them in a fairly fundamental way.

SPENCER: So what are some of those values we might have to reconsider?

NICK: Well, take purpose. Some people think that it's important for a human being to have purpose in their life, and that life is diminished if it lacks purpose. Now, if you think of purpose as something that is worthwhile that you have to exert effort over some period of time to try to achieve, and that draws on your different capabilities, it's, at least at first sight, not clear that that would still exist in utopia, inasmuch as — at least for a very wide range of different things you might want to achieve — there would be shortcuts. It would be easier just to ask your AI system to do them. And so, future lives might have less purpose and meaning; it might seem that there is nothing that we are needed for. A lot of people today take a kind of pride in being a breadwinner or in making a positive contribution to society at large perhaps. Or at least on a smaller scale, they think they benefit their family in some way by being around. Or, at the very least, for themselves: there are various projects you can undertake to try to educate yourself, or get fit, or redecorate your home to get a nicer home, all of these little things, little projects, that we are engaged in. And that's a big constituent of what we're up to, of our lives. If all that went away and it was all just on tap, it might seem to create a kind of purposeless, meaningless existence.

SPENCER: A friend of mine said to me fairly recently that she still prefers talking to me about her problems than talking to an AI, but that's not true of all of her friends. So there's some friends who'd rather talk to the AI. And there was something a little scary about her saying that. It's like, "Well, okay, so maybe next year, she'll prefer to talk to the AI than talk to me, right?" [laughs] And then you start to think, well, if you really push this thought experiment forward, and you say, well, what if, really, there's nothing you can do better than the AI, what does that mean for all the things you care about trying to do in your life?

NICK: Yeah. So that would be one instance. If other people right now need you in different ways — whether it's for conversation or support, or they're children or relatives or friends — and if AIs have just become better as social companions, that would remove one way in which we can be practically useful and of help and value to other people. And then if you generalize that, then you start to get a sense of this challenge. The book doesn't try to shy away from that. It dives straight into this and tries to take that on in its full force.

SPENCER: So how do you think about starting to have a great utopia, even if it's going to fundamentally be different from what we are used to trying to do in our lives?

NICK: Well, if we start from an empty slate, then one can consider what values we could add there in utopia... I should first say, although the book doesn't dwell much on this, one thing that is super important in the actual all-things-considered situation is the opportunity to get rid of a whole bunch of negatives that currently plague the human condition and, indeed, the animal condition as well. I think that already alone — just getting rid of all the bad stuff — would possibly be reason enough. But let's set that aside and just consider what positive values could exist in utopia. First, we have pleasure. Now, I think it is very easy to dismiss that as, yeah, sure, they could have some super drug and be blissed-out junkies, or maybe direct forms of brain manipulation, and to kind of sniff at that. I think actually, that alone is a much more serious and possibly attractive proposition than a lot of people would give it credit for. And I think the people who are very down on this form of hedonism might easily change their mind if they actually got to sample some of this pleasure that would be in the offing at technological maturity.

[promo]

SPENCER: People seem to find pleasure more palatable when it's linked to bigger things. Like, there are a lot of people who would say, "Well, if all that life consisted of was the most base pleasure — like the pleasure you might get from doing heroin or something like that — then maybe that feels worse. But if (let's say) it was like viewing beautiful art, but it was so beautiful that you were in this incredible awe state, maybe that would be more appealing."

NICK: Yeah. This is one thing that we can add, that I think we should add, but it is just worth dwelling, very briefly at least, on just the raw pleasure itself. It's intellectually uninteresting and trite, but it might ultimately be the most important single thing, although I do think there are additional elements that we can add. And so, you could combine the pleasure — as in positive hedonic tone — with experience texture. So this would, for example, be the appreciation of beauty, as you mentioned. And you could take pleasure in contemplating beauty, or in understanding truths, or in admiring goodness. So you don't just have the pleasure, but you also have some other maybe more complex and rich mental content attached to the pleasure. And already, that seems perhaps a lot more appealing and, indeed, some traditional conceptions of heaven consider it as being a state of contemplating perfect goodness in the form of God and then experiencing love and happiness as a result of that. But the experience is the conjunct of the thing being contemplated, and then an emotional response to that. Both of these things clearly would be possible in a "solved world" to extremely high degrees: extremely high degrees of pleasure, and also extremely clear and strong and sophisticated or well-targeted forms of contemplation or other mental content. So those two, we can put those in the bank; you have pleasure and you have experience texture. And we can then ask, can we add further elements to these things? One thing we don't yet have is any type of activity or pursuit or purpose. At the moment, we have blissful experiences of contemplating various things. But you could then think, why wouldn't we also be able to engage in various types of activities if we think various forms of activities are intrinsically valuable? You could create artificial purpose. We already do this today, like when we play games, we set ourselves some goal, some arbitrary goal, and then we try to achieve it. Like, you want to try to get this golf ball into a sequence of 18 holes using only this little inconvenient implement, a golf club. These are random goals you just set yourself. And then once you have these goals, then you can engage in the activity of golf. Similarly, in utopia, people could set themselves arbitrary goals, and that would enable the activity of pursuing those goals. Importantly here, you would include in the goal constraints on how you are supposed to achieve it. Just as if you set yourself the goal of doing well on the golf course, it's part of the goal that you're only supposed to use a golf club to propel the ball, as opposed to picking it up with your hand and placing it sequentially in each hole. Similarly, in utopia, there would be all these technological shortcuts you could use to achieve a particular outcome. And so you would have to bake into the goal the idea of achieving the outcome only using a certain limited set of permissible means, if what you want is an activity that you yourself have to engage in.

SPENCER: You can't just tell the AI to do the thing for you. That defeats the purpose. [laughs]

NICK: Yeah, but you just make that part of this arbitrary goal you adopt. So then we can add one more element. We have pleasure, we have the experience texture, and now we also have artificial purpose that allows us to engage in various forms of activities that could be intrinsically valuable. And already, I think, it starts to seem a lot richer; a future of contemplating beauty and playing games while greatly enjoying ourselves doing these things seems maybe at least more attractive to many than being this blissed-out pleasure blob, like a junkie enjoying some super drugs, sprawled out on a flea-infested mattress in a dark room, where there's nothing else going on except the pleasure itself. Now we already have something more, and it's starting to look better, I think. And I think that we can even add a little bit more to this.

SPENCER: So what element would you add next?

NICK: Well, some people might think, although you could have these artificial purposes, it's still not quite the same as real purposes. Like, you don't really need to have the ball go into each of these 18 holes or to defeat the boss monsters in this computer game. They are fake purposes in some sense, as opposed to now, there's a whole bunch of stuff in the world that actually needs doing. And so you might think it can add value to life if it has realer or more natural purposes — purposes that exist, not just because we arbitrarily, just ourselves, made up some goal. And now, could we have such things in utopia? Well, I think yes, there would be some opportunities for this. Perhaps the easiest way to see it is if you imagine you have two people, A and B. And let's suppose that A just happens to have a preference that B's preferences be satisfied, like you care about the other person. And then if now, B happens to have a preference that you do a certain thing on your own, then now you have a real purpose. If you want to actually achieve your goal of satisfying B's preferences, the only way you can do that is by doing this thing yourself that B wants you to do.

SPENCER: So essentially, because we have preferences that other people do specific things, not just that they cause those things to occur by asking their AI agent to do it?

NICK: Yeah, so you could even give a purpose to a friend by adopting the preference or goal, or setting yourself up in such a way that you will be more happy if they do a certain thing. Once you've done that, then they have an actual real purpose to do that thing if they care about you and your preferences. In this particular reductionist case with A and B, it's a bit hokey perhaps, but I think subtler versions of this are actually quite common, in that, we have various shared cultural commitments and commitments to (say) various traditions, for example, that we might just want to uphold, and those traditions might call for us ourselves to engage in various forms of practice. And the tradition wouldn't count as having been continued or honored if what instead we did was create a bunch of robots who went around performing the rituals or whatever. In order for the tradition to successfully have been continued, for certain types of tradition, it may require our own involvement.

SPENCER: An example of this, I think, that can be poignant here, is, imagine you're writing a speech for someone's wedding. If you were to just ask an AI to write it for you and put no effort into it, and the AI spits it out, even if it's like a really good speech, there's a way in which it feels like you didn't satisfy the obligation. And whoever asked you to speak at their wedding might be disappointed that you didn't actually put your own thought into it.

NICK: Yeah. And I think these are fairly ubiquitous actually. And more broadly, I suspect that there are a whole bunch of aesthetic reasons for doing various things that would come into view in this condition of a "solved world." And if you think about it, the aesthetic significance of something often depends on who did it and why they did it and the means by which they achieved it, as opposed to just the particular artifact that results. And so our lives might become more like artworks, and to achieve a particular expressive content of those artworks, it would, in many cases, call upon us to do things on our own steam. More broadly, I have this notion of quiet values or subtle values. I think there might exist a whole bunch of these that are more or less obscure to us currently. Just as during the day, when you go outside, you don't see any stars, but it's not because they're not there; it's because the sun is there, and it's so much brighter. Similarly, in our current lives, there are these pressing instrumental needs, horrors going on in the world — things we have to do in our own lives, or various kinds of catastrophes happen — these are the louder values. So we need to fight injustice and prevent pain, all these things that fill our conscious minds. We don't see what I think is there, which is a constellation of these subtler values. But if all these kinds of pressing, urgent moral needs were one day all taken care of, then I think this richer canopy of subtler values would potentially come into view. And just as during nighttime, our pupils expand to take in more light, so in this future, it would be appropriate for us to become attuned to place more weight on these subtler reasons for doing stuff, including aesthetic reasons, broadly construed. And subtler things like honoring your ancestors or upholding traditions or achieving various kinds of aesthetically beautiful shapes in your life and in the way you relate to other people, could just constitute a larger portion of reasons for doing stuff. They might be really important; it's just that their importance is up-calibrated, because the things that are currently important would no longer be there, so it would make sense to care more about these subtler things.

SPENCER: It seems that a lot of meaning we get in practice comes from our relationships with other people and that, even if we're in a situation where there's nothing that we can really add because there's agents that can do it better than us, we could still have deep relationships that could be very fulfilling. How does that work into your vision of utopia?

NICK: I think a lot of these purposes that could remain in a "solved world" would broadly arise from socio-cultural entanglements. Other people, not just other individual people but also other cultural phenomena and commitments and our participation in traditions and communities of various sorts: to the extent that there are still natural purposes remaining, I think a lot of them would come from that source. It is not completely obvious how people will choose in this respect, so I'd like to anchor it a little bit more to the here and now. We have these increasingly capable social companion bots that are being, or will be, created in the near-term future. We already have the chat bots, but you will have more multimodal ones, with maybe visual avatars and voice. You can already see the beginnings of this, and I imagine some of these might become very compelling to some people. It might just not have time to play out if AI timelines are really fast. But if you imagine AI frozen in its current state — or a couple of years from now, and we have that level of technology — would people start to spend more and more of their time interacting with these social artificial intelligences, as opposed to real humans? And would that be good or bad? The first answer that pops into most people's minds would be, it seems to be bad; it's much better to spend time with real people. I wonder whether that's one of these generational things. So us old fogies are like, "Oh, no, you gotta spend time with real people. That's how we did it." But if a new generation grows up with this kind of stuff, maybe they'd just think we have these hang-ups. It's easy to form an opinion but it can be really hard in situations like that to form an opinion that reflects more than just your own idiosyncratic upbringing and personality, and that actually goes down more to bedrock of value. That's a really hard evaluation to do.

SPENCER: I don't know if others will find this compelling but, to me, it matters a lot if those I'm interacting with are conscious, in other words, that they're actually having internal experiences. For example, they're feeling pleasure and pain and so on. If I was talking to AIs that were not conscious, that, to me, would seem to sap a lot of the meaning out of it.

NICK: It might be that, after a while, if you were in the habit, you would forget about this question. Some people think consciousness doesn't exist; there are eliminativists about consciousness, philosophical views which hold that it's a confused concept and actually nobody is conscious, and that we should just get rid of the very notion because it's a philosophical confusion. I think they still then get on with their lives pretty much in the normal way. I'd imagine that, eventually, people would just settle into something that was not very tightly coupled to some abstract philosophical belief. Of course, it's also possible that some of these digital companions are conscious, or will be conscious. You could have artificial persons that are conscious. It's just that they have maybe been designed to be more optimal as social interaction partners and, therefore, more compelling. Just as some people are annoying and tiresome and full of themselves and puffing empty air and irksome, others are more charming and compelling and really wonderful human beings that you want to spend time with. Similarly, some of these AI companion bots might just become much better in the same ways that some humans are better at being friends. This is maybe a good time again to remind the listener that I'm not advocating specifically for this. I'm not trying to sell a particular vision here; I'm just trying to look at what it actually is and take it on. I obviously also understand the aspect of this that seems repellent, the idea that we would have these highly optimized AI companion bots that we would spend all our time with instead of interacting with human beings, that there is at least an initial kind of 'yuck!' reaction to that. But I want to not just stop at that initial 'yuck!' reaction but dwell in the discomfort and then see if one can understand precisely why it is and whether ultimately it makes sense, or whether it's just a kind of prejudice.

SPENCER: Changing tacks a little bit, assuming that humanity continues advancing AI, it gets incredibly advanced, and we're able to keep it under control, what do you see as some of the concerns about how it could lead to a utopia versus a dystopia, and what we should be thinking about there?

NICK: There's a whole bunch of practical difficulties between where we are now and attaining anything like a "solved world." We have the alignment problem, of course, then various versions of the governance problem, and also the problem of the ethics of digital minds, that we want the future ultimately not just to go well for human beings, but also for other morally considerable beings that we will share the future with, hopefully, including animals and maybe some of these digital minds, whether because they are conscious or they have other attributes that give them various forms and degrees of moral status. That's a lot there. The book just brackets all of that in order to actually get to the point where we can ask the question, "What then?" Because I think, at some point, somebody should probably ask that, even though most of our time should be focused on making sure we actually get there, as opposed to destroying ourselves beforehand.

[promo]

SPENCER: So the alignment problem is really about getting AIs to do what we want and not do things we don't want. How would you describe the governance problem?

NICK: Well, it's a broad category. One aspect of it is making sure that we humans don't use these AI tools — even if we assume they are aligned — for bad purposes. All kinds of other very general purpose technologies that have been developed have been used both for good and for bad. So you have people engaging in warfare, some people oppressing other people, all kinds of mischief. And so with AI — a similarly very powerful and very general tool — there are all kinds of opportunities for misuse, for conflict and for oppression, and for other types of malfeasance. And so, broadly speaking, you might think of the governance problem as how to ensure, at least, that the preponderance of uses are positive. It also interacts with the alignment problem, in that, you might potentially need various forms of governance, regulations and oversight to ensure that the alignment problem gets solved in time and that alignment solutions get implemented; these interact in various complex ways. It's a little bit arbitrary how you break these out, but I think of the alignment problem as a technical problem that people who are good with math and computer science need to figure out by being clever. And then the governance problem is more like a broader political challenge, where it's not so much that there is a clever little answer somebody comes up with, but it's more like a continuous effort by many people to try to achieve a more benevolent and cooperative condition in the world. And then the third, I call "the ethical problem of digital minds"; part of that is philosophical and computer science-y: figuring out which minds actually have what kinds of morally relevant attributes. But it then also quickly becomes a challenge for human empathy and ethics, and ultimately, for governance to ensure that whatever we figure out regarding the question of how these minds ought to be treated actually also gets implemented in practice. I would say that it's not a book that has the structure of: here is a thesis, here is the argument for the thesis. It's more designed to be an experience, to try to put the reader into a position to think for themselves, seriously and deeply, about these questions with the right kind of attitude and open-minded form of curiosity and benevolence. Because ultimately, if things go well, I think somebody somewhere will need to make up their mind about what we actually want for the future. And it's a really hard deliberation, and you would hope that they don't just take out some cached thought, or do some off-the-cuff thing, or project onto the future some random little feature just being a function of their current level of neurotransmitters. And I think it's possible that the answers people might give might depend quite a lot on how they come at this problem, the attitude with which they enter this deliberation. And so the book, ultimately, has a secret purpose: if some group — whether it's some people in an AI lab, or a government, or some humanity-wide deliberation process — ends up undertaking such a deliberation, it would be good for there to be something to read in preparation for going into it. And I'm hoping this book will (a) equip them with certain concepts and put various questions more clearly into focus, but also (b) help prepare a certain kind of attitude of benevolent generosity and open-mindedness and playful contemplation that I'm thinking is likely to make that deliberation go better than some of the alternative attitudes with which one could come at it.

SPENCER: Well, hopefully, if we start approaching utopia, the people involved will read your book, and it will stir some important reflection on what we actually want utopia to be like. All right, before we wrap up, let's jump into a rapid-fire round. I'll ask you a bunch of questions to try to get your relatively short answers, although they're going to be complex questions. So first question for you: you wrote this book, Superintelligence. Since you've written it, a lot has happened in the AI world. We've seen large language models like ChatGPT. We've seen many different breakthroughs in AI. And I'm wondering, have those developments in technology changed your view considerably about AI, or do you stick to your guns on what you wrote in Superintelligence?

NICK: Yeah, in broad brushstrokes, I think it's held up really well, and I haven't changed my mind, except there's more granularity, and we can see more specifics about the particular shape. Like the idea that current AI systems are very anthropomorphic — very human-like, with human-like idiosyncrasies — is a bit surprising. The idea that, to get the best out of one of these large language models, you almost have to give it a little pep talk sometimes. "Think step by step. This is really important. My job depends on your answer." You actually get the AI to do better by doing that. That would probably have seemed a bit ridiculous ten or 20 years ago. That's like anthropomorphizing the AI. Yeah, that's where we are. I'll stop there, because you wanted short answers.

SPENCER: Some of what you write about, both in Superintelligence and in Deep Utopia, involves global coordination, which seems like something that the world struggles with. What do you think are some of the most promising strategies for improving global coordination?

NICK: Well, one, if the world ends up being a singleton — which is this concept I have of a world that is coordinated at the highest level — perhaps the most likely way for that to happen is through the AI transition, and then either one actor gets enough power to just impose itself on the world, or maybe post-AI technologies allow for easier ways of coordinating and solving coordination problems. But probably what you meant to ask was more like, what can we push on today to improve global coordination? And unfortunately, it's quite hard to find some really high leverage things in that space. There are little bits and bobs here and there that one can point to, perhaps, but probably if there were an easier way, it would already have been done.

SPENCER: One potential means of global coordination that you've written about is the idea of a single state actor that controls the world globally, maybe monitors everyone and everything to make sure that really dangerous technology isn't used. Obviously, when people hear about that kind of idea, it's terrifying; it sounds like a one-world dictatorship. Do you see ways of preventing dangerous technology that don't involve such close monitoring? Or do you see ways of involving such close monitoring that don't come across as so authoritarian?

NICK: Yeah, it's funny. I had this paper, "The Vulnerable World Hypothesis," some years ago, and one concept introduced there, I called it "the freedom tag," which is a kind of surveillance device. Imagine a kind of neck bracelet people wear that records everything they hear, with omnidirectional cameras, all continuously uploaded to some sort of freedom offices or whatever, which are like the government. So some people have accused me, "He's so ominous. He's advocating the freedom tag." Obviously, I named it 'freedom tag' on purpose, to really emphasize its Orwellian character. Now it is fairly plausible, however — whether it's good or bad — I think there will be more and more transparency and ability to surveil what people are doing in more and more detail, and also eventually what people are thinking. And already, current AI technology, I think, would be able to do a lot here. So for a couple of decades, it's been possible to record everybody's phone conversations, and everything they write on their social media, et cetera, and government security organizations like to do this. But so far, the only way you've been able to use it is if there is a particular person of interest that you could then assign a human analyst to read through what they have said and written. But now, even with current AI tools, I think, you could do mass analysis — analyzing what everybody is saying and writing, and therefore what they are thinking about the government, and do sentiment analysis and stuff. And you probably could, with current or very near-term technology, get pretty accurate results from that. And then you could imagine coupling that to some sort of social credit score system that would penalize people who express wrong things. Like, you could have an AI that digs through everything they've ever said and done to try to dig up some dirt or some bad thing they said decades ago, and then reduce their reputation accordingly. And so the potential in this technology, I think, already exists for a social equilibrium to emerge where there is much less obscurity and forgetfulness, and where it's possible for one system to see and have fine-grained ways of differentiating its reaction to everybody. I think I don't need to say that there are obvious ways for that to be dystopian, but maybe there are also some ways for it to not be dystopian and good.

SPENCER: [laughs] Well, I don't want to leave people thinking that you're pro-authoritarian, single state government — unless you are — so could you just paint momentarily, what's a good version of that where we're all monitored all the time?

NICK: Well, think of somebody living in a small community, like a kibbutz or a little village. There are probably a lot of attractive features about that. And people probably knew a lot about what each person was like and what they said, and so it doesn't have to be a totalitarian nightmare. If you imagine this scaled up, maybe you would have a world free of crime, a world where you can't be a fraud and a jerk who goes around ripping off or taking advantage of and exploiting one person and then moving on to the next, because the track record would be obvious for everybody to see, and so there could be stronger incentives to do kind and pro-social things. It's very hard. We don't have the kind of political or social science that allows us to make very clear predictions about what happens if you change some of these fundamental parameters of the collective information system. For what it's worth, my gut level attitude is, I'm more on the side of the punks, like, "Oh, these individuals stick it to the man and the apparatus and the state bureaucracy." That's like, "Yeah, hurrah for that." But if I step back and reflect on what would actually make the future go best, I probably don't really know. It's really hard to tell. So I'm more on the agnostic end of that, I think.

SPENCER: You mentioned the vulnerable world hypothesis, this idea that, every time we develop a new technology, it's like drawing a ball from an urn. Sometimes it's a white ball, where it's a good technology; it helps the world. Sometimes, it's a gray ball; it has a mix of good and bad aspects. But maybe sometimes, it's the black ball that ends life on Earth. Obviously, we haven't drawn a black ball yet; we haven't ended life on Earth. But we might. And what I'm wondering is, what's your best estimate of how likely we are to draw black balls? Is there anything we can say about that, or is it just too hard to possibly know?

NICK: Well, it looks like AI timelines are fairly short. So if there's got to be a black ball, it probably comes out of something enabled by intermediate levels of AI, like maybe some bio-weapon design AI tool that could, before we get superintelligence, allow the world to get destroyed. Or some sort of social-dynamics-disturbing application of AI, like with some of these surveillance or automated propaganda types of things. Those would be the most likely bets. I don't know exactly what the probability is. AI itself is interesting because it's slightly different from other existential risks, in that, it is also something which, if it goes well, could protect us against a whole host of different existential risks. And even determining exactly what counts as an existential catastrophe is difficult with respect to AI because, with a lot of other things, the world blows up, there is nothing after, and it's pretty clear it's an existential catastrophe. With AI, it's more like there is a spectrum. The world gets radically transformed and, on the other side of that, what exactly exists and how valuable is that? We presumably don't want to insist on there being human beings in exactly their current forms, running around on this planet for millions of years, doing the same old human things. That itself would seem a little bit of a letdown, I think. On the other hand, if it's a paperclip maximizer, maybe we think that also fails to realize a lot of the potential for value. But this is to say that the concept of an existential risk has a value component as well as a descriptive component. And in the case of AI existential risks, in particular, it seems the value component becomes particularly prominent.

SPENCER: You came up with this very influential idea, the simulation argument, which essentially argues that we might be more likely to be living in a computer simulation than most people acknowledge. I'm wondering, have your probabilities changed over time of how likely you think we are to be living in a simulation? And where do they sit right now?

NICK: Well, I tend to punt on giving an actual number. Many have asked but none have been answered so far. I guess maybe it crept up a little but not much. I think for other people, it might be reasonable to increase their probability in the simulation hypothesis. If you think about "The Simulation Argument" — I think the paper was like 2003 or something — I circulated it a few years before that. At the time, I think it might have been a bigger imaginative leap for people to conceive of a level of technology that would make it possible to create realistic computer simulations with conscious beings in them. I think the decades of technological progress since then should make it easier. Virtual realities are higher resolution and people play these immersive computer games. That should just make it easier to imagine how, if you continue to make progress, we would eventually get to something super realistic. And then with AI as well, it just seems like a short step from where we are now to where we would actually have digital minds that are fully as sophisticated as humans. And so there's less opportunities where you could hop off the train between where we are now and where the capability exists for running ancestor simulations than there was back in the early 2000s. And so, in that sense, I think it would make sense for probabilities to creep up a bit as well.

SPENCER: Are there any concrete ways that you behave differently or live differently because of this possibility we're living in a simulation?

NICK: I think maybe a greater sense of humility with respect to the ultimate things. To contrast it, take the other extreme, some kind of archetype, like the classical atheist, materialist-evolution, Richard Dawkins type. There's a fairly well-defined inventory of the world and where we are in the world. We are on this planet, there's a bunch of stars, we began this long ago, then we die. And when we die, we rot, and that's the end. All of those would seem like pretty confident implications of that kind of worldview; whereas, if you take "The Simulation Argument" seriously — and the simulation hypothesis in particular — it suggests it could very easily be the case that there are many more possibilities. There might be a lot more in the world than is dreamt of in this naive scientific picture. There could be other simulations, there could be a basement reality under the simulations. There could be whole hierarchies of simulators who designed this. There could be afterlives of various kinds. There are a lot more possibilities, conditional on the simulation hypothesis. And so, in a sense, we know very little about that whole space of possibilities. I think it induces a kind of epistemic humility that also can then translate into a more almost-spiritual humility, a sense of our own smallness and how much we are in the dark with respect to the ultimate things that shape our ultimate destiny.

SPENCER: Final question before we wrap up: you're sometimes described as utilitarian, although I think that you don't actually identify as one. How would you describe your metaethical views and your views on questions around, 'are there objective moral truths,' and so on?

NICK: Yeah, I don't have a good label. And in general, I always struggle with labels. They seem very confining, all these -isms that people love to subscribe to, I never really... It always seems a bit of a strain to me. I tend to think in multiple sorts of superpositions. Maybe eventually they collapse and I get some more conviction on particular views. But in my recent attempts to articulate a kind of metaethics, I wrote this paper, "Base Camp for Mount Ethics." It's not a proper paper. It's more like some thinking notes so it might not be useful to anybody. It's kind of obscure, but it's an attempt to outline one direction that I'm thinking in, in terms of metaethics. I also like the idea of a moral parliament that I came up with. This is the idea that, when you face some practical moral problem, rather than (say) pick the most probable moral theory that you can think of and do what it says, you should instead, as it were, assign probabilities to different moral theories. And then you imagine that these moral theories each get to send delegates to an imaginary parliament, and the number of delegates they get to send is proportional to the probability you assign to the theory. And then you imagine these delegates of the different theories deliberating and bargaining and compromising under ideal conditions. And then you should do what this parliament recommends that you do. The idea here being that, even a moral theory that you think is less probable, but that happens to care particularly intensely about some matter, might get its way in that case, in return for conceding to other moral theories in other cases that it thinks are less important. And so you could, I think, get a greater level of wisdom and lower propensity to fanaticism by thinking in terms of this moral parliament model. It's really more like a metaphor than a formal model, but that's better than just picking your favorite moral theory and running with it.
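
As a rough illustration of the proportional-delegate idea described above, here is a minimal sketch. The theories, credences, and preference intensities are made-up placeholders, and it only models intensity-weighted voting, not the deliberation and bargaining between delegates that the parliament metaphor envisions.

```python
# Minimal, hypothetical sketch of a "moral parliament": each moral theory gets
# delegates in proportion to your credence in it, and each delegate's vote is
# weighted by how strongly its theory cares about each available action.
# All names and numbers below are illustrative placeholders.

credences = {"theory_A": 0.6, "theory_B": 0.3, "theory_C": 0.1}

# How strongly each theory favors each action (higher = more preferred).
preferences = {
    "theory_A": {"act_1": 1.0, "act_2": 0.9},   # mildly prefers act_1
    "theory_B": {"act_1": 0.0, "act_2": 1.0},   # cares intensely about act_2
    "theory_C": {"act_1": 0.5, "act_2": 0.5},   # indifferent
}

TOTAL_SEATS = 100

def parliament_choice(credences, preferences, seats=TOTAL_SEATS):
    # Allocate seats proportionally to credence, then tally intensity-weighted votes.
    tallies = {}
    for theory, credence in credences.items():
        delegates = round(credence * seats)
        for act, intensity in preferences[theory].items():
            tallies[act] = tallies.get(act, 0.0) + delegates * intensity
    return max(tallies, key=tallies.get), tallies

choice, tallies = parliament_choice(credences, preferences)
print(choice, tallies)
# act_2 wins (tallies roughly {'act_1': 65.0, 'act_2': 89.0}) even though the
# most probable theory mildly prefers act_1, because theory_B cares far more,
# which mirrors the point about intense minority theories getting their way.
```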

SPENCER: Nick, great to speak with you. Thanks for coming on.

NICK: Good to talk to you, Spencer.

[outro]

JOSH: A listener asks: "What types of meditation have you tried? And which ones have seemed most impactful to you?"

SPENCER: I think I've tried over 30 types of meditation. It doesn't mean I've gone deep in them. I'm certainly not a meditation expert, but I've done a lot of experimenting. One of my favorite types of meditation is where I try to notice a good feeling in my body and mind, and then I try to let it grow. This is kind of related to what people sometimes call Jhana meditation, and I like to just do it sometimes in the morning, for 30 seconds, just let this good feeling grow. I find that really nice. I've also done a bunch of, for example, meditation where you focus on your breath. I did that. For about a year, I did that almost every morning, and I tracked some variables around that. And I found that interesting, and I found that it made me more aware of my state changes, like when my emotion would shift, or things like that. So I think it helped in a kind of introspective way, but I didn't really notice a lot of other benefits, other than maybe making me feel calmer. I've also tried a lot of wacky meditations, meditations involving visualizations, meditations involving changing sensations in my body. That was actually how I first got interested in meditation. I realized one day — this is before I knew anything about meditation — that I could make a feeling of pins and needles in my arm if I focused on it, and it would get really, really convincing to the point where I thought, "Well, maybe my arm just has pins and needles." And then I would move my arm, and the feeling would go away suddenly. That was really surprising and interesting to me, that I could do that with my mind. Then I started thinking, "What else can I do with my mind, interior in my body, and affect the way that I perceive things?" and started exploring meditation through that.
